OpenAI's CEO Urges US Authorities to Establish a New AI Regulatory Agency

On May 4, US Vice President Kamala Harris invited experts in artificial intelligence (AI) to discuss the ethical implications of the new technology.

Following that meeting, US lawmakers asked OpenAI CEO Sam Altman to appear before the Senate for further deliberation on AI.

Senator John Kennedy of Louisiana asked those present to explain how artificial intelligence technology should be regulated. In response, Altman proposed that the US government establish a new public office dedicated to AI. He added that the US should also consider revising the AI standards currently in place.

 


Will the New Office Address Risks Associated with AI?

Altman suggested that the new agency be responsible for issuing AI licenses under conditions that guarantee regulatory compliance. He said the office should have the authority to verify that AI companies meet the necessary safety requirements.

Altman explained in his address that the proposed office would be responsible for conducting independent audits of AI-related initiatives. He also affirmed his commitment to his current role and asked that the new body be granted the authority needed to carry out that oversight.

Questions about how the new AI agency should itself be regulated, and by whom, prompted heated debate among the attendees. Lawmakers indicated they would exercise control over the new AI office only under certain conditions.

Professor Gary Marcus of New York University argued that AI should be regulated much as the Food and Drug Administration (FDA) regulates the medical field. He proposed that regulators review the safety of new technology in line with FDA-style guidelines, which would require a regulatory review before any product is introduced.

 

Evaluation of Sam Altman's Proposal

Marcus argued that any project intended to serve more than 100 million people should be required to seek regulatory approval. He went on to describe the functions and responsibilities the proposed office should take on.

He stated that the new office's regulators should closely monitor trends in the AI industry and conduct both pre-release and post-release reviews of projects to determine whether AI-related modifications are required.

Christina Montgomery, IBM's Chief Privacy and Trust Officer, argued that AI needs to be transparent and explainable. Montgomery stated that regulators must evaluate the risks AI poses.

 


In addition, she asked regulators to assess the impact of artificial intelligence and to examine how companies can be more transparent in carrying out their responsibilities. Montgomery also urged regulators to help companies vet AI technology before its release.

Montgomery argued in her statement that establishing a new independent agency would delay the regulations needed to address AI risks. She noted that regulatory agencies capable of overseeing AI already exist and operate under their current mandates.

Montgomery went on to praise the efforts of existing regulatory agencies to monitor advances in AI. However, she pointed to the difficulties those agencies face, such as limited resources and a lack of full control over the advanced technology.

Beyond the US government's efforts to address the risks posed by AI advancements, other regulatory bodies around the world are attempting to build a comprehensive understanding of AI technology. The European Union advanced an AI bill in 2022 intended to ensure that AI-related projects undergo rigorous testing before launch.

 

Global AI Regulation

The EU adopted new rules for artificial intelligence to ensure that the technology complies with ethical standards and contributes to the well-being of the European community. Citing security concerns, Italian regulators imposed restrictions on OpenAI's ChatGPT. Other nations, including North Korea, Iran, Syria, and China, have gone further and prohibited ChatGPT within their borders.

Reportedly, the Italian regulatory action compelled OpenAI to modify several ChatGPT features. The company's technical team added new privacy settings that allow users to disable their chat history.

 


OpenAI initially trained the chatbot using customers' existing data. The modifications now make it possible for users to deny the chatbot access to sensitive information.

