OpenAI, Microsoft, Google, and Others Enter AI Self-Regulation

Outpacing policymakers in Washington, the leading AI companies in the U.S. have opted for an industry-led body to develop safety standards. Lawmakers have yet to decide whether artificial intelligence should be regulated at all, and amid this uncertainty, the sector's leading companies have chosen to set their own rules.

U.S. AI Companies to Have Their Own Set of Regulations

OpenAI, Microsoft, Google, and Anthropic introduced the Frontier Model Forum on July 26, 2023. The group says the body will advance artificial intelligence safety research and technical evaluations for upcoming generations of models. The companies also suggest that future models will be more powerful than the current large language models powering the likes of Bing and Bard.

The body would also serve as an intermediary between the companies and governments, communicating the potential risks of the technology. Anna Makanju, OpenAI's vice president of global affairs, says that companies working on the most powerful models should join hands in developing adaptable safety practices, ensuring that AI tools remain beneficial in the long run.

The Frontier Model Forum builds on the voluntary commitments these companies made to the White House. From July 28, 2023, they will allow their systems to undergo independent testing, and they have pledged to alert users when an image or video has been artificially generated.

On July 21, 2023, President Joe Biden announced that seven companies, including Amazon, Meta, Google, and Microsoft, had agreed to standards set by the White House for artificial intelligence. However, these commitments remain voluntary, and formal regulations for artificial intelligence are still in the making.

Implications of this Initiative

The Frontier Model Forum shows the companies racing ahead of the United States government to establish a framework for artificial intelligence, while lawmakers are still exploring the threats posed by the emerging technology. Experts argue, however, that the initiative comes with its own risks.

Policymakers hold the view that Silicon Valley cannot be entrusted with crafting its own rules, mainly because of repeated incidents over the years involving privacy violations, child endangerment, and threats to democracies worldwide.

During a July 25, 2023 hearing, Sen. Josh Hawley (R-Mo.) argued that the companies now building artificial intelligence are the same ones that circumvented regulatory oversight when social media giants clashed with regulators. He named Google, Meta, and Microsoft in particular, describing them as both AI developers and investors.

Previous Attempts of Self-Regulation by Silicon Valley

Silicon Valley has attempted self-regulation on multiple occasions. Social media giants came together to launch the Global Internet Forum to Counter Terrorism (GIFCT), a body that shared information with governments about terror-related content posted on their platforms.

Meta also launched its Oversight Board, an independent body of experts that rules on controversial content-moderation cases. The board famously upheld the suspension of former President Donald Trump after the attack on the U.S. Capitol on January 6, 2021.

Artificial intelligence, for all its revolutionary potential, carries underlying threats. Jurisdictions and experts are working on frameworks to regulate the technology, safeguard the public, and allow people to enjoy its benefits.
