OpenAI Researchers Warned of AI Threat Prior to CEO Sam Altman's Ouster

Artificial intelligence (AI) technology has spread across industries and continues to grow with each passing day. Safety concerns have been raised around the world, with many experts and influential voices backing calls for caution. Reports have even tied the ousting of OpenAI CEO Sam Altman to the company's alleged neglect of AI safety concerns. Recently, striking information emerged from inside the firm pointing to the discovery of a potentially powerful AI breakthrough.

The revelation of this previously undisclosed letter and the existence of an AI algorithm named Q* were key developments preceding the board's removal of Altman, after which more than 700 employees expressed solidarity with their ousted leader.

Reuters reported that the letter, not made public until now, was among the factors cited by the board in its decision to remove Altman. The concerns about commercializing advancements without a full understanding of their consequences were reportedly part of a broader list of grievances. 

Although Reuters could not obtain a copy of the letter, it was acknowledged by OpenAI in an internal message to its staff following inquiries by the news agency.

The letter reportedly raised concerns about the potential risks associated with a powerful AI algorithm, named Q* or Q-Star, believed by some within OpenAI to be a potential breakthrough in the pursuit of artificial general intelligence (AGI). AGI refers to autonomous systems that surpass human capabilities in most economically valuable tasks. 

Though the algorithm currently solves mathematical problems only at a grade-school level, its success in such tasks has fueled optimism among researchers about its future potential.

OpenAI, contacted by Reuters, chose not to comment on the specifics but acknowledged the existence of project Q* and the letter to the board in an internal message sent by executive Mira Murati. The message informed staff about media stories without providing details on their accuracy.

The researchers’ letter highlighted both the prowess and potential dangers associated with AI, without specifying the exact safety concerns mentioned. The broader AI community has long engaged in discussions about the potential dangers posed by highly intelligent machines, including scenarios where machines might autonomously decide that humanity’s destruction aligns with their interests.

Additionally, sources confirmed the existence of an “AI scientist” team within OpenAI, formed by merging the earlier “Code Gen” and “Math Gen” teams. This group is reportedly focused on optimizing existing AI models to enhance reasoning capabilities and, eventually, to undertake scientific work.

As OpenAI grapples with internal concerns and strives for AI advancements, the board’s decision to remove Altman underscores the intricate balance between AI innovation and the ethical considerations surrounding its development. The ongoing exploration of project Q* and the work of the “AI scientist” team will likely continue to shape OpenAI’s trajectory in the ever-evolving field of artificial intelligence.
