OpenAI was reportedly working on a model that caused concern among staff.
Reports suggest that safety issues related to the new model Q* were raised with the board prior to the removal of CEO Sam Altman.
Before the dismissal of its chief executive, Sam Altman, OpenAI was purportedly developing an advanced system, referred to as Q*, that triggered safety concerns among staff. According to Reuters, some OpenAI researchers raised alarms with the board before Altman’s departure, cautioning that the model could pose a threat to humanity.
The artificial intelligence model, pronounced “Q-Star,” reportedly exhibited the ability to solve basic math problems it had not encountered before, as detailed by the tech news site The Information. The rapid pace of the system’s development caused unease among some safety researchers; the capability to solve unfamiliar math problems is viewed as a significant advance in the field of AI.
Altman was dismissed by the board last Friday but, after days of turmoil at the San Francisco company, was reinstated on Tuesday night. The reversal came after almost all of OpenAI’s 750 staff threatened to resign unless he returned, and after Altman received the backing of Microsoft, the company’s major investor.
Many experts are concerned that organizations such as OpenAI are moving too fast towards artificial general intelligence (AGI), a system that can perform a wide range of tasks at or beyond human levels of intelligence, and worry that humans could lose control of such technology.
According to Andrew Rogoyski from the Institute for People-Centred AI at the University of Surrey, a significant advancement would be a model’s ability to solve math problems not present in its training set.
“Many generative AI systems recycle or reshape existing knowledge, including text, images, or mathematical solutions already stored in libraries. If an AI can address a problem without encountering the solution in its extensive training sets, it signifies a noteworthy breakthrough, even if the math involved is relatively straightforward. The prospect of solving intricate, unseen mathematical problems would be even more thrilling,” stated Rogoyski.
Speaking last Thursday, the day before his unexpected dismissal, Altman hinted at another breakthrough by the company behind ChatGPT. During the Asia-Pacific Economic Cooperation (Apec) summit, he said: “Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime.”
Initially established as a nonprofit with a board overseeing a commercial subsidiary led by Altman, OpenAI now has Microsoft as its principal investor in the for-profit venture. As part of the preliminary agreement for Altman’s reinstatement, OpenAI will have a new board chaired by Bret Taylor, a former co-CEO of software company Salesforce.
The developer of ChatGPT says it was founded to create “safe and beneficial artificial general intelligence for the benefit of humanity,” and that the for-profit entity is “legally bound to pursue the nonprofit’s mission.”
Amid speculation that Altman had been dismissed for jeopardizing the company’s core mission of safety, Emmett Shear, his temporary successor as interim chief executive, said this week that the board had not removed Altman over any specific disagreement on safety.