OpenAI established a super-intelligent AI control group but then neglected it

13:36 19/05/2024

2-minute read

A research team at OpenAI whose mission was to find ways to control “super-intelligent” AI systems did not receive the computing resources it needed to do its work. This, along with other disagreements, led several key team members to quit, including team leader Jan Leike.


Leike believes that OpenAI should focus more on ensuring the safety of the next generations of AI. He is concerned that OpenAI is prioritizing new product launches over addressing important safety issues.

OpenAI did not immediately respond to questions about the resources that were promised and allocated to the research team.

The research team was founded last July with the ambitious goal of solving the core technical challenges of controlling super-intelligent AI within four years. However, as new product launches took up more of OpenAI leadership’s attention, the team had to fight for the resources it needed.

Although the group published several safety studies and funded millions of dollars in grants to outside researchers, they felt sidelined.

Leike warns that building machines that are smarter than humans is a potentially risky endeavor. He believes that over the years, safety culture has been overlooked in favor of flashy products.

Disagreements between OpenAI co-founder Ilya Sutskever and CEO Sam Altman further complicate the situation. Sutskever once asked for Altman to be fired because he was concerned that the CEO had not been honest with the board. However, under pressure from investors and employees, Altman was reinstated, and Sutskever did not return to work.

After Leike quit, Altman admitted there was still much work to be done and promised to address it, but specific commitments remain unclear.

Rather than maintaining a separate team, OpenAI will now distribute safety researchers across different departments, raising concerns that the company’s AI development will not be as safe as before.
