
Naming AI Companies: Why “Safe Superintelligence” is Misleading

[Image: monitor displaying error text. Photo by Pixabay on Pexels.com]

Pro tip for naming an AI company: avoid obvious oxymorons. That advice would have come in handy for Ilya Sutskever, OpenAI co-founder and former Chief Scientist, who is now launching an AI firm called Safe Superintelligence.

Sutskever, while not as famous as Sam Altman, is a pivotal figure. Last year, he reportedly "solved" superintelligence, a breakthrough that caused upheaval at OpenAI and led to Altman's brief ouster.

Altman's return was accompanied by reports of panic over the breakthrough: the OpenAI board and Sutskever had reportedly considered halting progress, and Altman's likely disagreement was what led to his temporary departure.

True “Safe Superintelligence” is a myth. However, responsible superintelligence is possible…

In May, Sutskever announced his departure from OpenAI, shortly after the unveiling of the eerily capable GPT-4o. Altman expressed regret over the exit, while Sutskever hinted, to the surprise of many, at a project personally significant to him.

The new company, revealed on X (formerly Twitter) and a minimalist website, is that passion project, and it reads as a response to whatever unsettled him at OpenAI. Sutskever states that the mission is safe superintelligence, and that the team, investors, and business model are all aligned to achieve SSI.

Balancing Superintelligence and Safety

The company aims to pursue superintelligence and safety together, with its focus on the former. Sutskever, who does not strike me as a pop-culture enthusiast, may be missing the irony in the name. If his team has seen the 2020 comedy "Superintelligence," about an all-powerful AI that studies an average woman to decide humanity's fate, they might appreciate how loaded the word has become.

Despite its middling 5.4 rating, the movie is hardly alone in imagining a human-versus-AI showdown. The term "superintelligence" has been in circulation for over a decade, evoking both promise and dread. Nick Bostrom's 2014 book, "Superintelligence: Paths, Dangers, Strategies," crystallized those fears, asking whether AI will save us or destroy us.

Media depictions of superintelligent AI almost invariably cast it as a threat, and the idea of machines surpassing human intelligence fuels that anxiety.

Addressing AI Fears

Conversations about AI are now everywhere: at work, at home, at social events, even on TV shoots. People feel both excitement and dread about where it is heading. Many fear AI surpassing human intelligence and threatening their careers, even their lives. They may not know the term "superintelligence," but they grasp the concept, and the spread of AI into everyday devices is a shared concern.

This week, the first laptops built on Microsoft's Copilot+ platform went on sale. The AI on board is narrow and task-focused, nothing close to superintelligence, yet fears of AI systems becoming self-aware persist.

Imagine a soapbox racer hurtling downhill with no brakes, steered by a driver who understands only 80% of the controls. That is the fear some people have about AI.

As a long-time AI observer, I believe such scenarios are unlikely to play out in my lifetime. Even so, my timeline keeps shrinking: I once assumed general AI would arrive when I was very old; now I suspect it could happen within 18 months.

Against that pace of development, the name of Sutskever's company seems almost comical. The brakeless soapbox race captures the moment all too well.

Striving for Responsible Superintelligence

"Safe Superintelligence" is a myth; responsible superintelligence is achievable. If I were on Sutskever's team, I would have pushed for "Responsible Superintelligence" as the name. AI companies can promise to act responsibly and humanely, and that may yield a safer superintelligence, but absolute safety is an illusion.
