The Sam Altman-led OpenAI is frequently in the spotlight, for both positive and negative reasons. Altman was dismissed from the company last year but reinstated shortly after. More recently, the hot AI startup faced controversy over the alleged use of actress Scarlett Johansson’s voice, without her consent, in the new conversational mode of GPT-4o.
Amid this ongoing controversy, OpenAI is once again making waves on the internet for less-than-favorable reasons. Former OpenAI board members have revealed what they say were the true reasons behind Altman’s previous dismissal, and why, in their view, it should have stood.
From Non-Profit to For-Profit?
OpenAI was initially established as a non-profit organization with the goal of making Artificial General Intelligence (AGI) accessible and beneficial to humanity. While it did eventually incorporate a profit-making unit to secure funding, its non-profit ethos remained central.
However, under Altman’s leadership, the profit-making aspect has reportedly taken precedence. This shift is highlighted by former board members Helen Toner and Tasha McCauley. Toner’s recent exclusive interview on the TED AI Show is currently circulating online.
Toner says,
“When ChatGPT came out November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter. Sam didn’t inform the board that he owned the OpenAI startup fund, even though he was constantly claiming to be an independent board member with no financial interest in the company.”
This revelation is staggering, particularly considering that ChatGPT was essentially the catalyst for the current AI frenzy. Keeping such crucial information hidden from the board is unquestionably shady.
She also reveals that Altman provided the board with “inaccurate information” on “multiple occasions” about the safety measures implemented in the company’s AI systems, leaving the board unable to gauge how effective those measures really were. You can listen to the full podcast here.
No Safety for the AI Trigger
It’s crucial for companies to prioritize responsible AI development, especially given the potential for things to go “horribly wrong.” Ironically, this sentiment echoes Altman’s own words.
Remarkably, this aligns with Elon Musk’s perspective. Not long ago, Musk sued OpenAI, alleging that the company had strayed from its original mission and become profit-driven.
In an interview with The Economist, former board members expressed concerns about Sam Altman’s return leading to the departure of safety-focused talent, severely impacting OpenAI’s self-governance policies.
They also advocate for government intervention to ensure responsible AI development. In response to the controversy, OpenAI recently established a Safety and Security Committee, tasked with providing recommendations on critical safety and security decisions for all projects within 90 days.
Interestingly, Sam Altman is a member of this committee. While I’m hesitant to believe all the accusations, if they are true, it could spell serious trouble. None of us want to see a scenario like Skynet becoming a reality.
Adding to the turmoil, Jan Leike, co-lead of the Superalignment team at OpenAI, recently resigned over safety concerns and joined rival firm Anthropic. In a detailed account of events shared on X, he stated that “OpenAI must become a safety-first AGI company,” implying that it has drifted from that course.
He also stresses the urgent need to “figure out how to steer and control AI systems much smarter than us.” However, that’s not the only reason Leike departed. He added,
“Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done.”
A Toxic Exit for Employees
While Toner and other former OpenAI members have recently been disclosing shocking facts about the company, they also suggest that they “can’t reveal everything.”
Last week, a Vox report uncovered how former OpenAI employees were compelled to sign strict non-disclosure and non-disparagement agreements; violating them would mean losing all vested equity in the company, potentially amounting to millions of dollars. The agreements prohibit former employees from criticizing the company or speaking to the media. Altman claimed on X that he was unaware of this clause in OpenAI’s NDA, but that is hard to believe.
Even if we take Altman at his word, it underscores the disorganization within an organization as consequential as OpenAI, which only lends weight to the allegations against it.
It’s disheartening to see former board members, who originally supported the company’s vision, now opposing it. Whether this is related to their removal from the board upon Altman’s return is unclear, but if these allegations are true, they are quite alarming.
Is the Future of AI in the Wrong Hands?
Numerous movies and TV shows have depicted the potential dangers of AI. And it’s not just OpenAI aiming for AGI; industry giants like Google DeepMind and Microsoft are integrating AI into nearly all their products and services. This year’s Google I/O humorously revealed that AI was mentioned over 120 times throughout the event.
On-device AI represents the next major advancement, and we’re already witnessing early implementations, such as the Recall feature on next-gen Copilot Plus PCs. However, Recall has raised significant privacy concerns, as it continuously takes screenshots of your screen to create a local vector index.
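To make that privacy concern concrete, here’s a minimal sketch of what such a screenshot-to-index pipeline could look like. This is purely illustrative and not Microsoft’s implementation; the extract_text and embed helpers are hypothetical placeholders standing in for on-device OCR and embedding models.

```python
# Illustrative Recall-style pipeline: capture a screenshot, extract its text,
# and store an embedding in a local index for later semantic search.
# NOT Microsoft's implementation; extract_text() and embed() are hypothetical
# placeholders for on-device OCR and embedding models.
import numpy as np
from PIL import ImageGrab  # pip install pillow numpy


def extract_text(image) -> str:
    """Hypothetical OCR step; a real pipeline would use an on-device OCR model."""
    raise NotImplementedError("plug in an OCR model here")


def embed(text: str) -> np.ndarray:
    """Hypothetical embedding step; should return a unit-norm vector."""
    raise NotImplementedError("plug in an embedding model here")


class LocalVectorIndex:
    """Keeps (embedding, snippet) pairs on-device; nothing leaves the machine."""

    def __init__(self):
        self.vectors: list[np.ndarray] = []
        self.snippets: list[str] = []

    def add_screenshot(self) -> None:
        image = ImageGrab.grab()          # capture the current screen
        text = extract_text(image)        # OCR the screenshot
        self.vectors.append(embed(text))  # index the extracted text locally
        self.snippets.append(text)

    def search(self, query: str, k: int = 5) -> list[str]:
        # Cosine similarity reduces to a dot product for unit-norm vectors.
        scores = np.stack(self.vectors) @ embed(query)
        top = np.argsort(scores)[::-1][:k]
        return [self.snippets[i] for i in top]
```

Even this toy version makes the trade-off obvious: everything you see on screen gets captured and indexed, so the privacy of the whole system hinges on that index staying local and secure.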
Simply put, AI is a permanent part of our world, whether we embrace it or not. What’s crucial is the responsible development and use of AI, ensuring it remains a tool for our benefit rather than a force that controls us. So, are we entrusting the future of AI to the right hands, particularly as AI labs push boundaries to give it more power and access to data, and as it becomes increasingly multimodal?
How do these recent revelations affect you? Do they keep you up at night like they do for me? Share your thoughts in the comments below.