Turmoil at OpenAI raises concerns about genAI’s future, rapid advance

The fallout over the firing of the co-founder of OpenAI continued this week, raising concerns that the uncertain future of the company could more broadly affect the future of generative AI (genAI) technology.

Hundreds of OpenAI employees — nearly the company’s entire staff — signed a letter Monday threatening to quit and go work for Microsoft unless everyone on OpenAI’s board of directors resigns and reappoints co-founder Sam Altman as CEO, according to a memo circulated on social media.

“Your actions have made it obvious that you are incapable of overseeing OpenAI,” the employees wrote. “We are unable to work for or with people that lack competence, judgment, and care for our mission and employees. Microsoft has assured us that there are positions for all OpenAI employees.”

Those jobs would be with Microsoft’s new advanced AI lab now being led by Altman and Greg Brockman, OpenAI’s former president and board member who resigned over Altman’s firing.

If OpenAI were to implode over the internal shakeup, which has still not been fully explained, industry experts said it would not markedly affect AI development. “The cat’s out of the bag, people know what these models can do and the recipes for doing it,” said Braden Hancock, head of technology and co-founder of Snorkel AI, a startup that helps companies develop large language models (LLMs) for domain-specific use.

“OpenAI nailed the product marketing and delivery, but the core technology is being pursued simultaneously by at least a dozen well-funded and well-staffed major tech companies, not to mention research labs and hundreds of AI startups,” Hancock said. “They had a first-mover advantage, but generative AI is here to stay, regardless of who’s at the front of the marathon at any point in time.”

For companies now deploying or considering genAI platforms, Hancock’s advice is to build their AI strategies responsibly, which would include not overly relying on any single provider. “Just as being multi-cloud has been an essential part of risk management for enterprises for years, being multi-LLM moving forward should be as well,” he said.

The AI universe is a wild west show right now, according to Jack Gold, principal analyst with J. Gold Associates. And with the OpenAI leadership change, it just got wilder.

“You have the two creators, driving founders and leading forces of OpenAI now being hired by Microsoft, which today is an investor,” Gold said. “But in the future, Microsoft will become a direct competitor. Having them run the advanced AI lab at Microsoft gives them the ability to re-create and surpass what OpenAI has done. Microsoft has a massive amount of resources it can apply.”

Besides Brockman, OpenAI’s board consists of four members: Adam D’Angelo, the current CEO of Quora; Tasha McCauley, an adjunct senior management scientist at Rand Corporation; Helen Toner, director of strategy for Georgetown’s Center for Security and Emerging Technology; and Ilya Sutskever, one of three company co-founders and its chief scientist.

Sutskever also signed the employee letter threatening a move to Microsoft if the board members do not resign. He tweeted: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

Avivah Litan, Gartner distinguished vice president analyst, said the conflict within OpenAI illustrates the need for global regulation for safe and secure AI. “Our future safety shouldn’t depend on the capricious whims of individuals leading AI companies towards artificial general intelligence,” said Litan, referring to the march toward “the singularity” — when AI will no longer need human control.

President Joe Biden’s recent executive order, which put in place some guardrails around AI for federal agencies, represents a good start toward meaningful, substantive regulation, but it needs to go further, Litan said. “The OpenAI turmoil should serve as a wake-up call for the urgent need for action and leadership,” she said.

Cliff Jurkiewicz, vice president of Global Strategy at AI-enabled hiring service Phenom, said that unlike disruptive technologies before it, ChatGPT and other genAI tools are enabling innovation to happen at a significantly faster pace. He believes what’s playing out publicly is OpenAI leadership’s struggle to keep up with that faster pace while maintaining stability and confidence in its decision-making.

OpenAI’s board, he said, took the “fail fast approach of innovation and applied it to operating a company.”

“This isn’t innovation. This is chaos,” Jurkiewicz said. “The monetization of the technology was not aligned with the board’s mission and values. We live in a capitalist world. Organizations can be profitable and ethical at the same time. Publicly demonstrating a high ethical standard when it comes to the use of artificial intelligence will be the new measurement of a trusted organization — one that puts humans first.”

As a startup, OpenAI relies almost entirely on venture capital (VC) investors, and VCs, Gold noted, are often impatient for a quick return on their investments. That, he said, may have led to Altman’s firing.

In a bizarre twist, Altman was among more than 33,700 people to sign an open letter, along with notable tech luminaries like Apple co-founder Steve Wozniak, calling for a pause in the development of OpenAI’s GPT LLM. (LLMs are the algorithmic basis for chatbots like OpenAI’s ChatGPT.)

Industry experts speculated that the next iteration of the LLM, GPT-5, could achieve self-realization and open the door to the unknown in AI.

“While there would certainly be [AI] startups that suffer upfront, they are not the only game in town,” said Luis Ceze, CEO of AI model deployment platform OctoML and a University of Washington professor and VC at Madrona Ventures. “For example, open source today offers a wide variety of models for companies to essentially diversify. By doing so, these startups can quickly pivot and minimize risk.”

Ceze said there’s potentially a “major upside” to the OpenAI shakeup, in that many open source models already outperform GPT-4 in terms of price-performance and speed; they just don’t yet have the same recognition.

Copyright © 2023 IDG Communications, Inc.
