
The fear and hype around AI is overblown


Two reputable news organizations — Reuters and The Information — recently reported sources claiming that recent drama around OpenAI’s leadership was based in part on a massive technological breakthrough at the company.

That breakthrough is something called Q* (pronounced cue-star), which is claimed to be able to do grade-school-level math and to use that mathematical reasoning to improve how it chooses responses.

Here’s everything you need to know about Q*, and why it’s nothing to freak out about.

The problem: AI can’t think

The LLM-based generative AI (genAI) revolution we’ve all been obsessing over this year is based on what is essentially a word- or number-prediction algorithm. It’s basically Gmail’s “Smart Compose” feature on steroids.

When you interact with a genAI chatbot, such as ChatGPT, it takes your input and responds based on prediction. It predicts the first word will be X, then the second word will be Y and the third word will be Z, all based on its training on massive amounts of data. But these chatbots don’t know what the words mean or what concepts they represent. They just predict the next words, within the confines of human-generated parameters.
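
To make that concrete, here’s a toy sketch in Python of what greedy next-word prediction looks like in principle. (The little lookup table stands in for a real model’s learned probabilities; nothing here reflects OpenAI’s actual code.)

    # Toy illustration of next-word prediction, not OpenAI's implementation.
    # The "model" is just a lookup table mapping the last two words to the
    # probabilities of possible next words.
    toy_model = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
        ("cat", "sat"): {"on": 0.8, "down": 0.2},
        ("sat", "on"): {"the": 0.9, "a": 0.1},
    }

    def predict_next(context):
        # Pick the most probable next word given the last two words.
        probs = toy_model.get(tuple(context[-2:]), {})
        return max(probs, key=probs.get) if probs else None

    tokens = ["the", "cat"]
    for _ in range(3):
        next_word = predict_next(tokens)
        if next_word is None:
            break
        tokens.append(next_word)  # no understanding of cats, only likelihoods

    print(" ".join(tokens))  # -> "the cat sat on the"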

That’s why artificial intelligence can be artificially stupid.

In May, a lawyer named Steven A. Schwartz used ChatGPT to write a legal brief for a case in Federal District Court. The brief cited cases that never existed. ChatGPT just made them up because LLMs don’t know or care about reality, only likely word order.

In September, the Microsoft-owned news site MSN published an LLM-written obituary for former NBA player Brandon Hunter. The headline read: “Brandon Hunter useless at 42.” The article claimed Hunter had “handed away at the age of 42” and that during his two-season career, he played “67 video games.”

GenAI can’t reason. It knows only that it’s possible to replace “dead” with “useless,” “passed” with “handed” and “games” with “video games.” But it’s too dumb to know that these substitutions are nonsensical in a basketball player’s obituary.

The Q* solution: AI that can think

Although no actual facts are publicly known about Q*, the emerging consensus in AI circles is that the technology is being developed by a team led by OpenAI’s chief scientist, Ilya Sutskever, and that it combines the AI techniques Q-learning and A* search (hence the name Q*).

(Q-learning is an AI-training technique that rewards the model for making the correct “decision” at each step in formulating a response. A* is a classic algorithm for searching a graph to find the lowest-cost path between nodes. Neither technique is new or unique to OpenAI.)
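
For the curious, here’s roughly what each technique looks like in its textbook form, sketched in Python. It illustrates the general ideas only and says nothing about how OpenAI might combine them.

    import heapq
    import itertools

    # Q-learning, in its textbook form: nudge the value of a (state, action)
    # pair toward the observed reward plus the discounted value of the best
    # action available in the next state.
    def q_update(Q, state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.9):
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    # A* search, in its textbook form: repeatedly expand the node with the
    # lowest cost-so-far plus heuristic estimate until the goal is reached.
    def a_star(start, goal, neighbors, heuristic):
        tie = itertools.count()  # tie-breaker so the heap never compares nodes
        frontier = [(heuristic(start), next(tie), 0, start, [start])]
        visited = set()
        while frontier:
            _, _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, step_cost in neighbors(node):
                heapq.heappush(frontier, (cost + step_cost + heuristic(nxt),
                                          next(tie), cost + step_cost,
                                          nxt, path + [nxt]))
        return None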

The idea is that Q* could enhance ChatGPT by applying something like reason or mathematical logic — i.e., “thinking” — to arrive at better results. And, the hype goes, a ChatGPT that can think approaches artificial general intelligence (AGI).

The AGI goal, which OpenAI is clearly striving for, would be an AI tool that can think and reason like a human — or convincingly pretend to. It could also be better at grappling with abstract concepts. Some also say that Q* should be able to come up with original ideas, rather than just spewing the consensus of its dataset.

The rumored Q* model would also excel at math itself, making it a better tool for developers.

On the downside, the doom-and-gloom set even suggests that Q* represents a threat to humanity — or, at least, to our jobs.

But here’s where the hype goes off the rails.

Not so fast: The breakneck pace of AI change is an illusion

Georgia Tech computer science professor Mark Riedl posted on the X social network that it’s plausible Q* is simply research at OpenAI aimed at replacing “outcome supervision” with “process supervision,” and that when OpenAI published general information about this idea in May, “no one lost their minds over this, nor should they.”

The idea of replacing word or character prediction with some kind of supervised planning of the process of arriving at the result is a near-universal direction in labs working on LLM-based genAI. It’s not unique to OpenAI. And it’s not a world-changing “breakthrough.”
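
To see the distinction, consider this toy Python sketch: outcome supervision grades only the final answer, while process supervision grades every intermediate step along the way. (The step_checker here is a hypothetical stand-in for however the steps actually get scored, whether by human labelers or a learned reward model.)

    # Toy illustration of the distinction, not OpenAI's method.

    def outcome_reward(final_answer, correct_answer):
        # Outcome supervision: a single reward for the whole response.
        return 1.0 if final_answer == correct_answer else 0.0

    def process_reward(steps, step_checker):
        # Process supervision: one reward per intermediate reasoning step.
        # step_checker is a hypothetical verifier supplied by the trainer.
        return [1.0 if step_checker(step) else 0.0 for step in steps]

    # Example: grading the steps used to evaluate 2 + 3 * 4.
    steps = ["3 * 4 = 12", "2 + 12 = 14"]
    arithmetic_ok = lambda s: eval(s.split("=")[0]) == float(s.split("=")[1])

    print(outcome_reward("14", "14"))            # 1.0 (only the end result counts)
    print(process_reward(steps, arithmetic_ok))  # [1.0, 1.0] (each step counts)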

In fact, AI doesn’t advance with individual companies or labs making massive breakthroughs that change everything. It only feels that way because of OpenAI.

Although OpenAI was founded in 2015, its culture-shifting ChatGPT chatbot was released only about a year ago. Since then, the tech world has been turned on its head. Thousands of LLM-based apps have emerged. Tech funding turned hard toward AI startups. And it feels like this brand of AI has already changed everything.

In reality, however, OpenAI’s innovation wasn’t so much in AI itself as in opening up genAI tools to the public and to developers. The company’s ChatGPT services (and their integration by Microsoft into Bing Search) caught hundreds of other AI labs in companies and universities off-guard, as they had been proceeding cautiously for decades. ChatGPT sent the rest of the industry scrambling to push its own research out to the public in the form of usable tools and open APIs.

In other words, the real transition we’ve experienced in the past year has been about the transformation of AI research from private to public. The public is reeling, but not because AI technology itself suddenly accelerated. Nor is it likely to unnaturally accelerate again through some “breakthrough” by OpenAI.

Actually, the opposite is true. Look at any branch of technology related to AI and you’ll notice that the more advanced it gets, the more slowly further improvements arrive.

Look at self-driving cars. I was physically present at the DARPA Grand Challenge in 2004. In that contest, the Pentagon offered a million dollars to any organization with an autonomous car capable of finishing a 150-mile route in the desert. Nobody finished. But the next year, in the second DARPA Grand Challenge, Stanford’s entry finished the route. Everyone was convinced that human-driven cars would be obsolete by 2015.

Fast forward to 2023 and activists are disabling autonomous cars by placing traffic cones on their hoods.

The highest level of autonomy achieved so far is Level 4, and no Level 4 car is available to the public or capable of driving anywhere other than pre-defined, known routes, and then only under certain conditions of time and weather. That last 5% will likely take longer to achieve than the first 95%.

That’s how AI technologies tend to progress. But we lose sight of that because so many AI technologists, investors, boosters, and doomers are true believers, extreme optimists or pessimists with unrealistic notions of how long advancement takes. And the public finds those accelerated timelines plausible because of the radical, OpenAI-driven cultural changes we’ve experienced as a result of AI’s newfound public accessibility.

So, let’s all take a breath and relax about the overexcited predictions about how AI in general, and Q* in particular, are about to change everything everywhere all at once.

Copyright © 2023 IDG Communications, Inc.




