Cloud

Generative AI is off to a rough start


It’s been a rough month for generative AI (GenAI). First, AWS launched Amazon Q, its answer to Microsoft’s Copilot, only to have its own employees warn of “severe hallucinations” and data leaks. Then Google launched Gemini, its answer to ChatGPT, to much fanfare and an incredible demo, only to acknowledge after the fact that the demo was fake. Oh, and Meta released new open source tools for AI safety (hurray!) yet somehow failed to acknowledge the most egregiously unsafe aspect of GenAI tools: their susceptibility to prompt injection attacks.

I could go on, but what would be the point? These and other failures don’t suggest that GenAI is vacuous or a hype-plagued dumpster fire. They’re signs that we as an industry have allowed the promise of AI to overshadow current reality. That reality is pretty darn good. We don’t need to keep overselling it.

What we might need, despite its imperfect fit for GenAI, is open source.

Getting ahead of ourselves

I recently wrote that AWS’ release of Amazon Q is a watershed moment for the company: an opportunity to close the gap or, in some cases, outpace competitors. Mission accomplished.

Almost. One big problem, among several others that Duckbill Chief Economist Corey Quinn highlights, is that although AWS felt compelled to position Q as significantly more secure than competitors like ChatGPT, it’s not. I don’t know that it’s worse, but it doesn’t help AWS’ cause to position itself as better and then not actually be better. Quinn argues this comes from AWS going after the application space, an area in which it hasn’t traditionally demonstrated strength: “As soon as AWS attempts to move up the stack into the application space, the wheels fall off in major ways. It requires a competency that AWS does not have and has not built up since its inception.”

Perhaps. But even if we accept that as true, the larger issue is that there’s so much pressure to deliver on the hype of AI that great companies like AWS may feel compelled to take shortcuts to get there (or to appear to get there).

The same seems to be true of Google. The company has spent years doing impressive work with AI yet still felt compelled to take shortcuts with a demo. As Parmy Olson captures, “Google’s video made it look like you could show different things to Gemini Ultra in real time and talk to it. You can’t.” Grady Booch says, “That demo was incredibly edited to suggest that Gemini is far more capable than it is.”

Why would these companies pretend their capabilities are greater than they actually are? The reasons aren’t hard to discern. The pressure to position oneself as the future of AI is tremendous. And it’s not just AWS and Google. Listen in on recent earnings calls for public companies; every executive can’t seem to say “AI” enough. The AI gold rush is on, and everyone wants to stake their claim.

GenAI is still so nascent in its capabilities. For all the breathless reporting of this or that new model and all that it offers, the reality always dramatically lags behind the hype. Instead of fixing GenAI’s most pressing problem—prompt injection—we’re exacerbating the problem by inducing more enterprises to use fundamentally non-secure software.

We may need open source to help.

Open source to the rescue

I don’t mean that if we just open source everything, AI will magically be perfect. That hasn’t happened for cloud or any other area of enterprise IT, so why would GenAI be any different? Not to mention that as much as we like to throw around the term open source in the context of AI, it’s not even clear what we mean, as I’ve written. It’s likely that the industry, as Meta has done with its Purple Llama initiative, will focus on comparatively unimportant challenges. Simon Willison laments, “The lack of acknowledgment of the threat of prompt injection attacks in this new Purple Llama initiative from Meta AI Is baffling to me.”

In addition, systems like Gemini are multifaceted and complex. “There must be lots of engineering tricks and hard-coded rules, and we would never know how many models are inside the systems before open sourcing,” notes Professor Xin Eric Wang. This complexity means “open sourcing” a large language model or GenAI system currently raises as many questions as it answers. The Open Source Initiative (OSI) is grappling with these issues. OSI Executive Director Stefano Maffulli stresses: “What does it mean for a developer to have access to a model, and what are the rights that should be exercis[ed], and what do you need in order to have the possibility to modify [and redistribute] that model?”

It’s all unclear.

What is clear is that the efforts to make open source relevant for GenAI are incredibly important. We need more transparency and less black-box opacity. Microsoft, AWS, Google, and others will still feel compelled to position themselves as leaders, but open source separates fact from fiction. Code doesn’t lie.

Let’s rewind those Q, Copilot, and Gemini announcements, but imagine if instead of just private previews and demos, there was code. Think about how that would change the dynamic. Think about the humility it would compel. Given that by far the most common early adopters of GenAI within the enterprise are developers, as an O’Reilly survey uncovered, companies should speak their language: code. Most developers never look at the code for an open source project, but making it available so some do is important. It earns trust in ways that overzealous announcements don’t.

Open source isn’t a perfect answer to the troubles GenAI vendors are having. But the aspiration to greater transparency, which open source fosters, is desperately needed.

Copyright © 2023 IDG Communications, Inc.





READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.