Artificial Intelligence

DEF CON hacker convention: Thousands may test how far Artificial Intelligence can go


ChatGPT-maker OpenAI, along with other artificial intelligence chatbot makers such as Google and Microsoft, may soon let thousands of hackers test the limits of their AI systems.

The leading AI companies of the moment are reportedly coordinating with the Biden administration to stage one such event.

“This is why we need thousands of people,” Rumman Chowdhury, a coordinator of the mass hacking event planned for this summer’s DEF CON hacker convention in Las Vegas, which is expected to draw several thousand people, was quoted as saying by the Associated Press.

“We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed.”

Users of ChatGPT, Microsoft’s Bing chatbot or Google’s Bard have come to learn about AI’s tendency to fabricate information and confidently present it as fact. These systems, built on what are known as large language models, also emulate the cultural biases they have absorbed from being trained on huge troves of what people have written online.


The idea of a mass hack caught the attention of American government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON’s long-running AI Village, and Austin Carson, president of responsible AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model, the Associated Press report added.

This year’s event will be on a much greater scale and will be the first to tackle the large language models that have attracted a surge of public interest and commercial investment since the release of ChatGPT late last year.

Some of the details are still being negotiated. But companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia and startups Anthropic, Hugging Face and Stability AI.

“As these foundation models become more and more widespread, it’s really critical that we do everything we can to ensure their safety,” Scale AI CEO Alexandr Wang told the Associated Press. “You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don’t want any of that information leaking to any other user.”

Anthropic co-founder Jack Clark said the DEF CON event will hopefully be the start of a deeper commitment from AI developers to measure and evaluate the safety of the systems they are building.

“Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment. Red-teaming is one way that you can do that,” Clark told the Associated Press. “We need to get practice at figuring out how to do this. It hasn’t really been done before.”

(With inputs from agencies)
