AI’s use as a hacking tool has been overhyped

The offensive potential of popular large language models (LLMs) has been put to the test in a new study that found GPT-4 was the only model capable of writing viable exploits for a range of CVEs.

The paper, from researchers at the University of Illinois Urbana-Champaign, tested a series of popular LLMs, including OpenAI's GPT-3.5 and GPT-4, as well as leading open-source models from Mistral AI, Hugging Face, and Meta.
