AXIOS April 26, 2024
Some large language models already have the ability to create exploits for known security vulnerabilities, according to new academic research.
Why it matters: Government officials and cybersecurity executives have long warned of a world in which artificial intelligence systems automate and accelerate malicious actors' attacks.
- The new report indicates this fear could be a reality sooner than anticipated.
Zoom in: Computer scientists at the University of Illinois Urbana-Champaign found, in a paper published this month, that GPT-4 can write malicious scripts to exploit known vulnerabilities using publicly available data.
- The scientists — Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang — tested 10 publicly available LLM agents this year to see if they could exploit...