AXIOS April 26, 2024
Sam Sabin

Some large language models can already create working exploits for known security vulnerabilities, according to new academic research.

Why it matters: Government officials and cybersecurity executives have long warned of a world in which artificial intelligence systems automate and speed up malicious actors’ attacks.

  • The new report indicates this fear could be a reality sooner than anticipated.

Zoom in: Computer scientists at the University of Illinois Urbana-Champaign found in a paper published this month that GPT-4 can write malicious scripts to exploit known vulnerabilities using publicly available data.

  • The scientists — Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang — tested 10 publicly available LLM agents this year to see if they could exploit known vulnerabilities.
