VentureBeat February 20, 2025
This article is part of VentureBeat’s special issue, “The cyber resilience playbook: Navigating the new era of threats.”
Enterprises run the very real risk of losing the AI arms race to adversaries who weaponize large language models (LLMs) and create fraudulent bots to automate attacks.
Trading on the trust of legitimate tools, adversaries are using generative AI to build malware that leaves no unique signature, relying instead on fileless execution that often evades detection. Gen AI is also being used extensively to run large-scale automated phishing campaigns and to automate social engineering, as attackers look to exploit human vulnerabilities at scale.
Gartner points out in its latest Magic Quadrant for Endpoint Protection...