VentureBeat February 20, 2025
Louis Columbus

This article is part of VentureBeat’s special issue, “The cyber resilience playbook: Navigating the new era of threats.” Read more from this special issue here.

Enterprises run the very real risk of losing the AI arms race to adversaries who weaponize large language models (LLMs) and create fraudulent bots to automate attacks.

Trading on the trust of legitimate tools, adversaries are using generative AI to build malware that leaves no unique signature, relying instead on fileless execution that often evades detection. Gen AI is also being used extensively to run large-scale automated phishing campaigns and to automate social engineering, as attackers look to exploit human vulnerabilities at scale.

Gartner points out in its latest Magic Quadrant for Endpoint Protection...
