VentureBeat October 31, 2024
Taryn Plumb

While many existing risks and controls can apply to generative AI, the groundbreaking technology has many nuances that require new tactics, as well.

Models are susceptible to hallucinations, or the production of inaccurate content. Other risks include the leaking of sensitive data through a model’s output, the tainting of models in ways that open the door to prompt manipulation, and biases stemming from poor training data selection or insufficiently controlled fine-tuning and training.

Ultimately, conventional cyber detection and response needs to be expanded to monitor for AI abuses — and AI should conversely be used for defensive advantage, said Phil Venables, CISO of Google Cloud.
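As a rough illustration of what that expanded monitoring could look like in practice, the sketch below scans a generative model’s output for patterns that resemble sensitive data before the response is returned to a user. The pattern set, the `scan_model_output` and `redact` functions, and the example strings are hypothetical and are not drawn from any Google Cloud tooling; a real deployment would rely on far richer detectors and feed its findings into existing detection-and-response pipelines.

```python
# Hypothetical sketch: screening model output for sensitive-data leakage
# before it reaches the user. Pattern names and rules are illustrative only.
import re
from dataclasses import dataclass

# Illustrative detectors; a production system would use richer rules and
# trained classifiers rather than a handful of regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key_like": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

@dataclass
class Finding:
    kind: str
    match: str

def scan_model_output(text: str) -> list[Finding]:
    """Return any sensitive-looking substrings found in a model response."""
    findings = []
    for kind, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append(Finding(kind, m.group(0)))
    return findings

def redact(text: str, findings: list[Finding]) -> str:
    """Replace flagged substrings so a sanitized response can still be returned."""
    for f in findings:
        text = text.replace(f.match, f"[REDACTED:{f.kind}]")
    return text

if __name__ == "__main__":
    response = "Contact jane.doe@example.com; staging key sk-abcdef1234567890ABCD."
    hits = scan_model_output(response)
    if hits:
        # In a real pipeline this event would also be logged for detection and response.
        print(redact(response, hits))
    else:
        print(response)
```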

“The secure, safe and trusted use of AI encompasses a set of techniques that many...
