Google Cloud’s security chief warns: Cyber defenses must evolve to counter AI abuses
VentureBeat October 31, 2024
While many existing risks and controls can apply to generative AI, the groundbreaking technology has many nuances that require new tactics, as well.
Models are susceptible to hallucinations, the production of inaccurate content. Other risks include the leakage of sensitive data through a model's output, model tainting that can enable prompt manipulation, and bias introduced by poor training data selection or insufficiently controlled training and fine-tuning.
Ultimately, conventional cyber detection and response must be expanded to monitor for AI abuses, and AI should in turn be used to defenders' advantage, said Phil Venables, CISO of Google Cloud.
“The secure, safe and trusted use of AI encompasses a set of techniques that many...