VentureBeat January 24, 2025
Taryn Plumb

Typically, developers focus on reducing inference time (the period between when a model receives a prompt and when it returns an answer) to deliver faster results.

But when it comes to adversarial robustness, OpenAI researchers say: Not so fast. They propose that increasing the amount of time a model has to “think” — inference time compute — can help build up defenses against adversarial attacks.

The company used its own o1-preview and o1-mini models to test this theory, launching a variety of static and adaptive attack methods — image-based manipulations, intentionally providing incorrect answers to math problems, and overwhelming models with information (“many-shot jailbreaking”). They then measured the probability of attack success based on the amount of computation...
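The core measurement described above (attack success probability as a function of inference-time compute) can be illustrated with a toy simulation. Everything here is hypothetical: the `run_attack` success model, the decay curve, and the compute budgets are invented for illustration and are not OpenAI's actual experimental setup.

```python
import random


def run_attack(compute_budget: int, rng: random.Random) -> bool:
    """Hypothetical stand-in for one adversarial attempt.

    Assumes (for illustration only) that the attacker's success
    probability decays as the model is given more "thinking" compute.
    """
    p_success = 1.0 / (1.0 + 0.5 * compute_budget)
    return rng.random() < p_success


def attack_success_rate(compute_budget: int,
                        trials: int = 10_000,
                        seed: int = 0) -> float:
    """Estimate attack success probability at a given compute budget
    by averaging over many simulated attack attempts."""
    rng = random.Random(seed)
    wins = sum(run_attack(compute_budget, rng) for _ in range(trials))
    return wins / trials


# Sweep over increasing inference-time compute budgets; under the
# assumed decay curve, the measured success rate should fall.
rates = {c: attack_success_rate(c) for c in (0, 2, 8, 32)}
```

The point of the sweep is only to show the shape of the claimed relationship: holding the attack fixed, more inference-time compute drives the measured success rate down.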
