VentureBeat January 24, 2025
Typically, developers focus on reducing inference time — the period between when an AI model receives a prompt and provides an answer — to deliver answers faster.
But when it comes to adversarial robustness, OpenAI researchers say: Not so fast. They propose that increasing the amount of time a model has to “think” — inference time compute — can help build up defenses against adversarial attacks.
The company used its own o1-preview and o1-mini models to test this theory, launching a variety of static and adaptive attack methods — image-based manipulations, intentionally providing incorrect answers to math problems, and overwhelming models with information (“many-shot jailbreaking”). They then measured the probability of attack success as a function of the amount of computation...
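The measurement described above — attack success probability versus inference-time compute — can be illustrated with a toy simulation. Everything here is an illustrative assumption, not OpenAI's actual setup: the `attack_succeeds` function and its success-probability curve are invented for the sketch, and the curve simply encodes the article's claim that more "thinking" compute lowers the attacker's odds.

```python
import random

random.seed(0)  # make the toy experiment reproducible

def attack_succeeds(compute_budget: int) -> bool:
    """One simulated attack attempt. Assumed (not measured) model:
    the chance of success falls as inference-time compute grows."""
    p_success = 1.0 / (1.0 + 0.5 * compute_budget)
    return random.random() < p_success

def success_rate(compute_budget: int, trials: int = 10_000) -> float:
    """Empirical attack-success probability at a given compute budget."""
    wins = sum(attack_succeeds(compute_budget) for _ in range(trials))
    return wins / trials

# Sweep increasing compute budgets and report the measured success rate.
for budget in (1, 4, 16):
    print(f"compute={budget:>2}: success rate ~ {success_rate(budget):.2f}")
```

Under this assumed curve, the sweep shows the qualitative pattern the researchers report: the measured success rate drops as the compute budget grows.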