PYMNTS.com January 9, 2026

For the past several years, the AI boom has been inseparable from a race in cloud capacity. Training large models and running inference at scale drove unprecedented capital expenditures across hyperscalers, reinforcing the idea that bigger models required bigger data centers. A growing body of research is now challenging that assumption, arguing that the infrastructure requirements of artificial intelligence have been shaped more by early architectural choices than by unavoidable technical constraints.

A recent study from Switzerland-based tech university EPFL argues that while frontier model training remains computationally intensive, many operational AI systems can be deployed without centralized hyperscale facilities. Instead, these systems can distribute workloads across existing machines, regional servers or edge environments, reducing dependency on large, centralized clusters.

...
