PYMNTS.com January 9, 2026
For the past several years, the AI boom has been inseparable from a race for cloud capacity. Training large models and running inference at scale drove unprecedented capital expenditures across hyperscalers, reinforcing the idea that bigger models require bigger data centers. A growing body of research is now challenging that assumption, arguing that the infrastructure requirements of artificial intelligence have been shaped more by early architectural choices than by unavoidable technical constraints.
A recent study from Switzerland-based tech university EPFL argues that while frontier model training remains computationally intensive, many operational AI systems can be deployed without centralized hyperscale facilities. Instead, these systems can distribute workloads across existing machines, regional servers or edge environments, reducing dependency on large, centralized clusters.
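The deployment pattern the researchers describe can be pictured as a routing layer that decides where each inference job runs. The sketch below is a minimal illustration of that idea, not the EPFL authors' system: the token budget, the endpoint helpers and their names are all hypothetical placeholders, chosen only to show small jobs staying on local or edge hardware while larger ones go to a shared regional tier rather than a hyperscale cluster.

```python
# Minimal sketch of workload placement across edge and regional tiers.
# Illustrative only: the threshold, helper names, and routing rule are
# assumptions for this example, not taken from the EPFL study.

from dataclasses import dataclass


@dataclass
class InferenceRequest:
    prompt: str
    max_tokens: int


# Assumed cutoff for what a local device can handle comfortably.
EDGE_TOKEN_BUDGET = 512


def run_on_edge(req: InferenceRequest) -> str:
    # Placeholder for a small model running on the user's own machine
    # or a nearby edge device.
    return f"[edge] handled {len(req.prompt)} prompt chars locally"


def run_on_regional_server(req: InferenceRequest) -> str:
    # Placeholder for a call to a shared regional server instead of a
    # centralized hyperscale facility.
    return f"[regional] handled {len(req.prompt)} prompt chars on shared hardware"


def route(req: InferenceRequest) -> str:
    # Keep small, latency-sensitive jobs on existing local hardware;
    # send larger jobs to the regional tier.
    if len(req.prompt) + req.max_tokens <= EDGE_TOKEN_BUDGET:
        return run_on_edge(req)
    return run_on_regional_server(req)


if __name__ == "__main__":
    print(route(InferenceRequest(prompt="Summarize this receipt.", max_tokens=64)))
    print(route(InferenceRequest(prompt="x" * 2000, max_tokens=256)))
```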
...






