MIT Technology Review January 12, 2026
By studying large language models as if they were living things instead of computer programs, scientists are discovering some of their secrets for the first time.
How large is a large language model? Think about it this way.
In the center of San Francisco there’s a hill called Twin Peaks from which you can view nearly the entire city. Picture all of it—every block and intersection, every neighborhood and park, as far as you can see—covered in sheets of paper. Now picture that paper filled with numbers.
That’s one way to visualize a large language model, or at least a medium-size one: Printed out in 14-point type, a 200-billion-parameter model, such as GPT-4o (released by OpenAI in 2024), could fill...
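For readers who want to sanity-check that scale, here is a rough back-of-envelope sketch in Python. It is not the article’s own calculation: the characters-per-parameter, characters-per-page, and sheet-size figures, and the rounded area of San Francisco, are all assumptions picked purely for illustration.

```python
# Back-of-envelope check on the scale of a 200-billion-parameter model
# printed on paper. Every figure below is an assumption chosen for
# illustration, not a number taken from the article.

PARAMS = 200e9           # parameter count cited for a model like GPT-4o
CHARS_PER_PARAM = 10     # assumed: e.g. "0.0123456 " per printed weight
CHARS_PER_PAGE = 2_500   # assumed: a dense US-letter page at 14-point type
PAGE_AREA_M2 = 0.06      # assumed: an 8.5 x 11 in sheet is roughly 0.06 m^2
SF_AREA_KM2 = 121        # rough area of San Francisco

pages = PARAMS * CHARS_PER_PARAM / CHARS_PER_PAGE
area_km2 = pages * PAGE_AREA_M2 / 1e6

print(f"pages needed: {pages:.2e}")
print(f"paper area: {area_km2:.0f} km^2 "
      f"(~{area_km2 / SF_AREA_KM2:.0%} of San Francisco)")
```

Under those assumptions the printout works out to a few tens of square kilometers, the same order of magnitude as the city the article asks you to picture.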