MIT Technology Review December 8, 2021
Will Douglas Heaven

RETRO uses an external memory to look up passages of text on the fly, avoiding some of the costs of training a vast neural network
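The core idea is retrieval augmentation: rather than storing all world knowledge in network weights, the model embeds a large text database once, then fetches the most similar passages for each query at generation time. Below is a minimal sketch of that retrieval step in Python. The embed() function is a hypothetical stand-in (RETRO itself uses a frozen BERT encoder and an approximate nearest-neighbor index over a two-trillion-token database), so this illustrates the concept, not DeepMind's implementation.

    import numpy as np

    def embed(texts: list[str]) -> np.ndarray:
        # Stand-in embedding: hash tokens into a fixed-size
        # bag-of-words vector, then L2-normalize. A real system
        # would use a pretrained neural encoder instead.
        dim = 256
        vecs = np.zeros((len(texts), dim))
        for i, t in enumerate(texts):
            for tok in t.lower().split():
                vecs[i, hash(tok) % dim] += 1.0
        norms = np.linalg.norm(vecs, axis=1, keepdims=True)
        return vecs / np.maximum(norms, 1e-9)

    # The "external memory": passages embedded once, offline.
    database = [
        "GPT-3 was released by OpenAI in 2020.",
        "RETRO retrieves passages from a large external text database.",
        "Large language models consume enormous amounts of compute.",
    ]
    db_vecs = embed(database)

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Cosine similarity against every stored passage; at scale
        # this would be an approximate nearest-neighbor lookup.
        q = embed([query])[0]
        scores = db_vecs @ q
        top = np.argsort(scores)[::-1][:k]
        return [database[i] for i in top]

    print(retrieve("Where does RETRO get its passages?"))

The retrieved passages are then fed to the language model alongside the prompt, which is why the network itself can be far smaller: facts live in the database, not the weights.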

In the two years since OpenAI released its language model GPT-3, most big-name AI labs have developed language mimics of their own. Google, Facebook, and Microsoft—as well as a handful of Chinese firms—have all built AIs that can generate convincing text, chat with humans, answer questions, and more.

Known as large language models because of the massive size of the neural networks underpinning them, they have become a dominant trend in AI, showcasing both its strengths—the remarkable ability of machines to use language—and its weaknesses, particularly AI's inherent biases and the unsustainable amounts of computing power these models consume.
