Medical Xpress February 28, 2025
Justin Jackson, Medical Xpress

Researchers at Yale University, Dartmouth College, and the University of Cambridge have developed MindLLM, a subject-agnostic model for decoding functional magnetic resonance imaging (fMRI) signals into text.

The model integrates a neuroscience-informed attention mechanism with a large language model (LLM) and outperforms prior models such as UMBRAE, BrainChat, and UniBrain, with a 12.0% improvement in downstream tasks, a 16.4% increase in unseen-subject generalization, and a 25.0% boost in novel task adaptation.

Decoding fMRI signals into text has significant implications for neuroscience and brain-computer interface applications. Previous attempts have struggled with predictive performance, limited task variety, and poor generalization across subjects. Existing approaches often require subject-specific parameters, limiting their ability to generalize across individuals.
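To make the subject-agnostic idea concrete, here is a minimal sketch of how an attention-based encoder can map a variable number of fMRI voxel features onto a fixed set of "brain tokens" for an LLM, without any per-subject parameters. This is not the authors' released MindLLM code; the module names, dimensions, and the use of PyTorch cross-attention are illustrative assumptions.

```python
# Hypothetical sketch: a shared (subject-agnostic) encoder turns fMRI voxel
# features into a fixed number of tokens in the LLM's embedding space.
import torch
import torch.nn as nn


class SubjectAgnosticBrainEncoder(nn.Module):
    """Maps a variable-length set of voxel features to a fixed number of query tokens.

    Because the learned queries and attention weights are shared across subjects,
    no per-subject parameters are required (the property the article highlights).
    """

    def __init__(self, voxel_dim: int, llm_dim: int, num_queries: int = 32, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim))
        self.voxel_proj = nn.Linear(voxel_dim, llm_dim)
        self.cross_attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (batch, num_voxels, voxel_dim) -> (batch, num_queries, llm_dim)
        kv = self.voxel_proj(voxel_feats)
        q = self.queries.unsqueeze(0).expand(voxel_feats.size(0), -1, -1)
        brain_tokens, _ = self.cross_attn(q, kv, kv)
        return brain_tokens


# Usage sketch: the resulting "brain tokens" would be concatenated with text
# embeddings and fed to an LLM that decodes them into text.
encoder = SubjectAgnosticBrainEncoder(voxel_dim=128, llm_dim=768)
fake_fmri = torch.randn(2, 500, 128)  # batch of 2 scans, 500 voxel features each
print(encoder(fake_fmri).shape)       # torch.Size([2, 32, 768])
```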

In the study...
