Building a rigorous science of modern AI
The Deep Learning Fundamentals group studies the principles that make modern AI systems work. Our research focuses on representation learning, self-supervised learning, and the foundations of reasoning in large language models. We combine mathematical analysis with empirical study to understand what structure modern models learn, how that structure is shaped by training, and why it leads to strong generalization and adaptation.
A central goal of the lab is to turn phenomena that are often treated as mysterious (neural collapse, label-efficient transfer, implicit low-rank bias, and the emergence of semantic structure in self-supervised learning) into precise, predictive theory. We also study how pretrained language models can serve as components in principled, verifiable systems for searching over hypotheses, programs, and solution strategies.
Tomer Galanti is an Assistant Professor in the Department of Computer Science and Engineering at Texas A&M University. Prior to joining Texas A&M, he was a postdoctoral associate at MIT's Center for Brains, Minds & Machines, working with Tomaso Poggio. He received his Ph.D. from Tel Aviv University, advised by Lior Wolf, and was a Research Scientist intern at Google DeepMind in 2021.