Smarter AI models converge on similar ways to represent the world
January 5, 2026 · MI Történik? · 2 min read
Do AI systems end up finding similar ways to represent the world to themselves? Yes: as they get smarter and more capable, they arrive at a common set of ways of representing the world. The latest evidence for this is research from MIT, which shows that this holds for scientific models across the modalities they’re trained on: “representations learned by nearly sixty scientific models, spanning string-, graph-, 3D atomistic, and protein-based modalities, are highly aligned across a wide range of chemical systems,” they write. “Models trained on different datasets have highly similar representations of small molecules, and machine learning interatomic potentials converge in representation space as they improve in performance, suggesting that foundation models learn a common underlying representation of physical reality.”
The authors looked at 59 different AI models, including systems like GPT-OSS, ESM2, Qwen3 A3B, and ProteinMPNN. They then studied the representations of matter from five datasets (“molecules from QM9 and OMol25, materials from OMat24 and sAlex, and proteins from RCSB”), studying this “from string-based encodings and two-dimensional graphs of molecules to three-dimensional atomic coordinates of materials”. As with other studies of representation, they found that as you scale the data and compute models are trained on, “their representations converge further”. Relatedly, when you study the representations of smaller and less well-performing models on in-distribution data, you find their representations “are weakly aligned and learn nearly orthogonal information. This dispersion indicates the presence of many local sub-optima, showing that models achieve high accuracy during training by forming idiosyncratic representations that do not generalize even to other models trained on the same domain”. Their conclusion will be a familiar one to those who have digested ‘the bitter lesson’: “Scaling up training, rather than increasing architectural constraints or inductive biases, often yields the most general and powerful models. Although architectural equivariance is essential for simulation-focused applications of MLIPs like molecular dynamics, our work suggests that regularization, combined with sufficient scale, can allow inexpensive architectures to approximate the representational structure of more specialized, symmetry-enforcing models.”
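To make “highly aligned representations” concrete: alignment in studies like this is typically measured by extracting each model’s embeddings for the same set of inputs and computing a similarity score between the two embedding matrices. A minimal sketch of one common metric, linear centered kernel alignment (CKA), is below; this is an illustrative assumption about methodology, not necessarily the exact metric used in the MIT paper. CKA is invariant to rotations and rescalings of the embedding space, so two models can score 1.0 even if their individual dimensions look nothing alike.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices.

    X: (n_samples, d1) embeddings of the same inputs from model A.
    Y: (n_samples, d2) embeddings of the same inputs from model B.
    Returns a similarity in [0, 1]; 1.0 means the representations
    are identical up to rotation and isotropic scaling.
    """
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-style similarity of the cross-covariance, normalized
    # by each model's self-similarity.
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return float(numerator / denominator)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical embeddings of 200 molecules from "model A".
    X = rng.normal(size=(200, 16))
    # "Model B" sees the same structure, just rotated: CKA ≈ 1.0.
    Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
    print(linear_cka(X, X @ Q))
    # An unrelated model's embeddings: CKA near 0.
    print(linear_cka(X, rng.normal(size=(200, 16))))
```

The paper’s convergence claim, in these terms, is that as scientific models scale, pairwise scores like this rise toward 1 even across different input modalities, while small under-trained models score near 0 against one another.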
Why it matters
Think of an elephant. What you just thought of is likely fairly similar to what billions of other people might think of, because elephants are well-known creatures and often the stars of children’s books all over the world. Now think of a carbon atom. Whatever you just thought of is likely far less shared with other people than your concept of an elephant, because fewer people have much understanding of atoms. Now think of a quasar. Some of you may not even have a ready representation to hand here because you’ve barely ever read about quasars, while astrophysicists will have very rich representations. The amazing and strange possibility that large-scale AI models hold is that they may be able to create a library for us of detailed representations of everything, and we will be able to validate that these representations have utility because they