===== Memory Mosaics at Scale =====
//Abstract://
[[zhang-2025|Memory Mosaics]], networks of associative memories, have
demonstrated appealing compositional and in-context learning capabilities on
medium-scale networks (GPT-2 scale) and small synthetic datasets. This work
shows that these favorable properties persist when memory mosaics are scaled to
large language model sizes (llama-8B scale) and trained on real-world datasets.
To this end, we scale memory mosaics to 10B parameters, train them on one trillion
tokens, introduce a couple of architectural modifications (“memory mosaics v2”),
and assess their capabilities across three evaluation dimensions: training-knowledge
storage, new-knowledge storage, and in-context learning.
Throughout the evaluation, memory mosaics v2 match transformers on the learning
of training knowledge (first dimension) and significantly outperform transformers
on carrying out new tasks at inference time (second and third dimensions). These
improvements cannot be easily replicated by simply increasing the training data
for transformers: a memory mosaics v2 model trained on one trillion tokens still
performs better on these tasks than a transformer trained on eight trillion tokens.
{{ mosaic-icl.png?600 }}
Jianyu Zhang and Léon Bottou: **Memory Mosaics at Scale**, //Advances in Neural Information Processing Systems//, 38, Curran Associates, Inc., 2025.
[[http://leon.bottou.org/publications/djvu/neurips-2025.djvu|neurips-2025.djvu]]
[[http://leon.bottou.org/publications/pdf/neurips-2025.pdf|neurips-2025.pdf]]
[[http://leon.bottou.org/publications/psgz/neurips-2025.ps.gz|neurips-2025.ps.gz]]
@inproceedings{zhang-bottou-2025,
title = {Memory Mosaics at Scale},
author = {Zhang, Jianyu and Bottou, L\'{e}on},
booktitle = {Advances in Neural Information Processing Systems},
publisher = {Curran Associates, Inc.},
volume = {38},
year = {2025},
url = {http://leon.bottou.org/papers/zhang-bottou-2025},
}
==== Related ====
The essential ideas of memory mosaics were presented in
Jianyu Zhang, Niklas Nolte, Ranajoy Sadhukhan, Beidi Chen and Léon Bottou: **Memory Mosaics**, //The Thirteenth International Conference on Learning Representations//, 2025.
[[papers/zhang-2025|more...]]