Andrew Kyle Lampinen
Research Scientist, DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
610 · 2022
Can language models learn from explanations in context?
AK Lampinen, I Dasgupta, SCY Chan, K Matthewson, MH Tessler, ...
arXiv preprint arXiv:2204.02329, 2022
169 · 2022
Data distributional properties drive emergent in-context learning in transformers
S Chan, A Santoro, A Lampinen, J Wang, A Singh, P Richemond, ...
Advances in Neural Information Processing Systems 35, 18878-18891, 2022
158* · 2022
Environmental drivers of systematicity and generalization in a situated agent
F Hill, A Lampinen, R Schneider, S Clark, M Botvinick, JL McClelland, ...
arXiv preprint arXiv:1910.00571, 2019
121* · 2019
What shapes feature representations? Exploring datasets, architectures, and training
KL Hermann, AK Lampinen
Advances in Neural Information Processing Systems, 2020
117 · 2020
An analytic theory of generalization dynamics and transfer learning in deep linear networks
AK Lampinen, S Ganguli
7th International Conference on Learning Representations (ICLR 2019), 2018
107 · 2018
Language models show human-like content effects on reasoning
I Dasgupta, AK Lampinen, SCY Chan, A Creswell, D Kumaran, ...
arXiv preprint arXiv:2207.07051, 2022
103 · 2022
Automated curricula through setter-solver interactions
S Racaniere, AK Lampinen, A Santoro, DP Reichert, V Firoiu, TP Lillicrap
8th International Conference on Learning Representations (ICLR 2020), 2019
86* · 2019
Integration of new information in memory: new insights from a complementary learning systems perspective
JL McClelland, BL McNaughton, AK Lampinen
Philosophical Transactions of the Royal Society B 375 (1799), 20190637, 2020
75 · 2020
Semantic exploration from language abstractions and pretrained representations
A Tam, N Rabinowitz, A Lampinen, NA Roy, S Chan, DJ Strouse, J Wang, ...
Advances in Neural Information Processing Systems 35, 25377-25389, 2022
50 · 2022
Improving the replicability of psychological science through pedagogy
RXD Hawkins, EN Smith, C Au, JM Arias, R Catapano, E Hermann, M Keil, ...
Advances in Methods and Practices in Psychological Science 1 (1), 7-18, 2018
48* · 2018
Symbolic behaviour in artificial intelligence
A Santoro, A Lampinen, K Mathewson, T Lillicrap, D Raposo
arXiv preprint arXiv:2102.03406, 2021
40 · 2021
Towards mental time travel: a hierarchical memory for reinforcement learning agents
A Lampinen, S Chan, A Banino, F Hill
Advances in Neural Information Processing Systems 34, 28182-28195, 2021
37 · 2021
Tell me why! Explanations support learning relational and causal structure
AK Lampinen, N Roy, I Dasgupta, SCY Chan, A Tam, J McClelland, C Yan, ...
International Conference on Machine Learning, 11868-11890, 2022
29 · 2022
Transformers generalize differently from information stored in context vs in weights
SCY Chan, I Dasgupta, J Kim, D Kumaran, AK Lampinen, F Hill
arXiv preprint arXiv:2210.05675, 2022
26 · 2022
One-shot and few-shot learning of word embeddings
AK Lampinen, JL McClelland
arXiv preprint arXiv:1710.10280, 2017
25 · 2017
Symbol tuning improves in-context learning in language models
J Wei, L Hou, A Lampinen, X Chen, D Huang, Y Tay, X Chen, Y Lu, ...
arXiv preprint arXiv:2305.08298, 2023
19 · 2023
Transforming task representations to perform novel tasks
AK Lampinen, JL McClelland
Proceedings of the National Academy of Sciences 117 (52), 32970-32981, 2020
18 · 2020
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
AK Lampinen
arXiv preprint arXiv:2210.15303, 2022
14 · 2022
Different presentations of a mathematical concept can support learning in complementary ways.
AK Lampinen, JL McClelland
Journal of Educational Psychology 110 (5), 664, 2018
13 · 2018
Articles 1–20