Noga Zaslavsky
Title · Cited by · Year
Deep learning and the information bottleneck principle
N Tishby, N Zaslavsky
2015 IEEE Information Theory Workshop (ITW), 1-5, 2015
Cited by 1569 · 2015
Efficient compression in color naming and its evolution
N Zaslavsky, C Kemp, T Regier, N Tishby
Proceedings of the National Academy of Sciences 115 (31), 7937-7942, 2018
Cited by 222 · 2018
Color Naming Reflects Both Perceptual Structure and Communicative Need
N Zaslavsky, C Kemp, N Tishby, T Regier
Topics in Cognitive Science 11 (1), 207-219, 2019
Cited by 49 · 2019
Communicative need in colour naming
N Zaslavsky, C Kemp, N Tishby, T Regier
Cognitive Neuropsychology, 1-13, 2019
Cited by 37 · 2019
A Rate-Distortion view of human pragmatic reasoning
N Zaslavsky, J Hu, RP Levy
Proceedings of the Society for Computation in Linguistics, 2020
Cited by 34 · 2020
The forms and meanings of grammatical markers support efficient communication
F Mollica, G Bacon, N Zaslavsky, Y Xu, T Regier, C Kemp
Proceedings of the National Academy of Sciences 118 (49), 2021
Cited by 33 · 2021
Let's talk (efficiently) about us: Person systems achieve near-optimal compression
N Zaslavsky, M Maldonado, J Culbertson
CogSci 2021, 2021
Cited by 32 · 2021
Cloze Distillation: Improving Neural Language Models with Human Next-Word Prediction
T Eisape, N Zaslavsky, R Levy
Proceedings of the 24th Conference on Computational Natural Language …, 2020
Cited by 30* · 2020
Semantic categories of artifacts and animals reflect efficient coding
N Zaslavsky, T Regier, N Tishby, C Kemp
41st Annual Meeting of the Cognitive Science Society, 2019
Cited by 30 · 2019
Trading off Utility, Informativeness, and Complexity in Emergent Communication
M Tucker, R Levy, J Shah, N Zaslavsky
Neural Information Processing Systems (NeurIPS), 2022
Cited by 29* · 2022
Beyond linear regression: mapping models in cognitive neuroscience should align with research goals
AA Ivanova, M Schrimpf, S Anzellotti, N Zaslavsky, E Fedorenko, L Isik
Neurons, Behavior, Data analysis, and Theory (NBDT), 2022
Cited by 26* · 2022
Artificial neural network language models align neurally and behaviorally with humans even after a developmentally realistic amount of training
EA Hosseini, M Schrimpf, Y Zhang, S Bowman, N Zaslavsky, E Fedorenko
bioRxiv, 2022.10.04.510681, 2022
Cited by 25 · 2022
The evolution of color naming reflects pressure for efficiency: Evidence from the recent past
N Zaslavsky, K Garvin, C Kemp, N Tishby, T Regier
Journal of Language Evolution, 2022
Cited by 22 · 2022
Efficient encoding of motion is mediated by gap junctions in the fly visual system
S Wang, A Borst, N Zaslavsky, N Tishby, I Segev
PLoS Computational Biology 13 (12), e1005846, 2017
Cited by 18 · 2017
Probing artificial neural networks: insights from neuroscience
AA Ivanova, J Hewitt, N Zaslavsky
ICLR 2021 Brain2AI Workshop, 2021
Cited by 13 · 2021
Information-Theoretic Principles in the Evolution of Semantic Systems
N Zaslavsky
Ph.D. Thesis, The Hebrew University of Jerusalem, 2020
Cited by 11 · 2020
Efficient human-like semantic representations via the Information Bottleneck principle
N Zaslavsky, C Kemp, T Regier, N Tishby
NeurIPS 2017 Cognitively Informed AI workshop, 2017
Cited by 11 · 2017
Toward human-like object naming in artificial neural systems
TN Eisape, R Levy, JB Tenenbaum, N Zaslavsky
ICLR 2020 Bridging AI and Cognitive Science workshop, 2020
Cited by 5 · 2020
Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training
EA Hosseini, M Schrimpf, Y Zhang, S Bowman, N Zaslavsky, E Fedorenko
Neurobiology of Language, 1-21, 2024
Cited by 4 · 2024
Generalization and Translatability in Emergent Communication via Informational Constraints
M Tucker, R Levy, J Shah, N Zaslavsky
NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems, 2022
Cited by 4 · 2022