Michael E. Sander
Title · Cited by · Year
Momentum Residual Neural Networks
ME Sander, P Ablin, M Blondel, G Peyré
International Conference on Machine Learning 139, 9276-9287, 2021
Cited by 58 · 2021
Vision Transformers provably learn spatial structure
S Jelassi, ME Sander, Y Li
Advances in Neural Information Processing Systems 35, 2022
Cited by 49 · 2022
Sinkformers: Transformers with doubly stochastic attention
ME Sander, P Ablin, M Blondel, G Peyré
International Conference on Artificial Intelligence and Statistics, 2022
Cited by 41 · 2022
Do Residual Neural Networks discretize Neural Ordinary Differential Equations?
ME Sander, P Ablin, G Peyré
Advances in Neural Information Processing Systems 35, 2022
Cited by 17 · 2022
Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective
ME Sander, J Puigcerver, J Djolonga, G Peyré, M Blondel
International Conference on Machine Learning, 2023
Cited by 6 · 2023
Implicit regularization of deep residual networks towards neural ODEs
P Marion, YH Wu, ME Sander, G Biau
International Conference on Learning Representations, 2023
Cited by 5 · 2023
How do Transformers perform In-Context Autoregressive Learning?
ME Sander, R Giryes, T Suzuki, M Blondel, G Peyré
arXiv preprint arXiv:2402.05787, 2024
2024
Unveiling the secrets of paintings: deep neural networks trained on high-resolution multispectral images for accurate attribution and authentication
ME Sander, T Sander, M Sylvestre
Sixteenth International Conference on Quality Control by Artificial Vision …, 2023
2023