Surbhi Goel
Postdoctoral Researcher, Microsoft Research NYC
Verified email at microsoft.com
Title · Cited by · Year
Reliably learning the ReLU in polynomial time
S Goel, V Kanade, A Klivans, J Thaler
Conference on Learning Theory (COLT) 2017, 2016
Cited by 110 · 2016
Learning neural networks with two nonlinear layers in polynomial time
S Goel, A Klivans
Conference on Learning Theory (COLT) 2019, 2017
Cited by 86* · 2017
Learning one convolutional layer with overlapping patches
S Goel, A Klivans, R Meka
International Conference on Machine Learning (ICML) 2018, 2018
Cited by 68 · 2018
Superpolynomial lower bounds for learning one-layer neural networks using gradient descent
S Goel, A Gollakota, Z Jin, S Karmalkar, A Klivans
International Conference on Machine Learning, 3587-3596, 2020
Cited by 36 · 2020
Time/accuracy tradeoffs for learning a ReLU with respect to Gaussian marginals
S Goel, S Karmalkar, A Klivans
Advances in Neural Information Processing Systems 32, 2019
Cited by 35 · 2019
Approximation schemes for ReLU regression
I Diakonikolas, S Goel, S Karmalkar, AR Klivans, M Soltanolkotabi
Conference on Learning Theory, 1452-1485, 2020
Cited by 31 · 2020
Statistical-query lower bounds via functional gradients
S Goel, A Gollakota, A Klivans
Advances in Neural Information Processing Systems 33, 2147-2158, 2020
Cited by 28 · 2020
Eigenvalue decay implies polynomial-time learnability for neural networks
S Goel, A Klivans
Advances in Neural Information Processing Systems 30, 2017
Cited by 27 · 2017
Efficiently learning adversarially robust halfspaces with noise
O Montasser, S Goel, I Diakonikolas, N Srebro
International Conference on Machine Learning, 7010-7021, 2020
Cited by 25 · 2020
Quantifying perceptual distortion of adversarial examples
M Jordan, N Manoj, S Goel, AG Dimakis
arXiv preprint arXiv:1902.08265, 2019
Cited by 20 · 2019
Learning Ising models with independent failures
S Goel, DM Kane, AR Klivans
Conference on Learning Theory (COLT) 2019, 2019
Cited by 16 · 2019
Improved learning of one-hidden-layer convolutional neural networks with overlaps
SS Du, S Goel
arXiv preprint arXiv:1805.07798, 2018
Cited by 15 · 2018
Tight hardness results for training depth-2 ReLU networks
S Goel, A Klivans, P Manurangsi, D Reichman
arXiv preprint arXiv:2011.13550, 2020
Cited by 13 · 2020
Investigating the role of negatives in contrastive representation learning
JT Ash, S Goel, A Krishnamurthy, D Misra
arXiv preprint arXiv:2106.09943, 2021
Cited by 10 · 2021
Learning Ising and Potts models with latent variables
S Goel
International Conference on Artificial Intelligence and Statistics, 3557-3566, 2020
Cited by 9* · 2020
Acceleration via fractal learning rate schedules
N Agarwal, S Goel, C Zhang
International Conference on Machine Learning, 87-99, 2021
Cited by 7 · 2021
Understanding contrastive learning requires incorporating inductive biases
N Saunshi, J Ash, S Goel, D Misra, C Zhang, S Arora, S Kakade, ...
arXiv preprint arXiv:2202.14037, 2022
Cited by 6 · 2022
Learning mixtures of graphs from epidemic cascades
J Hoffmann, S Basu, S Goel, C Caramanis
International Conference on Machine Learning, 4342-4352, 2020
Cited by 5 · 2020
Gone fishing: Neural active learning with Fisher embeddings
J Ash, S Goel, A Krishnamurthy, S Kakade
Advances in Neural Information Processing Systems 34, 8927-8939, 2021
Cited by 4 · 2021
Statistical Estimation from Dependent Data
Y Dagan, C Daskalakis, N Dikkala, S Goel, AV Kandiros
arXiv preprint arXiv:2107.09773, 2021
Cited by 3* · 2021
Articles 1–20