Peng Xu
Verified email at stanford.edu
Title · Cited by · Year
Newton-type methods for non-convex optimization under inexact Hessian information
P Xu, F Roosta, MW Mahoney
Mathematical Programming, 1-36, 2017
Cited by 65, 2017
Sub-sampled Newton methods with non-uniform sampling
P Xu, J Yang, F Roosta-Khorasani, C Ré, MW Mahoney
Advances in Neural Information Processing Systems, 3000-3008, 2016
Cited by 49, 2016
Second-order optimization for non-convex machine learning: An empirical study
P Xu, F Roosta-Khorasani, MW Mahoney
arXiv preprint arXiv:1708.07827, 2017
Cited by 37, 2017
GIANT: Globally improved approximate Newton method for distributed optimization
S Wang, F Roosta-Khorasani, P Xu, MW Mahoney
Advances in Neural Information Processing Systems, 2332-2342, 2018
Cited by 19, 2018
Socratic learning: Augmenting generative models to incorporate latent subsets in training data
P Varma, B He, D Iter, P Xu, R Yu, C De Sa, C Ré
arXiv preprint arXiv:1610.08123, 2016
Cited by 19*, 2016
Accelerated stochastic power iteration
C De Sa, B He, I Mitliagkas, C Ré, P Xu
Proceedings of Machine Learning Research 84, 58, 2018
Cited by 14, 2018
Inexact non-convex Newton-type methods
Z Yao, P Xu, F Roosta-Khorasani, MW Mahoney
arXiv preprint arXiv:1802.06925, 2018
Cited by 13, 2018
Newton-MR: Newton's Method Without Smoothness or Convexity
F Roosta, Y Liu, P Xu, MW Mahoney
arXiv preprint arXiv:1810.00303, 2018
Cited by 4, 2018
Trust region based adversarial attack on neural networks
Z Yao, A Gholami, P Xu, K Keutzer, MW Mahoney
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2019
Cited by 1, 2019
Passage Ranking with Weak Supervision
P Xu, X Ma, R Nallapati, B Xiang
arXiv preprint arXiv:1905.05910, 2019
2019
On the effectiveness and simplicity of linear recursive neural network
P Xu, R Wang