Kenji Kawaguchi
Presidential Young Professor, National University of Singapore
Verified email at nus.edu.sg
Title / Cited by / Year
Deep learning without poor local minima
K Kawaguchi
Advances in Neural Information Processing Systems (NeurIPS), 586-594, 2016
Cited by: 831 (2016)
Generalization in Deep Learning
K Kawaguchi, LP Kaelbling, Y Bengio
In Mathematics of Deep Learning, Cambridge University Press, to appear …, 2018
Cited by: 372 (2018)
Interpolation consistency training for semi-supervised learning
V Verma, K Kawaguchi, A Lamb, J Kannala, A Solin, Y Bengio, ...
Neural Networks 145, 90-106, 2022
Cited by: 347 (2022)
Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
AD Jagtap, K Kawaguchi, GE Karniadakis
Journal of Computational Physics 404, 109136, 2020
Cited by: 206 (2020)
Theory of Deep Learning III: explaining the non-overfitting puzzle
T Poggio, K Kawaguchi, Q Liao, B Miranda, L Rosasco, X Boix, J Hidary, ...
Massachusetts Institute of Technology, CBMM Memo No. 073, 2018
Cited by: 120* (2018)
Bayesian optimization with exponential convergence
K Kawaguchi, LP Kaelbling, T Lozano-Pérez
Advances in Neural Information Processing Systems (NeurIPS) 28, 2809-2817, 2015
Cited by: 105 (2015)
Depth Creates No Bad Local Minima
H Lu, K Kawaguchi
arXiv preprint arXiv:1702.08580, 2017
Cited by: 101 (2017)
Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks
AD Jagtap, K Kawaguchi, GE Karniadakis
Proceedings of the Royal Society A 476 (2239), 20200334, 2020
Cited by: 84 (2020)
How Does Mixup Help With Robustness and Generalization?
L Zhang, Z Deng, K Kawaguchi, A Ghorbani, J Zou
International Conference on Learning Representations (ICLR), 2021
Cited by: 67 (2021)
GraphMix: Improved Training of GNNs for Semi-Supervised Learning
V Verma, M Qu, K Kawaguchi, A Lamb, Y Bengio, J Kannala, J Tang
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021
Cited by: 66 (2021)
Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy
A Lamb, V Verma, K Kawaguchi, J Kannala, Y Bengio
arXiv preprint arXiv:1906.06784, 2019
Cited by: 52 (2019)
Depth with Nonlinearity Creates No Bad Local Minima in ResNets
K Kawaguchi, Y Bengio
Neural Networks 118, 167-174, 2019
Cited by: 50 (2019)
Elimination of all bad local minima in deep learning
K Kawaguchi, LP Kaelbling
Artificial Intelligence and Statistics (AISTATS), 853-863, 2020
Cited by: 44 (2020)
Gradient descent finds global minima for generalizable deep neural networks of practical sizes
K Kawaguchi, J Huang
2019 57th Annual Allerton Conference on Communication, Control, and …, 2019
Cited by: 40 (2019)
Effect of depth and width on local minima in deep learning
K Kawaguchi, J Huang, LP Kaelbling
Neural computation 31 (7), 1462-1498, 2019
Cited by: 40 (2019)
Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning
Q Liao, K Kawaguchi, T Poggio
Massachusetts Institute of Technology, CBMM Memo No. 57, 2016
Cited by: 33 (2016)
Towards domain-agnostic contrastive learning
V Verma, T Luong, K Kawaguchi, H Pham, Q Le
International Conference on Machine Learning (ICML), 10530-10541, 2021
Cited by: 30 (2021)
Ordered SGD: A New Stochastic Optimization Framework for Empirical Risk Minimization
K Kawaguchi, H Lu
Artificial Intelligence and Statistics (AISTATS), 669-679, 2020
Cited by: 24 (2020)
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
K Kawaguchi
International Conference on Learning Representations (ICLR), 2021
Cited by: 21 (2021)
Analysis for iodine release from unit 3 of Fukushima Dai-ichi nuclear power plant with consideration of water phase iodine chemistry
J Ishikawa, K Kawaguchi, Y Maruyama
Journal of Nuclear Science and Technology 52 (3), 308-314, 2015
Cited by: 21 (2015)
Articles 1–20