Carlos Florensa

Publications

Stochastic neural networks for hierarchical reinforcement learning
C Florensa, Y Duan, P Abbeel
International Conference on Learning Representations (ICLR), 2017
Cited by 104

Reverse curriculum generation for reinforcement learning
C Florensa, D Held, M Wulfmeier, P Abbeel
Conference on Robot Learning (CoRL), 2017
Cited by 80

Automatic goal generation for reinforcement learning agents
C Florensa, D Held, X Geng, P Abbeel
International Conference on Machine Learning (ICML), 2017
Cited by 68*

Capacity planning with competitive decision-makers: Trilevel MILP formulation, degeneracy, and solution approaches
C Florensa, P Garcia-Herreros, P Misra, E Arslan, S Mehta, IE Grossmann
European Journal of Operational Research 262 (2), 449-463, 2017
Cited by 13*

Self-supervised learning of image embedding for continuous control
C Florensa, J Degrave, N Heess, JT Springenberg, M Riedmiller
arXiv preprint arXiv:1901.00943, 2019
Cited by 6

“The magic of light!” - An entertaining optics and photonics awareness program
C Florensa, M Mart, SC Kumar, S Carrasco
Education and Training in Optics and Photonics, EWF2, 2013
Cited by 1

Goal-conditioned Imitation Learning
Y Ding, C Florensa, M Phielipp, P Abbeel
Advances in Neural Information Processing Systems (NeurIPS), 2019

Sub-policy Adaptation for Hierarchical Reinforcement Learning
AC Li, C Florensa, I Clavera, P Abbeel
arXiv preprint arXiv:1906.05862, 2019

Adaptive Variance for Changing Sparse-Reward Environments
X Lin, P Guo, C Florensa, D Held
International Conference on Robotics and Automation, 2019