Jeffrey Wu
OpenAI
Verified email at openai.com
Title
Cited by
Year
Language models are unsupervised multitask learners
A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever
OpenAI blog 1 (8), 9, 2019
Cited by 2623*, 2019
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 476, 2020
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
Cited by 48, 2019
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
Cited by 32, 2020
Fine-tuning language models from human preferences
DM Ziegler, N Stiennon, J Wu, TB Brown, A Radford, D Amodei, ...
arXiv preprint arXiv:1909.08593, 2019
Cited by 32, 2019
Generative pretraining from pixels
M Chen, A Radford, R Child, J Wu, H Jun, D Luan, I Sutskever
International Conference on Machine Learning, 1691-1703, 2020
Cited by 15, 2020
Learning to summarize from human feedback
N Stiennon, L Ouyang, J Wu, DM Ziegler, R Lowe, C Voss, A Radford, ...
arXiv preprint arXiv:2009.01325, 2020
Cited by 8, 2020
Articles 1–7