Ariel Herbert-Voss
Verified email at g.harvard.edu
Title · Cited by · Year
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 476 · 2020
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, R Child, A Ramesh, DM Ziegler, J Wu, C Winter, C Hesse, M Chen, E Sigler, M Litwin, S Gray, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 155 · 2020
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
Cited by 48 · 2019
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
Cited by 31 · 2020
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
arXiv preprint arXiv:1411.5668, 2014
Cited by 13 · 2014
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 9
Language models are few-shot learners
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 8 · 2020
Computing minimal interpolants in C^{1,1}(R^d)
A Herbert-Voss, MJ Hirn, F McCollum
arXiv preprint arXiv:1411.5668, 2014
Cited by 4 · 2014
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
arXiv preprint arXiv:2012.07805, 2020
Cited by 3 · 2020
Articles 1–9