Xuechen Li
Verified email at cs.stanford.edu - Homepage
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 2579 · 2021
Alpaca: A strong, replicable instruction-following model
R Taori, I Gulrajani, T Zhang, Y Dubois, X Li, C Guestrin, P Liang, ...
Stanford Center for Research on Foundation Models. https://crfm.stanford …, 2023
Cited by 1457* · 2023
Isolating sources of disentanglement in variational autoencoders
RTQ Chen, X Li, RB Grosse, DK Duvenaud
Advances in Neural Information Processing Systems 31, 2018
Cited by 1308 · 2018
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
Cited by 567 · 2022
Inference Suboptimality in Variational Autoencoders
C Cremer, X Li, D Duvenaud
International Conference on Machine Learning, 2018
Cited by 294 · 2018
Scalable gradients for stochastic differential equations
X Li, TKL Wong, RTQ Chen, D Duvenaud
International Conference on Artificial Intelligence and Statistics, 3870-3882, 2020
Cited by 275 · 2020
Large language models can be strong differentially private learners
X Li, F Tramer, P Liang, T Hashimoto
arXiv preprint arXiv:2110.05679, 2021
Cited by 227 · 2021
AlpacaEval: An automatic evaluator of instruction-following models
X Li, T Zhang, Y Dubois, R Taori, I Gulrajani, C Guestrin, P Liang, ...
Cited by 169 · 2023
AlpacaFarm: A simulation framework for methods that learn from human feedback
Y Dubois, CX Li, R Taori, T Zhang, I Gulrajani, J Ba, C Guestrin, PS Liang, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 156 · 2024
Neural SDEs as infinite-dimensional GANs
P Kidger, J Foster, X Li, TJ Lyons
International Conference on Machine Learning, 5453-5463, 2021
Cited by 110 · 2021
Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks
D Kang, X Li, I Stoica, C Guestrin, M Zaharia, T Hashimoto
arXiv preprint arXiv:2302.05733, 2023
Cited by 98 · 2023
Stochastic Runge-Kutta accelerates Langevin Monte Carlo and beyond
X Li, Y Wu, L Mackey, MA Erdogdu
Advances in Neural Information Processing Systems 32, 2019
Cited by 69 · 2019
Scalable gradients and variational inference for stochastic differential equations
X Li, TKL Wong, RTQ Chen, DK Duvenaud
Symposium on Advances in Approximate Bayesian Inference, 1-28, 2020
Cited by 55 · 2020
Foundation models and fair use
P Henderson, X Li, D Jurafsky, T Hashimoto, MA Lemley, P Liang
arXiv preprint arXiv:2303.15715, 2023
Cited by 49 · 2023
Infinitely deep Bayesian neural networks with stochastic differential equations
W Xu, RTQ Chen, X Li, D Duvenaud
International Conference on Artificial Intelligence and Statistics, 721-738, 2022
Cited by 45 · 2022
Efficient and accurate gradients for neural SDEs
P Kidger, J Foster, XC Li, T Lyons
Advances in Neural Information Processing Systems 34, 18747-18761, 2021
Cited by 41 · 2021
When does differentially private learning not suffer in high dimensions?
X Li, D Liu, TB Hashimoto, HA Inan, J Kulkarni, YT Lee, A Guha Thakurta
Advances in Neural Information Processing Systems 35, 28616-28630, 2022
Cited by 40 · 2022
When does preconditioning help or hurt generalization?
S Amari, J Ba, R Grosse, X Li, A Nitanda, T Suzuki, D Wu, J Xu
arXiv preprint arXiv:2006.10732, 2020
Cited by 36 · 2020
Synthetic Text Generation with Differential Privacy: A Simple and Practical Recipe
X Yue, HA Inan, X Li, G Kumar, J McAnallen, H Sun, D Levitan, R Sim
arXiv preprint arXiv:2210.14348, 2022
Cited by 34 · 2022
Exploring the limits of differentially private deep learning with group-wise clipping
J He, X Li, D Yu, H Zhang, J Kulkarni, YT Lee, A Backurs, N Yu, J Bian
arXiv preprint arXiv:2212.01539, 2022
Cited by 27 · 2022
Articles 1–20