| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition) | DJ Klionsky, AK Abdel-Aziz, S Abdelfatah, M Abdellatif, A Abdoli, S Abel, ... | Autophagy 17 (1), 1-382 | 13396 | 2021 |
| DeBERTa: Decoding-enhanced BERT with disentangled attention | P He, X Liu, J Gao, W Chen | arXiv preprint arXiv:2006.03654 | 2421 | 2020 |
| On the variance of the adaptive learning rate and beyond | L Liu, H Jiang, P He, W Chen, X Liu, J Gao, J Han | arXiv preprint arXiv:1908.03265 | 2190 | 2019 |
| Domain-specific language model pretraining for biomedical natural language processing | Y Gu, R Tinn, H Cheng, M Lucas, N Usuyama, X Liu, T Naumann, J Gao, ... | ACM Transactions on Computing for Healthcare (HEALTH) 3 (1), 1-23 | 1776 | 2021 |
| Unified language model pre-training for natural language understanding and generation | L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon | Advances in Neural Information Processing Systems 32 | 1738 | 2019 |
| Multi-task deep neural networks for natural language understanding | X Liu, P He, W Chen, J Gao | arXiv preprint arXiv:1901.11504 | 1410 | 2019 |
| MS MARCO: A human generated machine reading comprehension dataset | P Bajaj, D Campos, N Craswell, L Deng, J Gao, X Liu, R Majumder, ... | arXiv preprint arXiv:1611.09268 | 710 | 2016 |
| RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers | B Wang, R Shin, X Liu, O Polozov, M Richardson | arXiv preprint arXiv:1911.04942 | 522 | 2019 |
| Representation learning using multi-task deep neural networks for semantic classification and information retrieval | X Liu, J Gao, X He, L Deng, K Duh, YY Wang | | 501 | 2015 |
| SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization | H Jiang, P He, W Chen, X Liu, J Gao, T Zhao | arXiv preprint arXiv:1911.03437 | 463 | 2019 |
| Cyclical annealing schedule: A simple approach to mitigating KL vanishing | H Fu, C Li, X Liu, J Gao, A Celikyilmaz, L Carin | arXiv preprint arXiv:1903.10145 | 410 | 2019 |
| UniLMv2: Pseudo-masked language models for unified language model pre-training | H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ... | International Conference on Machine Learning, 642-652 | 404 | 2020 |
| Specification and estimation of social interaction models with network structures | L Lee, X Liu, X Lin | The Econometrics Journal 13 (2), 145-176 | 390 | 2010 |
| The conduct of drug metabolism studies considered good practice (II): in vitro experiments | L Jia, X Liu | Current Drug Metabolism 8 (8), 822-829 | 352 | 2007 |
| The involvement of P-glycoprotein in berberine absorption | G Pan, GJ Wang, XD Liu, JP Fawcett, YY Xie | Pharmacology & Toxicology 91 (4), 193-197 | 275 | 2002 |
| ReCoRD: Bridging the gap between human and machine commonsense reading comprehension | S Zhang, X Liu, J Liu, J Gao, K Duh, B Van Durme | arXiv preprint arXiv:1810.12885 | 265 | 2018 |
| Understanding the difficulty of training transformers | L Liu, X Liu, J Gao, W Chen, J Han | arXiv preprint arXiv:2004.08249 | 262 | 2020 |
| ABC family transporters | X Liu | Drug Transporters in Drug Disposition, Effects and Toxicity, 13-100 | 241 | 2019 |
| Non-fullerene acceptor with low energy loss and high external quantum efficiency: towards high performance polymer solar cells | Y Li, X Liu, FP Wu, Y Zhou, ZQ Jiang, B Song, Y Xia, ZG Zhang, F Gao, ... | Journal of Materials Chemistry A 4 (16), 5890-5897 | 240 | 2016 |
| Stochastic answer networks for machine reading comprehension | X Liu, Y Shen, K Duh, J Gao | arXiv preprint arXiv:1712.03556 | 237 | 2017 |