Secrets of RLHF in large language models part I: PPO. R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ... arXiv preprint arXiv:2307.04964, 2023 | 78 | 2023 |
Secrets of RLHF in large language models part II: reward modeling. B Wang, R Zheng, L Chen, Y Liu, S Dou, C Huang, W Shen, S Jin, E Zhou, ... arXiv preprint arXiv:2401.06080, 2024 | 47 | 2024 |
Self-Polish: Enhance reasoning in large language models via problem refinement. Z Xi, S Jin, Y Zhou, R Zheng, S Gao, T Gui, Q Zhang, X Huang. arXiv preprint arXiv:2305.14497, 2023 | 30 | 2023 |
MAP-Neo: Highly capable and transparent bilingual large language model series. G Zhang, S Qu, J Liu, C Zhang, C Lin, CL Yu, D Pan, E Cheng, J Liu, ... arXiv preprint arXiv:2405.19327, 2024 | 15 | 2024 |
LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment. S Dou, E Zhou, Y Liu, S Gao, J Zhao, W Shen, Y Zhou, Z Xi, X Wang, ... arXiv preprint arXiv:2312.09979, 2023 | 13 | 2023 |
Navigating the overkill in large language models C Shi, X Wang, Q Ge, S Gao, X Yang, T Gui, Q Zhang, X Huang, X Zhao, ... arXiv preprint arXiv:2401.17633, 2024 | 11 | 2024 |
LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin S Dou, E Zhou, Y Liu, S Gao, W Shen, L Xiong, Y Zhou, X Wang, Z Xi, ... Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024 | 10 | 2024 |
ToolEyes: Fine-grained evaluation for tool learning capabilities of large language models in real-world scenarios. J Ye, G Li, S Gao, C Huang, Y Wu, S Li, X Fan, S Dou, Q Zhang, T Gui, ... arXiv preprint arXiv:2401.00741, 2024 | 10 | 2024 |
Chinese Tiny LLM: Pretraining a Chinese-centric large language model. X Du, Z Yu, S Gao, D Pan, Y Cheng, Z Ma, R Yuan, X Qu, J Liu, T Zheng, ... arXiv preprint arXiv:2404.04167, 2024 | 9 | 2024 |
EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models W Zhou, X Wang, L Xiong, H Xia, Y Gu, M Chai, F Zhu, C Huang, S Dou, ... arXiv preprint arXiv:2403.12171, 2024 | 9 | 2024 |
Decorrelate irrelevant, purify relevant: Overcome textual spurious correlations from a feature perspective S Dou, R Zheng, T Wu, S Gao, J Shan, Q Zhang, Y Wu, X Huang arXiv preprint arXiv:2202.08048, 2022 | 9 | 2022 |
AgentGym: Evolving Large Language Model-based Agents across Diverse Environments Z Xi, Y Ding, W Chen, B Hong, H Guo, J Wang, D Yang, C Liao, X Guo, ... arXiv preprint arXiv:2406.04151, 2024 | 8 | 2024 |
Farewell to aimless large-scale pretraining: Influential subset selection for language model X Wang, W Zhou, Q Zhang, J Zhou, S Gao, J Wang, M Zhang, X Gao, ... arXiv preprint arXiv:2305.12816, 2023 | 8 | 2023 |
Kernel-whitening: Overcome dataset bias with isotropic sentence embedding S Gao, S Dou, Q Zhang, X Huang arXiv preprint arXiv:2210.07547, 2022 | 7 | 2022 |
TRACE: A comprehensive benchmark for continual learning in large language models. X Wang, Y Zhang, T Chen, S Gao, S Jin, X Yang, Z Xi, R Zheng, Y Zou, T Gui, ... arXiv preprint arXiv:2310.06762, 2023 | 7 | 2023 |