Wenhui Wang
Microsoft Research
Verified email at microsoft.com
Title
Cited by
Year
Gated self-matching networks for reading comprehension and question answering
W Wang, N Yang, F Wei, B Chang, M Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
529, 2017
Unified language model pre-training for natural language understanding and generation
L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon
Advances in Neural Information Processing Systems, 13063-13075, 2019
419, 2019
Graph-based dependency parsing with bidirectional LSTM
W Wang, B Chang
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
137, 2016
Multiway Attention Networks for Modeling Sentence Pairs
C Tan, F Wei, W Wang, W Lv, M Zhou
IJCAI, 4411-4417, 2018
68, 2018
UniLMv2: Pseudo-masked language models for unified language model pre-training
H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ...
International Conference on Machine Learning, 642-652, 2020
59, 2020
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
arXiv preprint arXiv:2002.10957, 2020
58, 2020
InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training
Z Chi, L Dong, F Wei, N Yang, S Singhal, W Wang, X Song, XL Mao, ...
arXiv preprint arXiv:2007.07834, 2020
29, 2020
Cross-Lingual Natural Language Generation via Pre-Training
Z Chi, L Dong, F Wei, W Wang, XL Mao, H Huang
AAAI, 7570-7577, 2020
26, 2020
Learning to Ask Unanswerable Questions for Machine Reading Comprehension
H Zhu, L Dong, F Wei, W Wang, B Qin, T Liu
arXiv preprint arXiv:1906.06045, 2019
22, 2019
Improved Dependency Parsing using Implicit Word Connections Learned from Unlabeled Data
W Wang, B Chang, M Mansur
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
15, 2018
Harvesting and Refining Question-Answer Pairs for Unsupervised QA
Z Li, W Wang, L Dong, F Wei, K Xu
arXiv preprint arXiv:2005.02925, 2020
10, 2020
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers
W Wang, H Bao, S Huang, L Dong, F Wei
arXiv preprint arXiv:2012.15828, 2020
3, 2020
Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension
H Bao, L Dong, F Wei, W Wang, N Yang, L Cui, S Piao, M Zhou
Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 14-18, 2019
2, 2019