VisualBERT: A simple and performant baseline for vision and language. LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang. arXiv preprint arXiv:1908.03557, 2019. Cited by 2143.
Men also like shopping: Reducing gender bias amplification using corpus-level constraints. J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang. arXiv preprint arXiv:1707.09457, 2017. Cited by 1258.
Neural motifs: Scene graph parsing with global context. R Zellers, M Yatskar, S Thomson, Y Choi. Proceedings of the IEEE conference on computer vision and pattern …, 2018. Cited by 1201.
Gender bias in coreference resolution: Evaluation and debiasing methods. J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang. arXiv preprint arXiv:1804.06876, 2018. Cited by 1091.
QuAC: Question answering in context. E Choi, H He, M Iyyer, M Yatskar, W Yih, Y Choi, P Liang, L Zettlemoyer. arXiv preprint arXiv:1808.07036, 2018. Cited by 979.
Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. T Wang, J Zhao, M Yatskar, KW Chang, V Ordonez. Proceedings of the IEEE/CVF international conference on computer vision …, 2019. Cited by 552.
Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. C Clark, M Yatskar, L Zettlemoyer. arXiv preprint arXiv:1909.03683, 2019. Cited by 542.
Gender bias in contextualized word embeddings. J Zhao, T Wang, M Yatskar, R Cotterell, V Ordonez, KW Chang. arXiv preprint arXiv:1904.03310, 2019. Cited by 490.
Neural AMR: Sequence-to-sequence models for parsing and generation. I Konstas, S Iyer, M Yatskar, Y Choi, L Zettlemoyer. arXiv preprint arXiv:1704.08381, 2017. Cited by 373.
Situation recognition: Visual semantic role labeling for image understanding. M Yatskar, L Zettlemoyer, A Farhadi. Conference on Computer Vision and Pattern Recognition, 2016. Cited by 314.
RoboTHOR: An open simulation-to-real embodied AI platform. M Deitke, W Han, A Herrasti, A Kembhavi, E Kolve, R Mottaghi, J Salvador, ... Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020. Cited by 287.
Language in a bottle: Language model guided concept bottlenecks for interpretable image classification. Y Yang, A Panagopoulou, S Zhou, D Jin, C Callison-Burch, M Yatskar. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023. Cited by 227.
For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. M Yatskar, B Pang, C Danescu-Niculescu-Mizil, L Lee. arXiv preprint arXiv:1008.1986, 2010. Cited by 216.
What does BERT with vision look at? LH Li, M Yatskar, D Yin, CJ Hsieh, KW Chang. Proceedings of the 58th annual meeting of the Association for Computational …, 2020. Cited by 172.
A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. M Yatskar. arXiv preprint arXiv:1809.10735, 2018. Cited by 113.
Grounded situation recognition. S Pratt, M Yatskar, L Weihs, A Farhadi, A Kembhavi. Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020. Cited by 108.
Visual semantic role labeling for video understanding. A Sadhu, T Gupta, M Yatskar, R Nevatia, A Kembhavi. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021. Cited by 78.
Molmo and PixMo: Open weights and open data for state-of-the-art multimodal models. M Deitke, C Clark, S Lee, R Tripathi, Y Yang, JS Park, M Salehi, ... arXiv preprint arXiv:2409.17146, 2024. Cited by 75.
ExpertQA: Expert-curated questions and attributed answers. C Malaviya, S Lee, S Chen, E Sieber, M Yatskar, D Roth. arXiv preprint arXiv:2309.07852, 2023. Cited by 74.
Holodeck: Language guided generation of 3D embodied AI environments. Y Yang, FY Sun, L Weihs, E VanderBilt, A Herrasti, W Han, J Wu, N Haber, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024. Cited by 72.