Parishad BehnamGhader
Verified email at mail.mcgill.ca - Homepage
Title
Cited by
Year
LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
P BehnamGhader, V Adlakha, M Mosbach, D Bahdanau, N Chapados, ...
First Conference on Language Modeling (COLM 2024), 2024
157 · 2024
Evaluating correctness and faithfulness of instruction-following models for question answering
V Adlakha, P BehnamGhader, XH Lu, N Meade, S Reddy
Transactions of the Association for Computational Linguistics 12, 775-793, 2024
128 · 2024
Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model
P BehnamGhader, S Miret, S Reddy
EMNLP 2023 Findings, 2023
28 · 2023
An Analysis of Social Biases Present in BERT Variants Across Multiple Languages
P BehnamGhader, A Milios
Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022, 2022
8* · 2022
LLM2Vec: Large language models are secretly powerful text encoders, 2024
P BehnamGhader, V Adlakha, M Mosbach, D Bahdanau, N Chapados, ...
URL https://arxiv.org/abs/2404.05961
5
MG-BERT: Multi-graph augmented BERT for masked language modeling
P BehnamGhader, H Zakerinia, MS Baghshah
Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural …, 2021
2 · 2021
Exploiting Instruction-Following Retrievers for Malicious Information Retrieval
P BehnamGhader, N Meade, S Reddy
arXiv preprint arXiv:2503.08644, 2025
2025
Articles 1–7