Srinivas Parthasarathy
Verified email at amazon.com
Title · Cited by · Year
MSP-IMPROV: An acted corpus of dyadic interactions to study emotion perception
C Busso, S Parthasarathy, A Burmania, M AbdelWahab, N Sadoughi, ...
IEEE Transactions on Affective Computing 8 (1), 67-80, 2016
Cited by 335 · 2016
Jointly Predicting Arousal, Valence and Dominance with Multi-Task Learning
S Parthasarathy, C Busso
Interspeech 2017, 1103-1107, 2017
Cited by 147 · 2017
Increasing the reliability of crowdsourcing evaluations using online quality assessment
A Burmania, S Parthasarathy, C Busso
IEEE Transactions on Affective Computing 7 (4), 374-388, 2015
Cited by 125 · 2015
Semi-supervised speech emotion recognition with ladder networks
S Parthasarathy, C Busso
IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2697-2709, 2020
Cited by 98 · 2020
Ladder networks for emotion recognition: Using unsupervised auxiliary tasks to improve predictions of emotional attributes
S Parthasarathy, C Busso
arXiv preprint arXiv:1804.10816, 2018
Cited by 65 · 2018
Training strategies to handle missing modalities for audio-visual expression recognition
S Parthasarathy, S Sundaram
Companion Publication of the 2020 International Conference on Multimodal …, 2020
Cited by 49 · 2020
Self-supervised learning with cross-modal transformers for emotion recognition
A Khare, S Parthasarathy, S Sundaram
2021 IEEE Spoken Language Technology Workshop (SLT), 381-388, 2021
Cited by 45 · 2021
Multiresolution and multimodal speech recognition with transformers
G Paraskevopoulos, S Parthasarathy, A Khare, S Sundaram
arXiv preprint arXiv:2004.14840, 2020
Cited by 43 · 2020
Convolutional neural network techniques for speech emotion recognition
S Parthasarathy, I Tashev
2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC …, 2018
Cited by 35 · 2018
Using agreement on direction of change to build rank-based emotion classifiers
S Parthasarathy, R Cowie, C Busso
IEEE/ACM Transactions on Audio, Speech, and Language Processing 24 (11 …, 2016
Cited by 32 · 2016
A study of speaker verification performance with expressive speech
S Parthasarathy, C Zhang, JHL Hansen, C Busso
Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International …, 2017
Cited by 31 · 2017
Ranking emotional attributes with deep neural networks
S Parthasarathy, R Lotfian, C Busso
Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International …, 2017
Cited by 28 · 2017
Role of regularization in the prediction of valence from speech
K Sridhar, S Parthasarathy, C Busso
Interspeech 2018, 2018
Cited by 26 · 2018
Detecting expressions with multimodal transformers
S Parthasarathy, S Sundaram
2021 IEEE Spoken Language Technology Workshop (SLT), 636-643, 2021
Cited by 25 · 2021
Improving emotion classification through variational inference of latent variables
S Parthasarathy, V Rozgic, M Sun, C Wang
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 22 · 2019
Predicting speaker recognition reliability by considering emotional content
S Parthasarathy, C Busso
2017 Seventh International Conference on Affective Computing and Intelligent …, 2017
Cited by 20 · 2017
Defining emotionally salient regions using qualitative agreement method
S Parthasarathy, C Busso
Interspeech 2016, 3598-3602, 2016
Cited by 20 · 2016
Multi-modal embeddings using multi-task learning for emotion recognition
A Khare, S Parthasarathy, S Sundaram
arXiv preprint arXiv:2009.05019, 2020
Cited by 19 · 2020
Predicting emotionally salient regions using qualitative agreement of deep neural network regressors
S Parthasarathy, C Busso
IEEE Transactions on Affective Computing 12 (2), 402-416, 2018
Cited by 18 · 2018
Preference-learning with qualitative agreement for sentence level emotional annotations
S Parthasarathy, C Busso
Interspeech 2018, 2018
Cited by 17 · 2018
Articles 1–20