Frédéric Elisei
CNRS, Gipsa-lab, Univ. Grenoble-Alpes
I reach faster when I see you look: gaze effects in human–human and human–robot face-to-face cooperation
JD Boucher, U Pattacini, A Lelong, G Bailly, F Elisei, S Fagel, PF Dominey, ...
Frontiers in neurorobotics 6, 3, 2012
Audiovisual speech synthesis
G Bailly, M Berar, F Elisei, M Odisio
International Journal of Speech Technology 6 (4), 331-346, 2003
Gaze, conversational agents and face-to-face communication
G Bailly, S Raidt, F Elisei
Speech Communication 52 (6), 598-612, 2010
Can you ‘read’ tongue movements? Evaluation of the contribution of tongue display to speech understanding
P Badin, Y Tarabalka, F Elisei, G Bailly
Speech Communication 52 (6), 493-503, 2010
LIPS2008: Visual speech synthesis challenge
BJ Theobald, S Fagel, G Bailly, F Elisei
Rackham: An interactive robot-guide
A Clodic, S Fleury, R Alami, R Chatila, G Bailly, L Brethes, M Cottret, ...
ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human …, 2006
An audiovisual talking head for augmented speech generation: models and animations based on a real speaker’s articulatory data
P Badin, F Elisei, G Bailly, Y Tarabalka
International Conference on Articulated Motion and Deformable Objects, 132-143, 2008
Analysis and synthesis of the three-dimensional movements of the head, face, and hand of a speaker using cued speech
G Gibert, G Bailly, D Beautemps, F Elisei, R Brun
The Journal of the Acoustical Society of America 118 (2), 1144-1153, 2005
Creating and controlling video-realistic talking heads.
F Elisei, M Odisio, G Bailly, P Badin
AVSP, 90-97, 2001
Visual articulatory feedback for phonetic correction in second language learning
P Badin, AB Youssef, G Bailly, F Elisei, T Hueber
Second Language Studies: Acquisition, Learning, Education and Technology, 2010
Statistical mapping between articulatory and acoustic data for an ultrasound-based silent speech interface
T Hueber, EL Benaroya, B Denby, G Chollet
Twelfth Annual Conference of the International Speech Communication Association, 2011
Degrees of freedom of facial movements in face-to-face conversational speech
G Bailly, F Elisei, P Badin, C Savariaux
Can you “read” tongue movements? Evaluation of the contribution of tongue display to speech understanding
Y Tarabalka, P Badin, F Elisei, G Bailly
Graphical models for social behavior modeling in face-to-face interaction
A Mihoub, G Bailly, C Wolf, F Elisei
Pattern Recognition Letters 74, 82-89, 2016
Lip-synching using speaker-specific articulation, shape and appearance models
G Bailly, O Govokhina, F Elisei, G Breton
EURASIP Journal on Audio, Speech, and Music Processing 2009, 1-11, 2009
Scrutinizing natural scenes: controlling the gaze of an embodied conversational agent
A Picot, G Bailly, F Elisei, S Raidt
International Workshop on Intelligent Virtual Agents, 272-282, 2007
Learning multimodal behavioral models for face-to-face social interaction
A Mihoub, G Bailly, C Wolf, F Elisei
Journal on Multimodal User Interfaces 9 (3), 195-210, 2015
Can you “read” tongue movements?
P Badin, Y Tarabalka, F Elisei, G Bailly
TELMA: Telephony for the hearing-impaired people. From models to user tests
D Beautemps, L Girin, N Aboutabit, G Bailly, L Besacier, G Breton, ...
Proceedings of ASSISTH, 201-208, 2007
Tracking talking faces with shape and appearance models
M Odisio, G Bailly, F Elisei
Speech Communication 44 (1-4), 63-82, 2004