Prosody-driven head-gesture animation


Sargin M. E., Erzin E., Yemez Y., Tekalp A. M., Erdem A. T., Erdem Ç., et al.

32nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hawaii, United States, 15-20 April 2007, pp. 677-678

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/icassp.2007.366326
  • City of Publication: Hawaii
  • Country of Publication: United States
  • Pages: pp. 677-678
  • Marmara University Affiliated: No

Abstract

We present a new framework for joint analysis of a speaker's head-gesture and speech-prosody patterns, aimed at automatic, realistic synthesis of head gestures from speech prosody. The proposed two-stage analysis "learns" both elementary prosody and head-gesture patterns for a particular speaker, as well as the correlations between these gesture and prosody patterns, from a training video sequence. The resulting audio-visual mapping model is then employed to synthesize natural head gestures from arbitrary input test speech, given a head model for the speaker. Objective and subjective evaluations indicate that the proposed synthesis-by-analysis scheme produces natural-looking head gestures for the speaker with any input test speech.
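The two-stage idea described above (first learn elementary prosody and head-gesture patterns, then learn how they co-occur, and finally drive synthesis from new speech) can be illustrated with a minimal sketch. The sketch below is not the authors' method: it substitutes k-means clustering and a simple co-occurrence table for the paper's pattern-learning and audio-visual mapping stages, and uses made-up pitch/energy and Euler-angle features purely to show the pipeline shape.

```python
# Minimal sketch of a prosody-to-head-gesture mapping pipeline in the spirit of the
# paper's two stages. Feature choices (pitch/energy prosody vectors, Euler-angle head
# pose), k-means as the pattern-learning step, and the co-occurrence mapping are
# illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Stage 1: learn elementary prosody and head-gesture patterns (k-means stand-in) ---
# Synthetic per-frame training data for one speaker: prosody features (e.g. f0, energy)
# and head pose (yaw, pitch, roll).
prosody_train = rng.normal(size=(2000, 2))   # [n_frames, n_prosody_features]
head_train = rng.normal(size=(2000, 3))      # [n_frames, 3 Euler angles]

n_prosody_patterns, n_gesture_patterns = 8, 6
prosody_km = KMeans(n_clusters=n_prosody_patterns, n_init=10, random_state=0).fit(prosody_train)
gesture_km = KMeans(n_clusters=n_gesture_patterns, n_init=10, random_state=0).fit(head_train)

# --- Stage 2: learn the correlation between the two pattern sets ---
# Count how often each prosody pattern co-occurs with each gesture pattern, giving a
# row-normalized table that acts as P(gesture pattern | prosody pattern).
joint = np.zeros((n_prosody_patterns, n_gesture_patterns))
for p, g in zip(prosody_km.labels_, gesture_km.labels_):
    joint[p, g] += 1
mapping = joint / joint.sum(axis=1, keepdims=True)

# --- Synthesis: map arbitrary test prosody to a head-gesture trajectory ---
def synthesize_head_motion(prosody_test):
    """For each test frame, pick the most likely gesture pattern given its prosody
    pattern and emit that pattern's centroid head pose (no temporal smoothing)."""
    p_idx = prosody_km.predict(prosody_test)
    g_idx = mapping[p_idx].argmax(axis=1)
    return gesture_km.cluster_centers_[g_idx]   # [n_frames, 3] Euler angles

head_motion = synthesize_head_motion(rng.normal(size=(100, 2)))
print(head_motion.shape)   # (100, 3)
```

The per-frame lookup here omits any temporal modeling or smoothing of the synthesized head trajectory, which the paper's pattern-based formulation is designed to capture; the sketch only conveys the analysis-then-mapping structure.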