Affect Recognition using Key Frame Selection based on Minimum Sparse Reconstruction


Kayaoglu M., Erdem Ç.

2015 ACM International Conference on Multimodal Interaction, Washington, United States, 9-13 November 2015, pp. 519-524

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1145/2818346.2830594
  • City of Publication: Washington
  • Country of Publication: United States
  • Page Numbers: pp. 519-524
  • Keywords: Emotion Recognition, EmotiW 2015 Challenge, Video Summarization, Affect Recognition, Affective Computing, Texture Classification, Emotion, Fusion
  • Marmara University Affiliated: No

Abstract

In this paper, we present the methods used for the Bahcesehir University team's submissions to the 2015 Emotion Recognition in the Wild Challenge. The challenge consists of categorical emotion recognition in short video clips extracted from movies, selected on the basis of emotional keywords in the subtitles. The clips mostly contain expressive faces (single or multiple) along with audio that carries the speech of the person in the clip as well as other human voices or background sounds/music. We use an audio-visual method based on video summarization by key frame selection. The key frame selection follows a minimum sparse reconstruction approach, with the goal of representing the original video as faithfully as possible. We extract LPQ features from the key frames and average them to obtain a single feature vector representing the video component of the clip. To capture the temporal variations of the facial expression, we also use LBP-TOP features extracted from the whole video. The audio features are extracted with the OpenSMILE or RASTA-PLP methods. Video and audio features are classified using SVM classifiers and fused at the score level. We tested eight different combinations of audio and visual features on the AFEW 5.0 (Acted Facial Expressions in the Wild) database provided by the challenge organizers. The best visual and audio-visual accuracies obtained on the test set are 45.1% and 49.9%, respectively, whereas the video-based baseline for the challenge is 39.3%.
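The key-frame step described in the abstract can be phrased as a row-sparse self-reconstruction problem: stack the per-frame feature vectors as columns of a matrix X and look for a coefficient matrix W with few nonzero rows such that X ≈ XW; the frames corresponding to the active rows then reconstruct (summarize) the whole clip. The sketch below is a minimal NumPy illustration of that idea under stated assumptions: the paper does not specify its solver or parameters, so a plain proximal gradient scheme with an L2,1 (group-sparse) penalty stands in, and the sparsity weight lam, iteration count, and number of key frames n_key are hypothetical values to tune.

```python
import numpy as np

def select_key_frames(X, lam=0.5, n_key=8, n_iter=200):
    """Pick representative frames by minimum sparse reconstruction.

    X      : (d, n) array; each column is the feature vector of one frame.
    lam    : sparsity weight (hypothetical value, tune per dataset).
    Solves   min_W 0.5 * ||X - X @ W||_F^2 + lam * sum_i ||W[i, :]||_2
    with proximal gradient descent; frames whose rows of W carry the
    most energy are returned as key frames.
    """
    n = X.shape[1]
    G = X.T @ X                          # Gram matrix of the frames, (n, n)
    step = 1.0 / np.linalg.norm(G, 2)    # 1 / Lipschitz constant of the gradient
    W = np.zeros((n, n))
    for _ in range(n_iter):
        V = W - step * (G @ W - G)       # gradient step on the reconstruction term
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
        W = shrink * V                   # row-wise soft threshold (prox of the L2,1 norm)
    row_energy = np.linalg.norm(W, axis=1)
    return np.argsort(row_energy)[::-1][:n_key]   # indices of the most representative frames
```

In this formulation, increasing lam drives more rows of W to zero, i.e. fewer key frames; the returned indices would then be the frames from which LPQ features are extracted and averaged.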
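The abstract also states that video and audio features are classified with SVMs and fused at the score level. A minimal scikit-learn sketch of that pipeline follows; the linear kernel, the probability-based scores, and the fusion weight w_video are assumptions for illustration, as the paper only specifies SVM classifiers and score-level fusion.

```python
import numpy as np
from sklearn.svm import SVC

def audio_visual_fusion(Xv_tr, Xa_tr, y_tr, Xv_te, Xa_te, w_video=0.6):
    """Train separate SVMs on video and audio features, then fuse
    their per-class scores with a weighted sum (score-level fusion)."""
    svm_video = SVC(kernel="linear", probability=True).fit(Xv_tr, y_tr)
    svm_audio = SVC(kernel="linear", probability=True).fit(Xa_tr, y_tr)
    fused = (w_video * svm_video.predict_proba(Xv_te)
             + (1.0 - w_video) * svm_audio.predict_proba(Xa_te))
    return svm_video.classes_[fused.argmax(axis=1)]   # predicted emotion labels
```

Because the fusion happens on the score matrices rather than on concatenated features, each modality keeps its own classifier, and the weight can be tuned on a validation split to trade off the visual and audio channels.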