21st Signal Processing and Communications Applications Conference (SIU), Cyprus, 24-26 April 2013
In order to design algorithms for affect recognition from facial expressions and speech, audio-visual databases are needed. The affective databases used by researchers today are generally recorded in laboratory environments and contain acted expressions. In this work, we present a method for extracting audio-visual facial clips from movies. The database collected with the proposed method contains English and Turkish clips and can easily be extended to other languages. We also report facial expression recognition results obtained with local phase quantization (LPQ) based feature extraction and a support vector machine (SVM) classifier. Because the number of features is much larger than the number of examples, the recognition accuracy improves significantly when feature selection is also performed.
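The abstract does not give implementation details, so the following is only a rough sketch of the kind of LPQ descriptor it refers to: the phase signs of four low-frequency short-time Fourier transform coefficients in each local window are packed into an 8-bit code, and the codes are accumulated into a 256-bin histogram. The window size, the choice of frequency points, and the omission of the standard decorrelation step are all simplifying assumptions of this sketch, not specifics from the paper.

```python
import numpy as np

def lpq_descriptor(img, win=7):
    # Simplified LPQ sketch: quantize the signs of the real and imaginary
    # parts of four low-frequency STFT coefficients per window into an
    # 8-bit code, then histogram the codes over the image.
    # NOTE: full LPQ also decorrelates the coefficients before
    # quantization; that step is omitted here for brevity.
    a = 1.0 / win
    r = np.arange(win) - win // 2                    # window coordinates
    w0 = np.ones(win, dtype=complex)                 # DC basis vector
    w1 = np.exp(-2j * np.pi * a * r)                 # frequency-a basis vector
    # Four frequency points [a,0], [0,a], [a,a], [a,-a] as separable filters
    freqs = [(w0, w1), (w1, w0), (w1, w1), (w1, np.conj(w1))]
    # All win-by-win patches of the image (requires numpy >= 1.20)
    patches = np.lib.stride_tricks.sliding_window_view(img, (win, win))
    code = np.zeros(patches.shape[:2], dtype=int)
    bit = 0
    for wy, wx in freqs:
        # STFT coefficient at this frequency for every window position
        F = np.einsum('ijkl,k,l->ij', patches, wy, wx)
        code += (F.real >= 0).astype(int) << bit
        code += (F.imag >= 0).astype(int) << (bit + 1)
        bit += 2
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                         # normalized 256-bin histogram
```

In a setup like the one described, such histograms (typically concatenated over several face regions) would be fed to an SVM; since the resulting feature vector easily exceeds the number of training examples, a feature selection stage of the kind the abstract mentions becomes important.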