Face recognition: Past, present and future (a review)


TAŞKIRAN M., KAHRAMAN N., EROĞLU ERDEM Ç.

DIGITAL SIGNAL PROCESSING, vol. 106, 2020 (SCI-Expanded)

  • Publication Type: Article / Review
  • Volume: 106
  • Publication Date: 2020
  • DOI: 10.1016/j.dsp.2020.102809
  • Journal Name: DIGITAL SIGNAL PROCESSING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Applied Science & Technology Source, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC
  • Keywords: Face recognition, Face identification, Facial dynamics, Image-based face recognition, Video-based face recognition, INDEPENDENT COMPONENT ANALYSIS, LINEAR DISCRIMINANT-ANALYSIS, SPARSE REPRESENTATION, FACIAL EXPRESSIONS, PERSON AUTHENTICATION, MOVING FACES, 3D, DATABASE, FEATURES, BINARY
  • Marmara University Affiliated: Yes

Abstract

Biometric systems aim to measure and analyze the unique physical or behavioral characteristics of an individual. Their defining feature is the use of bodily traits with distinctive characteristics. The literature includes biometric systems that rely on physiological features (fingerprint, iris, palm print, face, etc.) as well as systems that rely on behavioral characteristics (signature, gait, speech patterns, facial dynamics, etc.). Recently, the face has become one of the most preferred biometric modalities, since it generally does not require the cooperation of the user and can be captured without intruding on personal space. In this paper, we summarize the methods used in the literature to acquire and classify facial biometric data. We give a taxonomy of image-based and video-based face recognition methods, outline the major historical developments, and describe the main processing steps. Popular data sets that researchers have used for face recognition are also reviewed. We also cover recent deep-learning-based methods for face recognition and point out possible directions for future research. (C) 2020 Elsevier Inc. All rights reserved.
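The "main processing steps" mentioned above typically form a pipeline of face detection, alignment, feature embedding, and matching. The sketch below is only an illustration of that generic pipeline, not code from the reviewed paper: every function is a hypothetical stand-in (a fixed random projection plays the role of a trained deep embedding network, and detection/alignment are trivial placeholders).

```python
# Minimal, illustrative face recognition pipeline: detect -> align -> embed -> match.
# All components are hypothetical placeholders; a real system would use a trained
# face detector, landmark-based alignment, and a deep embedding network instead.
import numpy as np

def detect_face(image: np.ndarray) -> np.ndarray:
    """Placeholder detector: assume the whole image is the face region."""
    return image

def align_face(face: np.ndarray, size: int = 112) -> np.ndarray:
    """Placeholder alignment: crop to a fixed square size."""
    s = min(face.shape[0], face.shape[1], size)
    return face[:s, :s]

def embed_face(face: np.ndarray, dim: int = 128) -> np.ndarray:
    """Placeholder embedding: a fixed random projection of pixel intensities,
    standing in for a deep network's feature vector."""
    rng = np.random.default_rng(0)               # fixed seed -> same "model" every call
    proj = rng.standard_normal((dim, face.size))
    v = proj @ face.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)       # L2-normalize for cosine matching

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.5) -> str:
    """Match a probe image against enrolled identities by cosine similarity."""
    emb = embed_face(align_face(detect_face(probe)))
    best_id, best_sim = "unknown", threshold
    for identity, ref in gallery.items():
        sim = float(emb @ ref)                   # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Toy usage: enroll one synthetic "face" and identify a slightly noisy copy of it.
face_a = np.random.rand(112, 112)
gallery = {"person_a": embed_face(align_face(detect_face(face_a)))}
print(identify(face_a + 0.01 * np.random.rand(112, 112), gallery))  # -> "person_a"
```

Video-based methods, also covered in the review, extend this kind of per-frame matching by aggregating features or decisions over time.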