Humans have identified each other by their faces for thousands of years, but only recently has technology evolved to the point where machines can recognize human faces. Today, facial recognition is one of the most prominent biometric techniques, and in the past decade it has been implemented in products ranging from high-end security systems to smartphones. In this article, we put a spotlight on facial recognition systems.
1. What is Facial Recognition?
Facial recognition is a biometric technology that uses image sensing to identify people by quantifying and analyzing their facial characteristics. Live camera feeds or captured images are processed to detect and match faces. Some of the major features of facial recognition include:
- Human Body Detection: Human bodies are traced and identified in the image data.
- Face Direction Estimation: Estimates the direction of the faces.
- Gaze Estimation: Estimates the gaze and state of the detected faces.
- Blink Estimation: Estimates the blink degree for both eyes of the detected face.
- Age Estimation: Estimates the age of the detected face.
- Gender Estimation: Estimates the gender of the detected face.
- Expression Estimation: Estimates the facial expression, such as neutral, happiness, surprise, anger, or sadness.
2. Facial Recognition Systems
A typical facial recognition system includes:
- A camera, as a source of image or video for facial recognition
- Database or trained model for classification
- Algorithms to extract features and classify the captured image
A facial recognition system may implement different processes starting from image capture to identification, but the essential steps are basically the same. These steps include:
- Face Detection: The first step of facial recognition is to locate the face(s) present in the source image or video. An image to be processed usually contains other objects along with the face, so face detection algorithms discard these unwanted regions and extract a clean face image.
- Preprocessing: Depending on the application, face preprocessing includes: illumination corrections, blur and focus corrections, filtering, and noise removal. Some of the commonly used preprocessing techniques include Wavelet Transformation, Histogram Equalization, Discrete Cosine Transformation, and Color Normalization.
- Feature Extraction: A feature is a piece of information that describes an image or part of it. Feature extraction computes numerical values for the regions, objects, and shapes detected in an image. Extraction methods fall into two types: holistic and local. Holistic methods treat the face image as a whole, whereas local methods locate specific facial features, such as the eyes, nose, and mouth, and measure the distances between them.
- Feature Matching: Feature matching is the recognition process, where the feature vector produced by extraction is matched against classes (persons) of facial images already stored in a database. Matching algorithms range from nearest neighbor to advanced schemes such as neural networks. An identification application returns the matches, while a verification application returns whether the template matches or not.
- Face Databases: Sizeable databases of face images are needed to train and adequately test face recognition algorithms. Many standard image sets are available online for training models for mood, gaze, and other analyses. For facial recognition itself, users need to train the models with their own image sets so that the system can identify the enrolled faces.
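The matching step above can be sketched in a few lines. The example below is a minimal, hypothetical illustration: feature vectors are made-up numbers (in a real system they would come from a feature extractor), and the gallery, names, and distance threshold are illustrative assumptions. It shows both identification (nearest neighbor over the whole gallery) and verification (distance test against one claimed identity):

```python
import math

# Hypothetical in-memory "database": person name -> enrolled feature vector.
# Real vectors would come from a feature extractor (e.g. eigenface weights);
# these numbers are made up for illustration.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe):
    """Identification: return the closest enrolled person and the distance."""
    return min(((n, euclidean(probe, v)) for n, v in gallery.items()),
               key=lambda t: t[1])

def verify(probe, claimed_name, threshold=0.5):
    """Verification: accept the claimed identity only if close enough."""
    return euclidean(probe, gallery[claimed_name]) <= threshold

probe = [0.85, 0.15, 0.35]      # feature vector from a captured image
print(identify(probe))           # closest match in the gallery
print(verify(probe, "alice"))    # True/False for the claimed identity
```

Note how identification always returns the nearest class, while verification additionally requires the distance to fall under an application-chosen threshold.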
3. Face Detection and Feature Extraction Methods
Broadly speaking, face detection and recognition algorithms include the following:
- Global/Appearance-Based: An appearance-based method depends on statistical analysis and machine learning to find the relevant characteristics of a face image. Principal Component Analysis (PCA) extracts eigenfaces¹, represented by weight vectors derived from typical features of the images in the database. An unknown image is identified by finding the database image whose weights are closest to those of the captured image.
- Local/Feature-Based: In this technique, features are extracted using the size and relative position of essential parts of the face. Two methods are used: interest-point based and local appearance based. In interest-point-based methods, points of interest are detected first, and features localized to those points are then extracted. In local appearance-based methods, the face is divided into small regions from which local characteristics are extracted. Examples include Dynamic Link Architecture (DLA) and feature extraction with Gabor filters.
- Hybrid Approaches and Methods: Hybrid methods combine feature-based and appearance-based techniques, often together with statistical models.
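The appearance-based (eigenface) approach can be sketched as follows. This toy example uses made-up 16-pixel "images" for two subjects, derives eigenfaces from the SVD of the mean-centered training matrix, and recognizes a probe by nearest neighbor in weight space. All data, the number of components, and the labels are illustrative assumptions, not a real face dataset:

```python
import numpy as np

# Toy "face images": 6 flattened 4x4 images, two subjects with small
# variations. Real eigenfaces use many larger images.
rng = np.random.default_rng(0)
base_a = rng.random(16)
base_b = rng.random(16)
train = np.stack([base_a + 0.01 * rng.standard_normal(16) for _ in range(3)] +
                 [base_b + 0.01 * rng.standard_normal(16) for _ in range(3)])
labels = ["A", "A", "A", "B", "B", "B"]

# PCA: subtract the mean face, then take the top principal components
# ("eigenfaces") from the SVD of the centered training matrix.
mean_face = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean_face, full_matrices=False)
eigenfaces = vt[:2]                  # keep the 2 strongest components

def weights(img):
    """Project an image onto the eigenfaces to get its weight vector."""
    return eigenfaces @ (img - mean_face)

train_w = np.array([weights(img) for img in train])

def recognize(img):
    """Identify by the nearest training image in weight space."""
    d = np.linalg.norm(train_w - weights(img), axis=1)
    return labels[int(np.argmin(d))]

probe = base_a + 0.01 * rng.standard_normal(16)   # new image of subject A
print(recognize(probe))
```

The key idea is that matching happens in the low-dimensional weight space rather than on raw pixels, which is what makes the holistic approach tractable.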
4. Factors Affecting Facial Recognition
Some of the significant factors affecting facial recognition systems include:
- Lighting variations: Irregular illuminations can cause images to be brighter or darker, leading to a decrease in detection efficiency of the system. Normalization in the pre-processing stage helps to eliminate the irregularities.
- Age change: Facial characteristics such as skin texture and shape change as a person ages. Hence, facial recognition data used in identity documents requires periodic updating.
- Obstructions: Accessories such as sunglasses, hats, and scarves can decrease the efficiency of recognition and should be avoided.
- Image falsification: Recognition systems can be deceived by showing an image of the person to the camera. Even the 3-D techniques used to counter such spoofing sometimes fail to distinguish visually identical persons, such as identical twins.
- Noise: Noise introduced while capturing an image, such as from a faulty sensor, affects the efficiency of the recognition system.
- Blur: Motion and atmospheric blur are primary sources of blurred facial images and degrade recognition.
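Lighting variation is commonly mitigated with histogram equalization in the pre-processing stage. The sketch below applies the standard equalization formula to a tiny made-up grayscale patch whose values are crowded into a narrow dark band; a real system would operate on full face images:

```python
# Histogram equalization on a tiny grayscale "image" (values 0-255),
# a common pre-processing step to reduce lighting variation.
def equalize(pixels, levels=256):
    """Remap intensities so their cumulative distribution becomes uniform."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF) of the intensities.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization formula, scaled back to 0..levels-1.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# An under-exposed patch: all values crowded into a narrow dark band.
dark = [50, 52, 54, 56, 58, 60, 62, 64]
print(equalize(dark))   # intensities spread across the full 0-255 range
```

After equalization, the previously compressed intensity range spans the full scale, which reduces the influence of uniformly dark or bright capture conditions on later stages.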
5. Applications of Facial Recognition
Most facial recognition applications are no longer limited to security. Here are some examples:
- Criminal Identification: Facial recognition systems do not need co-operation from the subject, making them easy to use in public places to identify people with existing criminal records in a database. They can also be applied in access control systems in buildings and workplaces.
- Electronic ID by Government: IDs like an e-passport issued by government organizations across the globe use facial recognition.
- Retail Stores: In stores, facial recognition is used to collect consumer behavior (e.g., choice of product, areas visited, time spent, and satisfaction level at checkout). Demographic data such as age and gender, coupled with behavioral analytics, can be used to enhance sales and marketing.
- Healthcare: Facial recognition aided with AI and ML is used in the healthcare sector to monitor patients and improve medication adherence practices and pain detection.
- Device and Appliances: Smartphones have a face-unlock feature, making them more secure, as well as enhancing user experience. Personalized appliance behaviors are possible, for example, a coffee vending machine can store the preference of a user and can brew the perfect coffee for the user.
- Social Media and Internet: Major social media companies (e.g., Facebook) use facial recognition to automatically identify people in uploaded photos. Google Photos uses facial recognition to provide image-based search and to create albums for sharing with identified persons.
6. Privacy Concerns
There have always been apprehensions about violations of people's right to privacy when facial recognition systems are deployed. Strong data protection and security must therefore be an integral part of any implementation.
7. Facial Recognition Devices and Use Cases
Implementing and training a recognition model for identification is a complex task. Dedicated off-the-shelf hardware modules with pre-trained software packages make it possible to build a facial recognition system with any simple computing device.
OMRON has introduced Human Vision Components (commonly referred to as HVC), modular solutions for image sensing applications. The HVC incorporates OKAO, OMRON's image sensing technology for recognizing the conditions of people. OKAO is available as software IP (intellectual property) or hardware IP (ASIC), with libraries provided in compiled binary format to protect the IP.
The OKAO system is pre-trained with the facial expressions of more than one million faces, reducing the need for training. The software can process an image in a fraction of a second (<300 ms), making it suitable for real-time applications.
The B5T-007001-010 image sensor from Omron Electronic Components uses OKAO Vision image sensing technology. It is a 50°, long-distance (up to 17 m) human vision component suitable for various image sensing applications. The B5T-007001-020 is a wide-angle version of the module (covering +90° to -90° horizontally). The modules implement ten algorithms of the OKAO Vision image sensing technology, including face recognition, expression estimation, and hand detection. The device also reports a confidence value on a scale of 0 to 1000, indicating how reliable each recognition result is.
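A host application typically acts on such a 0 to 1000 confidence score by applying its own acceptance threshold. The helper below is a hypothetical sketch of that logic only; the threshold, user IDs, and function name are illustrative assumptions and are not taken from the B5T datasheet or command protocol:

```python
# Hypothetical helper for acting on a recognition result reported with a
# confidence score on a 0-1000 scale. The threshold is an assumed,
# application-chosen cut-off, not a value from the datasheet.
RECOGNITION_THRESHOLD = 500

def decide(user_id, confidence):
    """Accept a recognition only when the reported confidence is high enough."""
    if not 0 <= confidence <= 1000:
        raise ValueError("confidence must be on the 0-1000 scale")
    return user_id if confidence >= RECOGNITION_THRESHOLD else None

print(decide(3, 812))   # confident match -> accepted user id
print(decide(3, 120))   # low confidence -> None (treat as unknown)
```

Choosing the threshold trades off false accepts against false rejects, so it should be tuned per application rather than fixed globally.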
This device includes a camera and a separate main board connected via a flexible flat cable. The output image can be chosen from three options: no image output, 160x120 pixels, or 320x240 pixels. The module provides UART and USB communication interfaces. Figure 2 shows the block diagram of the B5T-007001 module together with an external host microcontroller or processor. The module comes with the required driver and demo software for testing with any PC, and Omron also provides C, Android, C#, and Python sample code for developing your own applications. More information about Human Vision Components is available from OMRON.
¹ Eigenfaces is the name given to a set of eigenvectors used in human facial recognition.