A discussion of liveness detection for face recognition: fusing multiple physical features with a classifier algorithm

As face recognition, face unlock, and related technologies are widely used in daily life, in areas such as finance, access control, attendance, and person-to-ID verification, face anti-spoofing (liveness detection) technology has received more and more attention in recent years.

In simple terms, liveness detection determines whether the face captured by an imaging device (a camera, mobile phone, etc.) comes from a real, live face or from some form of attack or disguise. Such attacks mainly include photo attacks (both paper photos and photos displayed on electronic devices such as phones and tablets), video replay attacks, and mask attacks.

Liveness detection can be performed on an ordinary color (RGB) camera, on an infrared camera, or on a 3D depth camera. The latter two are relatively easy to implement, so here we mainly discuss liveness detection on a common RGB camera.

Earlier action-based liveness detection offers high security, but it requires the user to perform several actions, so the experience is poor. Current liveness detection requires no user action and is known as silent liveness detection. In addition, liveness detection must run in real time: recognition should complete within 1 second, and preferably within 300 milliseconds.

Current mainstream liveness recognition algorithms basically fall into two types. The first type takes a specific physical feature, or a fusion of multiple physical features, and trains a classifier (often with deep learning) to distinguish a live face from an attack (or to identify the form of attack). The second type uses a convolutional neural network (CNN) to extract features directly from the RGB image (or from the image converted to another color space), followed by a classifier that separates live from non-live. To exploit information from multiple frames over time rather than a single frame, an RNN can also be used in combination. The CNN approach can achieve good results, but it is usually time-consuming and cannot meet the real-time requirements of embedded devices in practical applications. Moreover, when physical features are chosen and used appropriately, the first approach can match or even surpass the CNN approach.
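As an illustration of the second, CNN-based family of approaches, the sketch below defines a small PyTorch network that classifies a single RGB face crop as live or spoof. The architecture, input size, and class layout are illustrative assumptions, not the network described in this article.

```python
# Minimal sketch: a tiny CNN that scores one RGB face crop as live vs. spoof.
# Architecture and 112x112 input size are illustrative assumptions.
import torch
import torch.nn as nn

class TinyLivenessCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 56 -> 28
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)       # (N, 64) embedding
        return self.classifier(feats)             # logits: [live, spoof]

if __name__ == "__main__":
    model = TinyLivenessCNN()
    face = torch.rand(1, 3, 112, 112)             # a normalized RGB face crop
    print(model(face).softmax(dim=1))             # live / spoof probabilities
```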

The physical features used in liveness detection mainly fall into texture features, color features, spectral features, motion features, image quality features, and so on; heartbeat features are another option. Among texture features there are many choices, but the most mainstream are LBP, HOG, LPQ, and the like. Beyond RGB color features, researchers have found that the HSV and YCbCr color spaces separate live from non-live faces better and are widely used, often in combination with texture features. Spectral features rely on the principle that live and non-live faces respond differently in certain frequency bands. Motion features, which capture how the target changes over time, are effective, but extracting them usually takes too long to meet real-time requirements. Image quality features can be described in many ways, for example through reflection, scattering, edges, or shape.
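To make the feature families above concrete, the following sketch extracts two of them for a single face crop: a uniform-LBP texture histogram and per-channel color histograms in the HSV and YCrCb (OpenCV's name for YCbCr) spaces, using OpenCV and scikit-image. The LBP parameters, bin counts, and channel choices are illustrative assumptions, not our production configuration.

```python
# Minimal sketch: hand-crafted texture and color features for one face crop.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP codes, summarized as a normalized histogram."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2                      # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def color_histogram(img_bgr: np.ndarray, code: int, bins: int = 16) -> np.ndarray:
    """Per-channel histogram in the requested color space, concatenated."""
    converted = cv2.cvtColor(img_bgr, code)
    hists = [np.histogram(converted[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def handcrafted_features(face_bgr: np.ndarray) -> np.ndarray:
    """Concatenate texture and color descriptors into one feature vector."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    return np.concatenate([
        lbp_histogram(gray),                               # texture (LBP)
        color_histogram(face_bgr, cv2.COLOR_BGR2HSV),      # HSV color
        color_histogram(face_bgr, cv2.COLOR_BGR2YCrCb),    # YCbCr color
    ])
```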

After comparing various algorithms, our liveness detection adopts a fusion of multiple physical features combined with a classifier. Our advantages are reflected in the following aspects.

1) Our physical features are selected from several of the categories described above, covering texture features, color features, spectral features, image quality features, and, optionally, motion features.

2) We studied several fusion strategies: i) fusion at the feature level; ii) fusion inside an auto-encoder; iii) fusion at the classification-score level. We chose the strategy with the best accuracy, usability, and speed; two of these strategies are sketched below.
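The sketch that follows illustrates two of the fusion strategies listed in point 2): feature-level fusion by concatenating descriptors, and score-level fusion by weighted-averaging the live probabilities of per-feature classifiers. The use of scikit-learn SVMs and equal weights here is an illustrative assumption, not our chosen configuration.

```python
# Minimal sketch: feature-level vs. score-level fusion over hypothetical feature blocks.
import numpy as np
from sklearn.svm import SVC

def feature_level_fusion(feature_blocks: list[np.ndarray]) -> np.ndarray:
    """i) Fuse by concatenating all descriptors into one vector per sample."""
    return np.concatenate(feature_blocks, axis=1)

def score_level_fusion(classifiers: list[SVC],
                       feature_blocks: list[np.ndarray],
                       weights: list[float]) -> np.ndarray:
    """iii) Fuse by weighted-averaging each classifier's live probability.
    Each SVC is assumed to have been trained with probability=True."""
    scores = [clf.predict_proba(feats)[:, 1]
              for clf, feats in zip(classifiers, feature_blocks)]
    return np.average(np.stack(scores), axis=0, weights=weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)
    # Two hypothetical feature blocks (e.g. a texture and a color descriptor).
    blocks = [rng.normal(size=(200, 10)), rng.normal(size=(200, 48))]
    clfs = [SVC(probability=True).fit(b, labels) for b in blocks]
    print(score_level_fusion(clfs, blocks, weights=[0.5, 0.5]).shape)
    print(feature_level_fusion(blocks).shape)
```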

3) The classifier we use goes beyond the traditional support vector machine (SVM), adopting recent deep learning techniques such as center loss (sketched below).
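For reference, the following sketch shows the center-loss term mentioned in point 3) in its commonly used form: each embedding is pulled toward a learnable per-class center, and the term is added to the usual softmax cross-entropy. The embedding size and loss weight are illustrative assumptions, not our trained model's settings.

```python
# Minimal sketch: a center-loss term for live/spoof embeddings.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes: int = 2, feat_dim: int = 64):
        super().__init__()
        # One learnable center per class, same dimension as the embedding.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean squared distance between each embedding and its class center.
        diff = embeddings - self.centers[labels]
        return 0.5 * diff.pow(2).sum(dim=1).mean()

# Usage sketch: total loss = cross-entropy on the logits + lambda * center loss
#   ce = nn.CrossEntropyLoss()(logits, labels)
#   total = ce + 0.003 * CenterLoss()(embeddings, labels)   # lambda is a tunable weight
```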

After optimization, the algorithm effectively captures the different responses of live and non-live faces on a variety of cameras. In our tests on cameras and mobile phones from multiple mainstream manufacturers, the recognition accuracy exceeds 99%, while real-time performance remains excellent, with recognition completed within 300 milliseconds.

Of course, generalization is a challenge for all liveness recognition; it requires collecting massive training data that covers the main mobile phone and camera models on the market. We hope that developers using our SDK will share the live and non-live photos they can obtain, so that we can further improve the accuracy of our algorithm.

Our face recognition and liveness detection SDK can be downloaded and used for free on our AI open platform: ai.deepcam.cn/#/home. Everyone is welcome to use it.