Liveness Engine 1.0
The Liveness Engine is a software library designed to verify that a user in front of a camera is a live person, thereby preventing spoofing attacks that use photos or videos. It functions as a challenge-response system, periodically instructing the user to perform simple actions such as blinking or turning their head. The engine then analyzes the video feed to confirm these actions were completed successfully.
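The challenge-response flow can be pictured as a per-frame check against the active challenge. The sketch below is purely illustrative and does not use the engine's API: the Challenge and FaceState types, their field names, and the thresholds are all assumptions introduced for this example.

```cpp
#include <iostream>

// Hypothetical per-frame measurements; in the real engine these come from
// the gaze and pose estimation components described below.
struct FaceState {
    double eyeOpenness;  // assumed convention: 0 = fully closed, 1 = fully open
    double yawDegrees;   // head rotation around the vertical axis
};

// Hypothetical challenge types mirroring the actions mentioned above.
enum class Challenge { Blink, TurnLeft, TurnRight };

// True once the current frame satisfies the active challenge. A production
// verifier would also check the temporal pattern (e.g. eyes closing and then
// reopening within a time window), not just a single frame.
bool challengeSatisfied(Challenge c, const FaceState& s) {
    switch (c) {
        case Challenge::Blink:     return s.eyeOpenness < 0.2;   // eyes shut
        case Challenge::TurnLeft:  return s.yawDegrees  < -20.0;
        case Challenge::TurnRight: return s.yawDegrees  >  20.0;
    }
    return false;
}

int main() {
    FaceState frame{0.1, 3.0};  // a frame in which the eyes are nearly closed
    std::cout << std::boolalpha
              << challengeSatisfied(Challenge::Blink, frame) << "\n";  // true
}
```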
The Liveness Engine heavily relies on gaze and pose estimation components to monitor the user's actions.
core::face::FaceDetectorYunet: The first step in the pipeline is to reliably detect a face in the video stream using this robust DNN-based detector.
core::face::FaceMesh: Once a face is found, this class generates a detailed mesh of facial landmarks. These landmarks are the essential input for the head pose and eye state estimation modules.
core::filter::PoseKalmanFilter: To ensure that the head pose tracking is smooth and less susceptible to noise, a Kalman filter is used. This provides a more stable stream of orientation data for verifying head-turning challenges.
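Since the FaceMesh landmarks are what drive head pose estimation, it may help to see how 2D landmarks can be turned into a head orientation at all. The sketch below is not this library's implementation; it uses OpenCV's cv::solvePnP with a generic six-point 3D face model, and the model coordinates, landmark values, and camera intrinsics are all illustrative assumptions.

```cpp
#include <iostream>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Recover a head orientation (axis-angle rotation vector) from six 2D facial
// landmarks. The landmark order must match the 3D model points below.
cv::Vec3d estimateHeadPose(const std::vector<cv::Point2d>& landmarks2d,
                           double focalLength, cv::Point2d imageCenter) {
    // Generic 3D face model (nose tip, chin, eye outer corners, mouth corners)
    // in arbitrary model units; illustrative values, not the engine's model.
    std::vector<cv::Point3d> model3d = {
        {   0.0,    0.0,    0.0},   // nose tip
        {   0.0, -330.0,  -65.0},   // chin
        {-225.0,  170.0, -135.0},   // left eye outer corner
        { 225.0,  170.0, -135.0},   // right eye outer corner
        {-150.0, -150.0, -125.0},   // left mouth corner
        { 150.0, -150.0, -125.0},   // right mouth corner
    };

    // Pinhole camera approximation with no lens distortion.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        focalLength, 0.0,         imageCenter.x,
        0.0,         focalLength, imageCenter.y,
        0.0,         0.0,         1.0);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);

    cv::Mat rvec, tvec;
    cv::solvePnP(model3d, landmarks2d, cameraMatrix, distCoeffs, rvec, tvec);
    return {rvec.at<double>(0), rvec.at<double>(1), rvec.at<double>(2)};
}

int main() {
    // Made-up landmark positions for a roughly frontal face in a 640x480 frame.
    std::vector<cv::Point2d> landmarks = {
        {320, 260}, {320, 380}, {240, 200}, {400, 200}, {270, 320}, {370, 320}};
    cv::Vec3d rotation = estimateHeadPose(landmarks, 640.0, {320.0, 240.0});
    std::cout << "rotation vector: " << rotation << "\n";
}
```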
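The benefit of core::filter::PoseKalmanFilter can be illustrated with a minimal one-dimensional filter that smooths a noisy yaw reading. This is a generic constant-position Kalman filter, not the class's actual state model or interface; the class name, noise values, and sample data below are assumptions.

```cpp
#include <iostream>

// Minimal 1-D Kalman filter smoothing a single angle (e.g. yaw in degrees).
// Constant-position model: the prediction is the previous estimate, with some
// process noise so the filter can still follow a genuine head turn.
class ScalarKalman {
public:
    ScalarKalman(double processNoise, double measurementNoise)
        : q_(processNoise), r_(measurementNoise) {}

    double update(double measurement) {
        if (!initialized_) {            // first sample initializes the state
            x_ = measurement;
            p_ = r_;
            initialized_ = true;
            return x_;
        }
        p_ += q_;                       // predict: uncertainty grows
        double k = p_ / (p_ + r_);      // Kalman gain
        x_ += k * (measurement - x_);   // correct toward the new measurement
        p_ *= (1.0 - k);                // updated uncertainty
        return x_;
    }

private:
    double q_, r_;
    double x_ = 0.0, p_ = 0.0;
    bool initialized_ = false;
};

int main() {
    ScalarKalman yawFilter(0.05, 4.0);
    const double noisyYaw[] = {1.2, -0.8, 0.5, 14.0, 18.5, 21.0};  // head turn begins
    for (double z : noisyYaw)
        std::cout << yawFilter.update(z) << "\n";  // smoothed yaw estimates
}
```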
LivenessEngine::init(resourcePath) initializes the engine and all its dependencies, including the face detector and gaze estimation modules.
LivenessEngine::process(input, output) is the main loop, where the engine analyzes the current frame, updates the user's facial state (head pose, eye openness), checks whether that state matches the current challenge, and provides visual feedback.
gaze::GazeEstimation::process(...) is called within the main loop to get the latest analysis of the user's head and eye movements.
FaceDetectorYunet::process(frame) detects the user's face in each frame.
FaceMesh::process(frame, detections) provides the detailed landmark points required by the gaze and eye estimators.

The project files are organized in the C:/Projects/Engine/AntalEngine/Engine directory. Key header files include:
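Putting the calls listed above together, a typical integration might look like the sketch below. Only LivenessEngine::init(resourcePath) and LivenessEngine::process(input, output) are taken from this page; the include path, the resource path value, the LivenessOutput type with its annotatedFrame field, and the OpenCV camera handling are assumptions made for illustration.

```cpp
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include "LivenessEngine.h"  // assumed header name; not listed on this page

int main() {
    LivenessEngine engine;
    // Initializes the engine and its dependencies (face detector, face mesh,
    // gaze estimation, pose filter). The resource path is a placeholder.
    engine.init("path/to/resources");

    cv::VideoCapture camera(0);
    cv::Mat frame;
    while (camera.read(frame)) {
        // process() analyzes the frame, updates head pose and eye openness,
        // checks the active challenge, and fills `output` with feedback.
        // LivenessOutput and its annotatedFrame field are assumed names.
        LivenessOutput output;
        engine.process(frame, output);

        cv::imshow("Liveness", output.annotatedFrame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

Per the descriptions above, each process(input, output) call internally runs the FaceDetectorYunet, FaceMesh, and gaze::GazeEstimation steps, so the integration code only needs to supply frames and act on the result.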