Iris tracking enables many applications, such as hands-free interfaces for assistive technologies and understanding user behavior beyond clicks and gestures. It is also a challenging computer vision problem. The main challenge is that eyes can look very different under varying lighting conditions, may be occluded by hair, and their apparent shape changes with the head's angle of rotation and the person's expression. Existing solutions rely heavily on specialized hardware, such as an expensive headset or a remote eye-tracker system. Because mobile devices have limited computing resources, these approaches are impractical for mobile use.
In March 2020, TensorFlow announced a package for detecting facial landmarks in the browser. Recently, it added a new feature to this package: iris tracking, via the TensorFlow.js face landmarks detection model.
Introduction to Face Landmarks Detection
The Face Landmarks Detection model is built on the MediaPipe iris model, which can track landmarks for the iris and pupil in real time with a single RGB camera, without any specialized hardware. The model also returns landmarks for the eyelid and eyebrow regions, enabling the detection of subtle eye movements such as blinking.
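Because the model returns eyelid landmarks, blinking can be detected by comparing the eye's vertical opening to its width, a standard heuristic known as the eye aspect ratio (EAR). The sketch below is illustrative only: the 6-point eye layout and threshold are assumptions, not the package's actual output format.

```javascript
// Hedged sketch: blink detection from eyelid landmarks via the
// eye aspect ratio (EAR) heuristic. The landmark layout here is
// hypothetical; check the package docs for the real indices.

// Euclidean distance between two [x, y] points.
const dist = (a, b) => Math.hypot(a[0] - b[0], a[1] - b[1]);

// EAR = average vertical opening / horizontal width.
// `eye` is six [x, y] points: outer corner, two upper-lid points,
// inner corner, two lower-lid points (a common 6-point convention).
function eyeAspectRatio([p0, p1, p2, p3, p4, p5]) {
  const vertical = (dist(p1, p5) + dist(p2, p4)) / 2;
  const horizontal = dist(p0, p3);
  return vertical / horizontal;
}

// A closed eye collapses vertically, so EAR drops toward zero.
const EAR_BLINK_THRESHOLD = 0.2; // tune per camera and face size
const isBlinking = (eye) => eyeAspectRatio(eye) < EAR_BLINK_THRESHOLD;

// Example with synthetic landmarks: an open eye...
const openEye = [[0, 5], [10, 0], [20, 0], [30, 5], [20, 10], [10, 10]];
// ...and a nearly closed one.
const closedEye = [[0, 5], [10, 4], [20, 4], [30, 5], [20, 6], [10, 6]];
console.log(isBlinking(openEye));   // false
console.log(isBlinking(closedEye)); // true
```

In practice the threshold would be calibrated per user, since eye proportions and camera geometry vary.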
Major improvements offered by FaceLandmarksDetection are:
- Iris key points detection
- Improved eyelid contour detection
- Improved detection for rotated faces
These improvements are illustrated in the comparison GIF below:
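The iris keypoints can be reduced to a center and a pixel diameter, which is useful for gaze estimation and the depth estimation discussed later. The sketch below assumes a 5-point layout (center plus four boundary points); treat that layout as an assumption and verify it against the package's actual output.

```javascript
// Hedged sketch: derive an iris center and pixel diameter from iris
// keypoints. The 5-point layout (index 0 = center, then 4 boundary
// points) is an assumption about the model output, not a documented API.

function irisGeometry(keypoints) {
  // Average all points for a robust center estimate.
  const cx = keypoints.reduce((s, p) => s + p[0], 0) / keypoints.length;
  const cy = keypoints.reduce((s, p) => s + p[1], 0) / keypoints.length;
  // Diameter ≈ twice the mean boundary-to-center distance.
  const boundary = keypoints.slice(1); // assume index 0 is the center
  const meanRadius = boundary.reduce(
    (s, [x, y]) => s + Math.hypot(x - cx, y - cy), 0) / boundary.length;
  return { center: [cx, cy], diameter: 2 * meanRadius };
}

// Synthetic iris: center at (100, 50), boundary points 6 px away.
const iris = [[100, 50], [106, 50], [94, 50], [100, 56], [100, 44]];
const { center, diameter } = irisGeometry(iris);
console.log(center, diameter); // [100, 50] 12
```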
Face Landmarks Detection is a lightweight package, shipping only ~3 MB of weights, which makes it well suited to real-time inference on most mobile devices. Further, the TensorFlow.js and MediaPipe teams plan to add depth estimation capabilities to the model, using the improved iris coordinates. In short, Face Landmarks Detection is an improved model with iris tracking, with more developments planned.
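The planned depth estimation can be understood through a simple pinhole-camera model: the human iris has a nearly constant physical diameter of roughly 11.7 mm across people, so its apparent size in pixels reveals the distance to the camera. A minimal sketch, assuming the focal length in pixels is known (e.g., from camera calibration):

```javascript
// Hedged sketch of iris-based depth estimation with a pinhole model.
// The human iris diameter is roughly constant (~11.7 mm), which lets
// a single RGB camera recover an approximate metric distance.

const IRIS_DIAMETER_MM = 11.7; // approximate anatomical constant

// focalLengthPx: camera focal length in pixels (assumed known here).
// irisDiameterPx: iris diameter measured in the image, in pixels.
// Returns the estimated camera-to-eye distance in millimeters.
function estimateDepthMm(focalLengthPx, irisDiameterPx) {
  return (focalLengthPx * IRIS_DIAMETER_MM) / irisDiameterPx;
}

// Example: with a 1000 px focal length, an iris spanning 23.4 px
// implies the eye is about half a meter from the camera.
console.log(estimateDepthMm(1000, 23.4)); // 500
```

This is why accurate iris coordinates matter: a small error in the measured pixel diameter translates directly into a proportional error in the depth estimate.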
Demo: Use this link to try this new package in your web browser.