‘CharacterLipSync’, a deep learning system generating real-time lip-sync for live 2-D animation

Two researchers at Adobe Research and the University of Washington recently published a paper introducing a deep learning-based system that generates live lip sync for layered 2D animated characters. The system uses a long short-term memory (LSTM) model to map streaming audio to mouth shapes with low enough latency for live performance.

According to the paper, “Our system takes streaming audio as input and produces viseme sequences with less than 200ms of latency (including processing time).”
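The core idea — an LSTM consuming streaming audio features frame by frame and emitting one viseme (mouth-shape class) per frame — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the feature size, hidden size, and number of viseme classes are assumed for illustration, the weights are random (untrained), and real input would be audio features such as MFCCs rather than random vectors.

```python
import numpy as np

# Hypothetical sizes, chosen for illustration (not taken from the paper):
# 26 audio features per frame, 64 hidden units, 12 viseme classes.
N_FEATURES, HIDDEN, N_VISEMES = 26, 64, 12

rng = np.random.default_rng(0)

# LSTM parameters for the four gates (input, forget, cell, output), stacked.
W = rng.standard_normal((4 * HIDDEN, N_FEATURES + HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)
# Output projection from the hidden state to viseme logits.
W_out = rng.standard_normal((N_VISEMES, HIDDEN)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM step: consume one audio-feature frame, update the state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)              # gate pre-activations
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def stream_visemes(frames):
    """Map a stream of feature frames to one viseme id per frame.

    Because the LSTM runs strictly left-to-right, each viseme can be
    emitted as soon as its frame arrives — the property that makes
    streaming, low-latency lip sync possible.
    """
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    viseme_ids = []
    for x in frames:
        h, c = lstm_step(x, h, c)
        viseme_ids.append(int(np.argmax(W_out @ h)))
    return viseme_ids

# Feed 10 random stand-in "audio" frames through the untrained model.
frames = rng.standard_normal((10, N_FEATURES))
viseme_ids = stream_visemes(frames)
print(len(viseme_ids))  # one viseme id per input frame
```

The per-frame, unidirectional processing is what keeps latency bounded: unlike a bidirectional model, nothing here waits for future audio.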

The results of ‘CharacterLipSync’ were impressive enough that Adobe integrated parts of the system into Adobe Character Animator.


Paper: https://arxiv.org/pdf/1910.08685.pdf

Github: https://github.com/deepalianeja/CharacterLipSync

Image Source: https://github.com/deepalianeja/CharacterLipSync

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
