‘CharacterLipSync’, a deep learning system generating real-time lip-sync for live 2-D animation

Image Source: Paper https://arxiv.org/pdf/1910.08685.pdf

Two researchers at Adobe Research and the University of Washington recently published a paper introducing a deep learning-based system that creates live lip sync for 2D animated characters. The system uses a long short-term memory (LSTM) model to generate lip sync in real time for layered 2D characters.

According to the paper, “Our system takes streaming audio as input and produces viseme sequences with less than 200ms of latency (including processing time).”
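To make the streaming setup concrete, here is a minimal sketch of the idea: an LSTM cell consumes one audio-feature frame at a time and emits viseme logits immediately, which is what keeps latency low. This is an illustration only; the feature dimensions, viseme inventory, and architecture details below are assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical viseme inventory (the paper's actual viseme set differs).
VISEMES = ["sil", "Ah", "Ee", "Oh", "M", "F", "L", "S"]

rng = np.random.default_rng(0)
FEAT_DIM, HIDDEN = 13, 32  # e.g. 13 MFCCs per audio frame (an assumption)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-layer LSTM mapping one audio-feature frame to viseme logits.

    Processing frames one at a time, rather than a whole utterance,
    is what makes streaming, low-latency prediction possible.
    """

    def __init__(self, feat_dim, hidden, n_visemes):
        # Fused weights for the input, forget, cell, and output gates.
        self.W = rng.standard_normal((4 * hidden, feat_dim + hidden)) * 0.1
        self.b = np.zeros(4 * hidden)
        self.Wout = rng.standard_normal((n_visemes, hidden)) * 0.1
        self.h = np.zeros(hidden)  # hidden state carried between frames
        self.c = np.zeros(hidden)  # cell state carried between frames

    def step(self, x):
        z = self.W @ np.concatenate([x, self.h]) + self.b
        i, f, g, o = np.split(z, 4)
        self.c = sigmoid(f) * self.c + sigmoid(i) * np.tanh(g)
        self.h = sigmoid(o) * np.tanh(self.c)
        return self.Wout @ self.h  # viseme logits for this frame

model = TinyLSTM(FEAT_DIM, HIDDEN, len(VISEMES))

# Simulate a stream of 24 audio-feature frames (roughly one second
# at a 24 fps animation frame rate).
stream = rng.standard_normal((24, FEAT_DIM))
viseme_track = [VISEMES[int(np.argmax(model.step(frame)))] for frame in stream]
print(viseme_track)
```

Because each `step` call needs only the current frame plus the carried hidden state, a real system can emit a viseme as soon as each audio frame arrives, which is how a latency budget like 200ms (including processing time) becomes feasible.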

The results of this system, ‘CharacterLipSync’, were so impressive that Adobe has incorporated parts of it into Adobe Character Animator.


Paper: https://arxiv.org/pdf/1910.08685.pdf

Github: https://github.com/deepalianeja/CharacterLipSync

Image Source: https://github.com/deepalianeja/CharacterLipSync
