‘CharacterLipSync’, a deep learning system generating real-time lip-sync for live 2-D animation

Image Source: Paper https://arxiv.org/pdf/1910.08685.pdf

Two researchers at Adobe Research and the University of Washington recently published a paper introducing a deep learning-based system that creates live lip sync for 2D animated characters. The system uses a long short-term memory (LSTM) model to generate lip sync in real time for layered 2D characters.

According to the paper, “Our system takes streaming audio as input and produces viseme sequences with less than 200ms of latency (including processing time).”
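To illustrate the idea of streaming audio in, viseme sequence out, here is a minimal sketch of an LSTM cell that consumes one audio feature frame at a time and emits a viseme class per frame. The dimensions, the viseme inventory size, and the random weights are all illustrative assumptions, not the paper's actual architecture or trained model:

```python
import numpy as np

# Hypothetical dimensions (not from the paper):
N_FEATURES = 13   # e.g. MFCC coefficients per audio frame
N_HIDDEN = 32     # LSTM hidden size
N_VISEMES = 12    # assumed size of the viseme set

rng = np.random.default_rng(0)

# Stacked weights for the four LSTM gates: input, forget, cell, output.
W = rng.standard_normal((4 * N_HIDDEN, N_FEATURES + N_HIDDEN)) * 0.1
b = np.zeros(4 * N_HIDDEN)
W_out = rng.standard_normal((N_VISEMES, N_HIDDEN)) * 0.1  # viseme classifier

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM step: consume one audio frame, update hidden/cell state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def stream_visemes(frames):
    """Emit one viseme index per incoming frame, suitable for streaming."""
    h = np.zeros(N_HIDDEN)
    c = np.zeros(N_HIDDEN)
    out = []
    for x in frames:
        h, c = lstm_step(x, h, c)          # state carries across frames
        out.append(int(np.argmax(W_out @ h)))
    return out

frames = rng.standard_normal((50, N_FEATURES))  # 50 fake audio frames
visemes = stream_visemes(frames)                # one viseme per frame
```

Because the LSTM processes frames sequentially and keeps its state between calls, each new audio frame can be classified as it arrives, which is what makes low-latency live output possible.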


The results of this system, 'CharacterLipSync', were so impressive that Adobe integrated it into parts of Adobe Character Animator.

Paper: https://arxiv.org/pdf/1910.08685.pdf

Github: https://github.com/deepalianeja/CharacterLipSync

Image Source: https://github.com/deepalianeja/CharacterLipSync
