The field of human pose tracking, also known as motion capture (MoCap), has been studied for decades and remains a very active area of research. The technology is used in gaming, virtual/augmented reality, film making, computer graphics animation, and other domains to provide body (and facial) motion data for character animation, as well as to drive humanoid robot control and motion-based interaction with system or device interfaces.
Optical motion capture is a computer-vision, marker-based MoCap technique that has long been the gold standard for accurate motion capture. The rapid development of professional optical MoCap technologies over the last decade has been driven in part by game engines, which allow motion capture data to be consumed quickly and easily. Traditional optical motion capture solutions still present difficulties, however. Purchasing a professional system is expensive and cumbersome, and the equipment itself can be sensitive to movement. In addition, there is still a need for robust MoCap methods that overcome the time and complexity barriers of post-production.
Researchers from the Centre for Research and Technology Hellas, the National Technical University of Athens, and the University of Lincoln introduce ‘DeepMoCap,’ a low-cost, robust, and fast optical motion capture framework that uses IR-D sensors and retro-reflectors. The setup is flexible because of its simplicity compared to gold-standard marker-based solutions, and the required equipment is inexpensive. Moreover, the 3D optical data are automatically labeled in real time, with no post-processing needed.
According to the research group, DeepMoCap is a unique approach that employs fully convolutional neural networks for automatic 3D optical data localization and identification using IR-D imaging. This process, denoted “2D reflector-set estimation,” replaces the manual marker labeling and tracking tasks required in traditional MoCap systems. The convolutional pose machines (CPM) architecture has been extended with the notion of time by adding a second input stream of 3D optical flow, abstractly represented as 2D vector fields.
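To make the two-stream idea concrete, the toy sketch below (not the authors' actual network) shows how a depth frame and a 3-channel 3D optical flow frame can be fused into one multi-channel input and convolved into per-reflector confidence maps. The naive `conv2d` helper, the tensor sizes, and the number of reflectors are all illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Naive multi-channel 'valid' convolution: x is (C, H, W), w is (K, C, kh, kw)."""
    K, C, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Dot product of the filter with the input patch across all channels
                out[k, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[k])
    return out

rng = np.random.default_rng(0)
H, W = 16, 16
depth = rng.random((1, H, W))        # IR-D depth stream (1 channel)
flow3d = rng.random((3, H, W))       # 3D optical flow stream (dx, dy, dz), hypothetical layout

# Fuse the two input streams along the channel axis (4 channels total)
x = np.concatenate([depth, flow3d], axis=0)

n_reflectors = 5                     # illustrative retro-reflector count
w = rng.random((n_reflectors, 4, 3, 3)) * 0.1

# One confidence map per reflector, as in a "2D reflector-set estimation" step
maps = conv2d(x, w)
print(maps.shape)                    # (5, 14, 14)
```

In a real network this single convolution would be a deep stack of convolutional stages refined iteratively, as in CPM; the sketch only illustrates how fusing depth with optical flow gives the model temporal context per pixel.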