Jinxiang Chai

Associate Professor


Computer Science and Engineering

Texas A&M University


College Station, Texas 77843-3112, USA



Office: 527D H.R. Bright


Phone: 979-845-3510

Fax: 979-845-1420


Mocap Lab: 505 H.R. Bright

Email: jchai@cs.tamu.edu






Jinxiang Chai is an associate professor in the Department of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in 2006 from the Robotics Institute at Carnegie Mellon University. His primary research is in computer graphics and vision, with broad applications in other disciplines such as virtual and augmented reality, robotics, human-computer interaction, and biomechanics. His most recent interest lies in realtime 3D human motion sensing and its applications in virtual and augmented reality. He draws on ideas from graphics, vision, machine learning, robotics, biomechanics, neuroscience, and applied math. He received an NSF CAREER award for his work on the theory and practice of Bayesian motion synthesis.




RESEARCH INTERESTS:

Realtime human motion sensing for virtual and augmented reality. I am extremely interested in developing the realtime human motion sensing technologies essential for virtual and augmented reality applications. Notable recent examples include full-body motion capture using a single depth camera, realtime and automatic 3D facial performance capture with eye gaze using a single RGB camera, and realtime capture of 3D hand gestures using a single depth camera.


Human motion modeling, understanding, simulation and control. The primary goals of our research in this area are: (1) to develop efficient representations and computational models for analyzing how humans move, drawing on prerecorded motion data, physics and biomechanics principles, and control theory; and (2) to apply these models to important and challenging problems in computer graphics, computer vision, and robotics, such as human motion sensing and recognition, synthesis and control of animated human characters, and motion planning and control for humanoid robots.


Motion capture for everyone. We are developing next-generation motion capture technologies that minimize the cost and intrusiveness of motion capture so that the technology is practical and easily accessible to every home user. We are also interested in developing new techniques for capturing the physics of complex movements from the real world.


Facial acquisition, modeling and animation. One of the holy grail problems in computer graphics has long been the realistic animation of the human face. We have been developing new methods to animate and control virtual faces by capturing and analyzing the facial performances of real people.


Hand motion acquisition, modeling and simulation. We are exploring new techniques for capturing high-quality hand gestures, grasping and manipulation for hand motion understanding, modeling, simulation and control.


Visual modeling and understanding. We seek to build systems that can visually model and understand complex movements such as full-body movements, facial deformations, hand gestures, multi-person interactions, and animal movements. For example, we have developed new techniques for modeling deformable objects and articulated bodies using a single camera. Earlier, I worked on image-based modeling and rendering.


Data-driven graphics and vision. I am genuinely interested in data-driven approaches for solving ill-posed graphics and vision problems, such as animation control, deformation modeling, and vision-based motion tracking. I am particularly interested in learning techniques that can scale up to massive and heterogeneous datasets.




TEACHING:

CSCE 441: Computer Graphics


SELECTED RECENT PUBLICATIONS: (For more details, see my Projects page)


        Realtime 3D Eye Gaze Animation Using a Single RGB Camera

          ACM Transactions on Graphics, 2016

        Video-audio Driven Real-time Facial Animation

          ACM Transactions on Graphics, 2015

        Realtime Style Transfer for Unlabeled Heterogeneous Human Motion

          ACM Transactions on Graphics, 2015

        Leveraging Depth Cameras and Wearable Pressure Sensors for Full-body Kinematics and Dynamics Capture

          ACM Transactions on Graphics, 2014

        Automatic Acquisition of High-fidelity Facial Performance Using Monocular Videos

          ACM Transactions on Graphics, 2014

        Controllable High-fidelity Facial Performance Transfer

          ACM Transactions on Graphics, 2014

        Robust Realtime Physics-based Motion Control for Human Grasping

          ACM Transactions on Graphics, 2013

        Video-based Hand Manipulation Capture Through Composite Motion Control

          ACM Transactions on Graphics, 2013

        Generative Statistical Models for Semantic Motion Analysis and Synthesis

          ACM Transactions on Graphics, 2012

        Accurate Realtime Full-body Motion Capture Using a Single Depth Camera

          ACM Transactions on Graphics, 2012

        Leveraging Motion Capture and 3D Scanning for High-Fidelity Facial Performance Acquisition

          ACM Transactions on Graphics, 2011

        Physically-valid Statistical Motion Models for Human Motion Synthesis

          ACM Transactions on Graphics, 2011

        VideoMocap: Modeling Physically Realistic Human Motion from Monocular Video Sequences

          ACM Transactions on Graphics, 2010

        Interactive Generation of Human Animation with Deformable Motion Models

          ACM Transactions on Graphics, 2010

        Interactive Manipulation of 3D Facial Expressions Using Facial Priors

          ACM Transactions on Graphics, 2010




CURRENT STUDENTS:

Peihong Guo (Ph.D. student): facial capture & modeling

Fuhao Shi (Ph.D. student): facial capture, modeling and animation

Jianjie Zhang (Ph.D. student): physics-based human motion modeling & control

Peizhao Zhang (Ph.D. student): realtime human motion capture & interaction

Xiaolei Lv (Ph.D. student, ICT at CAS): physics-based simulation & humanoid robots

Congyi Wang (Ph.D. student, ICT at CAS): human motion modeling and animation

Le Su (Ph.D. student, ICT at CAS): human motion tracking and recognition

Zhiyong Wang (Ph.D. student, ICT at CAS): facial tracking and recognition




FORMER STUDENTS:

Xiaolin Wei (2007-2011, Ph.D., now at Google)

Jianyuan Min (2007-2011, Ph.D., now at Google)

Hui Lou (2007-2012, Ph.D., now at Google)

Yen-Lin Chen (2007-2013, Ph.D., now at Robert Bosch Research)

Yangang Wang (Ph.D., Tsinghua Univ., now at Microsoft Research Asia)

Wenping Zhao (Ph.D., MSRA & USTC, now in R&D at a startup company)




PROFESSIONAL SERVICE:

Conference chair - Symposium on Computer Animation (SCA), 2013

Program chair - Computer Animation and Social Agents (CASA), 2011

Program committee - ACM SIGGRAPH (2016, 2015, 2013); ACM SIGGRAPH ASIA (2012, 2011, 2009)

Program committee - Symposium on Computer Animation (SCA), 2015, 2014, 2012, 2011, 2010, 2009, 2008

Program committee - ACM Symposium on Interactive 3D Graphics and Games (I3D), 2014, 2013, 2012, 2011

Program committee - Pacific Graphics, 2012, 2011, 2010