Jinxiang Chai

Associate Professor

 

Computer Science and Engineering

Texas A&M University

 

College Station,

Texas 77843-3112, USA

 

 

Office: 527D H.R. Bright

 

Phone: 979-845-3510

Fax: 979-845-1420

 

Mocap Lab: 505 H.R. Bright

Email: jchai@cs.tamu.edu

 

 

     

Symposium on Computer Animation:

 

I am helping to organize the ACM SIGGRAPH / Eurographics Symposium on Computer Animation 2013, July 19-21 in Anaheim, just before SIGGRAPH. The paper submission deadline is next Tuesday (April 16)!

Please submit your best work to sca2013.cs.tamu.edu/.

 

SHORT BIO:

 

Jinxiang Chai is currently an associate professor in the Department of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in 2006 from the Robotics Institute at Carnegie Mellon University. His primary research is in the area of computer graphics and vision, with broad applications in other disciplines such as robotics, human-computer interaction, biomechanics, and virtual and augmented reality. He is particularly interested in developing representations and efficient computational models that allow acquisition, analysis, understanding, simulation, and control of natural human motion, including movements of the body, face, and hand, multi-person interaction, and crowds. He draws on ideas from graphics, vision, machine learning, robotics, biomechanics, neuroscience, and applied math. He received an NSF CAREER award for his work on the theory and practice of Bayesian motion synthesis.

 

RESEARCH PROJECTS AND INTERESTS:

 

Human motion modeling, understanding, simulation and control. The primary goals of this research are: (1) to develop efficient representations and computational models to analyze how humans move by utilizing prerecorded motion data, physics, biomechanics principles, and control theories; and (2) to apply the new models to solve important and challenging problems in computer graphics, computer vision, and robotics, such as synthesis and control of animated human characters, human motion tracking and classification, and motion planning and control for humanoid robots.

 

Motion capture for everyone. We are developing next-generation motion capture technologies that minimize the cost and intrusiveness of motion capture so that the technology is practical and easily accessible to every home user. Notable examples include full-body performance capture using low-cost sensors, video-based motion capture, and realtime motion capture using RGBD cameras. We are also interested in developing new techniques for capturing the physics of complex movements from the real world.
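As a purely illustrative aside (the code below is a generic sketch, not taken from any of the systems above, and the function and variable names are made up), the snippet shows the classic closed-form rigid-alignment step that ICP-style depth-camera registration builds on: given corresponding 3D points on a body model and in the sensor's point cloud, it solves for the best-fit rotation and translation.

    # Toy sketch in Python/NumPy: closed-form rigid alignment (Kabsch/Procrustes),
    # a standard building block inside ICP-style depth registration.
    import numpy as np

    def rigid_align(model_pts, sensor_pts):
        """Return R, t minimizing sum ||R p + t - q||^2 over corresponding points p, q."""
        mu_m = model_pts.mean(axis=0)
        mu_s = sensor_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (sensor_pts - mu_s)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_s - R @ mu_m
        return R, t

    # Usage: recover a known rotation and translation from synthetic correspondences.
    rng = np.random.default_rng(1)
    model = rng.normal(size=(200, 3))
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:                             # make Q a proper rotation
        Q[:, 0] *= -1
    sensor = model @ Q.T + np.array([0.1, -0.2, 0.3])
    R_est, t_est = rigid_align(model, sensor)
    print(np.allclose(R_est, Q), np.allclose(t_est, [0.1, -0.2, 0.3]))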

 

Facial acquisition, modeling and animation. One of the holy grail problems in computer graphics has been the realistic animation of the human face. We have been developing new methods to animate and control virtual faces by capturing and analyzing facial performances of real people.

 

Hand motion acquisition, modeling and simulation. We are exploring new techniques for capturing high-quality hand gestures, grasping, and manipulation, with applications in hand motion understanding, modeling, simulation, and control.

 

Visual modeling and understanding. We seek to build systems that can visually model and understand complex movements such as full-body movements, facial deformations, hand gestures, multi-person interaction, and animal movements. For example, we have developed new techniques for modeling deformable objects and articulated bodies using a single camera. Before that, I worked on image-based modeling and rendering.

 

Data-driven graphics and vision. I am genuinely interested in data-driven approaches for solving ill-posed graphics and vision problems, such as animation control, deformation modeling, and vision-based motion tracking. I am particularly interested in learning techniques that can scale up to massive and heterogeneous datasets.
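As a hypothetical toy example (the data, dimensions, and variable names below are invented for illustration and are not from any project above), the following sketch conveys the flavor of such data-driven regularization: a full-body pose is recovered from a handful of noisy joint measurements by restricting the solution to a low-dimensional PCA subspace learned from example poses, which turns an under-constrained reconstruction into a small least-squares fit.

    # Toy sketch in Python/NumPy (illustrative only; the "database" is random data).
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in motion capture database: 500 example poses, 60 joint angles each.
    poses = rng.normal(size=(500, 60))

    # Learn a linear (PCA-style) pose prior from the examples.
    mean = poses.mean(axis=0)
    _, _, Vt = np.linalg.svd(poses - mean, full_matrices=False)
    basis = Vt[:5]                            # keep 5 principal directions

    # Ill-posed observation: only 8 of the 60 joint angles are measured, with noise.
    obs_idx = np.array([0, 7, 13, 21, 30, 38, 47, 59])
    true_pose = mean + basis.T @ rng.normal(size=5)
    y = true_pose[obs_idx] + 0.01 * rng.normal(size=obs_idx.size)

    # Fit the latent coefficients by least squares, then reconstruct the full pose.
    A = basis.T[obs_idx]                      # maps latent coefficients to observed angles
    w, *_ = np.linalg.lstsq(A, y - mean[obs_idx], rcond=None)
    full_pose = mean + basis.T @ w

    print("reconstruction error:", np.linalg.norm(full_pose - true_pose))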

 

TEACHING:

 

CSCE 641: Computer Graphics

                    

SELECTED RECENT PUBLICATIONS: (For more details, see my Projects page)

       

Controllable High-fidelity Facial Performance Transfer

    ACM Transactions on Graphics (to appear in SIGGRAPH 2014)

Accurate Robust 3D Facial Capture Using Single RGBD Images

    IEEE International Conference on Computer Vision (ICCV 2013)

Robust Realtime Physics-based Motion Control for Human Grasping

    ACM Transactions on Graphics (Presented at SIGGRAPH ASIA 2013)

Video-based Hand Manipulation Capture Through Composite Motion Control

    ACM Transactions on Graphics (Presented at SIGGRAPH 2013)

Generative Statistical Models for Semantic Motion Analysis and Synthesis

    ACM Transactions on Graphics (Presented at SIGGRAPH ASIA 2012)

Accurate Realtime Full-body Motion Capture Using a Single Depth Camera

    ACM Transactions on Graphics (Presented at SIGGRAPH ASIA 2012)

Combining Marker-based Mocap and RGB-D Camera for Acquiring High-fidelity Hand Motion Data

    ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 2012)

Leveraging Motion Capture and 3D Scanning for High-Fidelity Facial Performance Acquisition

    ACM Transactions on Graphics (Presented at SIGGRAPH 2011)

Physically-valid Statistical Motion Models for Human Motion Synthesis

    ACM Transactions on Graphics (Presented at SIGGRAPH 2011)

Realtime Human Motion Control with a Small Number of Inertial Sensors

    ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2011)

Intuitive Interactive Human Character Posing with Millions of Example Poses

    IEEE Computer Graphics and Applications (CG&A), 2011

VideoMocap: Modeling Physically Realistic Human Motion from Monocular Video Sequences

    ACM Transactions on Graphics (Presented at SIGGRAPH 2010)

Interactive Generation of Human Animation with Deformable Motion Models

    ACM Transactions on Graphics (Presented at SIGGRAPH 2010)

Interactive Manipulation of 3D Facial Expressions Using Facial Priors

    ACM Transactions on Graphics (Presented at SIGGRAPH 2010)

Example-based Human Motion Denoising

    IEEE Transactions on Visualization and Computer Graphics (TVCG), 2010

Synthesis and Editing of Personalized Stylistic Human Motion

    ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2010)

 

CURRENT STUDENTS:

 

Peihong Guo (Ph.D. student): human motion capture, analysis & generation

Fuhao Shi (Ph.D. student): facial capture, modeling and animation

Jianjie Zhang (Ph.D. student): physics-based human motion modeling & control

Peizhao Zhang (Ph.D. student): realtime human motion capture & interaction

Xiaolei Lv (Ph.D. student, ICT at CAS): physics-based simulation & humanoid robots

Wenping Zhao (Ph.D. student, MSRA & USTC): hand motion capture, modeling and control

Yangang Wang (Ph.D. student, Tsinghua Univ.): video-based motion capture

Congyi Wang (Ph.D. student, ICT at CAS): human motion modeling and animation

 

GRADUATED STUDENTS:

 

Hui Lou (2007-2012, Ph.D., now at Google)

Yen-Lin Chen (2007-2013, Ph.D., now at Robert Bosch Research)

Jianyuan Min (2007-2012, Ph.D., now at Google)

Xiaolin Wei (2007-2011, Ph.D., now at Google)

 

PROFESSIONAL ACTIVITIES:

 

Conference chair - Symposium on Computer Animation (SCA), 2013

Program chair - Computer Animation and Social Agents (CASA), 2011

Program committee - ACM SIGGRAPH (2013); ACM SIGGRAPH ASIA (2012, 2011, 2009)

Program committee - Symposium on Computer Animation (SCA), 2012, 2011, 2010, 2009, 2008

Program committee - ACM Symposium on Interactive 3D Graphics and Games (I3D), 2013, 2012, 2011

Program committee - Pacific Graphics, 2012, 2011, 2010