Conference: 3 Dec - 6 Dec | Exhibition: 4 Dec - 6 Dec

    Technical Papers

    Full Conference / Full Conference One Day

    Character Animation

    Saturday, 06 December

    11:00 - 12:45

    Jasmine Hall


    Locomotion Control for Many-Muscle Humanoids

    We present a biped locomotion controller for humanoid models actuated by more than a hundred Hill-type muscles. Our controller can faithfully reproduce a variety of realistic biped gaits and adapt the gaits to varying conditions (e.g., muscle weakness, tightness, and joint dislocation) and goals (e.g., pain reduction).


    Yoonsang Lee, Seoul National University
    Moon Seok Park, Seoul National University
    Taesoo Kwon, Hanyang University
    Jehee Lee, Seoul National University
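
    The controller above drives humanoids actuated by Hill-type muscles. As a point of reference only, the sketch below implements a generic, textbook-style Hill-type muscle force law (an activation-scaled force-length and force-velocity product plus a passive element); the curve shapes, constants, and function names are illustrative assumptions, not the parameters or code used in the paper.

        # Generic Hill-type muscle force law with illustrative constants (Python).
        import math

        def hill_muscle_force(activation, fiber_len, fiber_vel,
                              f_max=1000.0, l_opt=0.1, v_max=1.0):
            """Total fiber force [N] from activation in [0, 1], fiber length [m],
            and fiber velocity [m/s] (negative = shortening)."""
            l_n = fiber_len / l_opt   # normalized fiber length
            v_n = fiber_vel / v_max   # normalized fiber velocity

            # Active force-length: bell-shaped curve peaking at optimal length.
            f_l = math.exp(-((l_n - 1.0) / 0.45) ** 2)

            # Force-velocity: hyperbolic drop when shortening, mild gain when lengthening.
            if v_n < 0.0:
                f_v = max(0.0, (1.0 + v_n) / (1.0 - 4.0 * v_n))
            else:
                f_v = (1.0 + 7.5 * v_n) / (1.0 + 5.0 * v_n)

            # Passive element: engages only beyond optimal fiber length.
            f_p = 4.0 * max(0.0, l_n - 1.0) ** 2

            return f_max * (activation * f_l * f_v + f_p)

        # Example: a half-activated muscle, slightly stretched, slowly shortening.
        print(hill_muscle_force(activation=0.5, fiber_len=0.105, fiber_vel=-0.1))

    A whole-body controller of the kind described would compute an activation per muscle, for more than a hundred muscles, at every simulation step and feed the resulting forces into the physics solver.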

    Generating and Ranking Diverse Multi-Character Interactions

    Our novel 'generate-and-rank' approach rapidly and semi-automatically animates data-driven scenes with multi-character interactions from high-level descriptions. Many plausible scenes are generated and efficiently ranked, so that a small, diverse, and high-quality selection can be presented to the user and iteratively refined.


    Jungdam Won, Seoul National University
    Kyungho Lee, Seoul National University
    Carol O'Sullivan, Disney Research
    Jessica K. Hodgins, Disney Research Pittsburgh
    Jehee Lee, Seoul National University
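
    As a rough illustration of the 'generate-and-rank' idea (not the authors' scene model, ranking features, or interface), the sketch below samples many candidate scenes, scores them, and greedily picks a small subset that trades quality off against diversity; the scene representation, score_fn, and distance_fn are toy placeholders.

        # Generic generate-and-rank selection with a greedy diversity term (Python).
        import random

        def generate_candidates(n, scene_sampler):
            """Sample n plausible candidate scenes from a user-supplied sampler."""
            return [scene_sampler() for _ in range(n)]

        def rank_and_select(candidates, score_fn, distance_fn, k, diversity_weight=1.0):
            """Greedily pick k candidates, rewarding both a high score and a large
            distance to the candidates already selected."""
            selected, remaining = [], list(candidates)
            while remaining and len(selected) < k:
                def marginal_value(c):
                    novelty = min((distance_fn(c, s) for s in selected), default=0.0)
                    return score_fn(c) + diversity_weight * novelty
                best = max(remaining, key=marginal_value)
                selected.append(best)
                remaining.remove(best)
            return selected

        # Toy usage: "scenes" are 2-D points, quality favors large x, and diversity
        # is Euclidean distance between points.
        random.seed(0)
        scenes = generate_candidates(200, lambda: (random.random(), random.random()))
        top = rank_and_select(scenes,
                              score_fn=lambda s: s[0],
                              distance_fn=lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5,
                              k=5)
        print(top)

    The greedy quality-plus-novelty rule is just one simple way to produce a small, diverse shortlist for a user to refine.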

    MoSh: Motion and Shape Capture from Sparse Markers

    We demonstrate a new approach called MoSh (Motion and Shape capture) that extracts body shape and motion from markers. MoSh needs only sparse mocap marker data to create animations with a level of lifelike realism that is difficult to achieve with standard skeleton-based methods.


    Matthew Loper, Max Planck Institute for Intelligent Systems
    Naureen Mahmood, Max Planck Institute for Intelligent Systems
    Michael Black, Max Planck Institute for Intelligent Systems
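
    The core fitting step behind an approach like MoSh can be pictured as a least-squares problem: adjust body parameters until the markers predicted by a parametric body model land on the observed markers. The sketch below uses a toy 2-D two-segment "arm" in place of a real statistical body model and SciPy's general-purpose solver in place of the paper's optimization; every name and number here is an illustrative assumption.

        # Toy sparse-marker fitting by least squares (Python, NumPy + SciPy).
        import numpy as np
        from scipy.optimize import least_squares

        def predict_markers(params):
            """Toy body model: params = [len1, len2, angle1, angle2].
            Returns the stacked 2-D positions of an 'elbow' and 'wrist' marker."""
            l1, l2, a1, a2 = params
            elbow = np.array([l1 * np.cos(a1), l1 * np.sin(a1)])
            wrist = elbow + np.array([l2 * np.cos(a1 + a2), l2 * np.sin(a1 + a2)])
            return np.concatenate([elbow, wrist])

        def fit_to_markers(observed, initial_guess):
            """Solve for body parameters minimizing predicted-vs-observed marker error."""
            residual = lambda p: predict_markers(p) - observed
            return least_squares(residual, initial_guess).x

        # Synthesize "mocap" markers from known parameters, then fit from a nearby guess.
        true_params = np.array([0.30, 0.25, 0.8, -0.4])
        observed = predict_markers(true_params)
        recovered = fit_to_markers(observed, np.array([0.25, 0.25, 0.5, 0.0]))
        print(np.round(recovered, 3))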

    Leveraging Depth Cameras and Wearable Pressure Sensors for Full-body Kinematics and Dynamics Capture

    We present a new method for full-body motion capture that uses input data captured by three depth cameras and a pair of pressure-sensing shoes. Our system is appealing because it is low-cost, non-intrusive, and fully automatic, and can accurately capture full-body kinematics and dynamics.


    Peizhao Zhang, Texas A&M University
    Kristin Siu, Georgia Institute of Technology
    Jianjie Zhang, Texas A&M University
    Karen Liu, Georgia Institute of Technology
    Jinxiang Chai, Texas A&M University
