PyMO

A library for using motion capture data for machine learning

This library is currently highly experimental and everything is subject to change :)

Roadmap

  • Mocap Data Parsers and Writers
  • Common mocap pre-processing algorithms
  • Feature extraction library
  • Visualization tools

Current Features

Read BVH Files

```python
from pymo.parsers import BVHParser

parser = BVHParser()
parsed_data = parser.parse('demos/data/AV_8Walk_Meredith_HVHA_Rep1.bvh')
```
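The preprocessing steps further down operate on a list of parsed clips, so a common pattern is to parse a whole directory of BVH files up front. A minimal usage sketch (the glob pattern is a placeholder for your own data):

```python
import glob

from pymo.parsers import BVHParser

# Sketch: parse every BVH file in a folder into a list of MocapData objects.
# The glob pattern is a placeholder; point it at your own files.
parser = BVHParser()
clips = [parser.parse(f) for f in glob.glob('demos/data/*.bvh')]
```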

Get Skeleton Info

```python
from pymo.viz_tools import *

print_skel(parsed_data)
```

This will print the skeleton hierarchy:

```
- Hips (None)
| | - RightUpLeg (Hips)
| | - RightLeg (RightUpLeg)
| | - RightFoot (RightLeg)
| | - RightToeBase (RightFoot)
| | - RightToeBase_Nub (RightToeBase)
| - LeftUpLeg (Hips)
| - LeftLeg (LeftUpLeg)
| - LeftFoot (LeftLeg)
| - LeftToeBase (LeftFoot)
| - LeftToeBase_Nub (LeftToeBase)
- Spine (Hips)
| | - RightShoulder (Spine)
| | - RightArm (RightShoulder)
| | - RightForeArm (RightArm)
| | - RightHand (RightForeArm)
| | | - RightHand_End (RightHand)
| | | - RightHand_End_Nub (RightHand_End)
| | - RightHandThumb1 (RightHand)
| | - RightHandThumb1_Nub (RightHandThumb1)
| - LeftShoulder (Spine)
| - LeftArm (LeftShoulder)
| - LeftForeArm (LeftArm)
| - LeftHand (LeftForeArm)
| | - LeftHand_End (LeftHand)
| | - LeftHand_End_Nub (LeftHand_End)
| - LeftHandThumb1 (LeftHand)
| - LeftHandThumb1_Nub (LeftHandThumb1)
- Head (Spine)
- Head_Nub (Head)
```
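Besides the printed hierarchy, the joint structure can be inspected programmatically. A minimal sketch, assuming the parsed MocapData object exposes a skeleton dictionary keyed by joint name whose entries record each joint's parent (this is the information print_skel displays):

```python
# Sketch: walk the parsed skeleton dict directly.
# Assumes each entry is a dict that records the joint's parent,
# as reflected in the printout above.
for joint, info in parsed_data.skeleton.items():
    print(joint, '-> parent:', info.get('parent'))
```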

scikit-learn Pipeline API

```python
from pymo.preprocessing import *
from sklearn.pipeline import Pipeline

data_pipe = Pipeline([
    ('param', MocapParameterizer('position')),
    ('rcpn', RootCentricPositionNormalizer()),
    ('delta', RootTransformer('abdolute_translation_deltas')),
    ('const', ConstantsRemover()),
    ('np', Numpyfier()),
    ('down', DownSampler(2)),
    ('stdscale', ListStandardScaler())
])

piped_data = data_pipe.fit_transform([parsed_data])
```
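Because the transformers follow the scikit-learn API, the fitted pipeline can be reused on new clips and, where every step supports it, inverted back toward mocap data. A hedged sketch (the inverse_transform call assumes each step in the pipeline implements it):

```python
# Reuse the already-fitted pipeline on additional parsed clips.
more_piped = data_pipe.transform([parsed_data])

# Map processed features back toward MocapData; this only works if every
# step in the pipeline implements inverse_transform (assumption here).
recovered = data_pipe.inverse_transform(piped_data)
```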

Convert to Positions

```python
mp = MocapParameterizer('position')
positions = mp.fit_transform([parsed_data])
```
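Each element of positions is a MocapData object whose values attribute is a pandas DataFrame with one column per joint position channel (for example, RightFoot_Yposition is used by the foot-contact example below). A quick way to inspect it:

```python
# Per-frame joint positions live in a pandas DataFrame.
pos_df = positions[0].values
print(pos_df.shape)              # (number of frames, number of position channels)
print(list(pos_df.columns[:6]))  # e.g. columns like 'Hips_Xposition', 'Hips_Yposition', ...
```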

Visualize a single 2D Frame

```python
draw_stickfigure(positions[0], frame=10)
```

2D Skeleton Viz

Animate in 3D (inside a Jupyter Notebook)

```python
nb_play_mocap(positions[0], 'pos',
              scale=2, camera_z=800, frame_time=1/120,
              base_url='pymo/mocapplayer/playBuffer.html')
```

Mocap Player

Foot/Ground Contact Detector

```python
from pymo.features import *

plot_foot_up_down(positions[0], 'RightFoot_Yposition')
```

Foot Contact

```python
import matplotlib.pyplot as plt

signal = create_foot_contact_signal(positions[0], 'RightFoot_Yposition')

plt.figure(figsize=(12, 5))
plt.plot(signal, 'r')
plt.plot(positions[0].values['RightFoot_Yposition'].values, 'g')
```

Foot Contact Signal
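For machine learning it is often useful to reduce the continuous contact signal to per-frame binary labels. A minimal sketch, not part of PyMO's API; the threshold value and the direction of the comparison are assumptions that depend on how the signal is scaled for your data:

```python
import numpy as np

# Sketch: threshold the continuous contact signal into 0/1 labels.
# The threshold (0.0) and the comparison direction are placeholders;
# pick them after inspecting the plotted signal above.
contact_labels = (np.asarray(signal).ravel() > 0.0).astype(int)
print(contact_labels[:20])
```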

Feedback, Bugs, and Questions

For any questions, feedback, and bug reports, please use the GitHub Issues page.

Credits

Created by: Omid Alemi

Contributors: Ethan Soutar-Rau, Jérôme Parent-Lévesque, JEONG HYEONJIN, Ozge Nilay Yalcin

License

This code is available under the MIT license.
