Moving Pose

Project’s GitHub Repo

Our final presentation from December 2020

About The Project

As our final project for the Fall 2020 Introduction to Machine Learning (CSCI470) course at the Colorado School of Mines, our team, consisting of Andrew Darling, Eric Hayes, and me (Mehmet), implemented the “Moving Pose” algorithm.

The goal was to take a skeletal dataset captured by a depth sensor and classify human actions. We not only implemented the core algorithm but also developed a simple user interface to demonstrate its capabilities.

The Moving Pose algorithm, originally proposed by Mihai Zanfir, Marius Leordeanu, and Cristian Sminchisescu, is an efficient method for low-latency action recognition from 3D skeletal data. Its key idea is to describe each frame not just by the body’s pose, but by the pose together with its first- and second-order kinematics, i.e., the speed and acceleration of the joints.
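As a rough sketch of that idea, the per-frame descriptor can be built by concatenating the joint positions with finite-difference estimates of their velocity and acceleration. The weights `alpha` and `beta` and the simple central-difference scheme below are illustrative simplifications, not the paper’s exact formulation (the paper estimates derivatives over a short temporal window and tunes the weights):

```python
import numpy as np

def moving_pose_descriptor(frames, t, alpha=0.75, beta=0.6):
    """Descriptor for frame t of a sequence.

    frames: array of shape (n_frames, n_joints, 3) of joint positions.
    Returns the pose concatenated with weighted velocity and acceleration
    terms, estimated with central finite differences (a simplification of
    the paper's windowed derivative estimates). Requires 2 <= t <= n-3.
    """
    pose = frames[t]
    # First derivative: central difference over neighboring frames
    velocity = frames[t + 1] - frames[t - 1]
    # Second derivative: central difference over a wider window
    acceleration = frames[t + 2] + frames[t - 2] - 2.0 * pose
    # Concatenate pose with weighted kinematic terms into one vector
    return np.concatenate([pose.ravel(),
                           alpha * velocity.ravel(),
                           beta * acceleration.ravel()])
```

With 20 joints per skeleton, this yields a 180-dimensional vector per frame (3 × 20 coordinates for each of the pose, velocity, and acceleration parts).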

The Paper

Our implementation is based on the paper The Moving Pose: An Efficient 3D Kinematics Descriptor for Low-Latency Action Recognition and Detection (PDF) by Mihai Zanfir, Marius Leordeanu, and Cristian Sminchisescu.

Dataset

Our model was trained and tested on the MSR DailyActivity 3D Dataset. We focused on the following actions from the dataset:

Action IDs from the MSR DailyActivity 3D Dataset
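At classification time, the paper labels a sequence by letting its frame descriptors vote among their nearest neighbors in the training set. The sketch below is a simplified, unweighted version of that scheme (the actual algorithm additionally weights votes by a learned per-frame discriminative score); the function and variable names are our own, not from the paper or our repo:

```python
import numpy as np
from collections import Counter

def classify_sequence(test_descriptors, train_descriptors, train_labels, k=5):
    """Label a test sequence by frame-level kNN voting.

    Each test-frame descriptor votes for the labels of its k nearest
    training-frame descriptors; the sequence takes the majority label.
    (Simplified: no per-frame discriminativeness weighting.)
    """
    votes = Counter()
    for d in test_descriptors:
        # Euclidean distance from this frame to every training frame
        dists = np.linalg.norm(train_descriptors - d, axis=1)
        # The k closest training frames each cast one vote
        for idx in np.argsort(dists)[:k]:
            votes[train_labels[idx]] += 1
    return votes.most_common(1)[0][0]
```

Because voting happens frame by frame, a label estimate is available before the full sequence ends, which is what makes the method suitable for low-latency detection.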

UI Preview

We built a simple GUI to visualize the algorithm’s performance in real time. For more details on the GUI and the hardware used, please see the README.md file in the /movingpose/gui/ directory of the project’s repository.