SMART-MOVE: A Spatiotemporal Annotated Human Activity Repository for Advanced Motion Recognition and Analysis Research

Accurate human motion tracking and activity recognition are important in supporting numerous areas of computer science and engineering research, ranging from user modeling and human-robot interaction to graphics and animation. A good resource of annotated datasets and annotation tools is particularly important in research involving persons with physical limitations or chronic disabilities, such as post-stroke impairment, ALS, rheumatoid arthritis, and cerebral palsy. Currently, no comprehensive community computing research infrastructure (CRI) with human activity data and analysis tools is in place. The development of new methods and algorithms that would improve and enable real-world motion tracking applications is hampered by an inherent difficulty: the lack of large sets of training and testing data. This planning activity brings together experts in computer vision, machine learning, data mining, user interface design, assistive environments, human-robot interaction, databases, and real-time big data, together with therapists, clinicians, device makers, and sensor developers, to identify the specific human activity data that should be collected and processed when building a repository of automatically and accurately annotated video data of human motion.

Specifically, a repository of automatically and accurately annotated video data of human motion has the potential to impact research in several disciplines that use human motion tracking and recognition, by significantly improving the ability to recognize behavioral markers for assessing the combined effects of environment, drugs, and human psychology, and thus to extend clinical and psychological research. However, due to big data challenges (the volume, velocity, variety, and veracity of multisensory visual input of human behavior), the tasks of automating data collection and especially annotation are non-trivial. Traditionally, video data of human motion are collected and annotated manually. Given the large amount of data generated by video sensors, this task is tedious and error-prone. For this reason, most existing datasets are small and specialized to the problem for which they were collected. This project will gather input from the research user communities on how best to develop such a CRI repository of human activity. A workshop will be organized with invited experts who will help identify the specific human activity data that should be collected and processed. The input solicited during the workshop will help define the features and facilities of the proposed SMART-MOVE spatiotemporally annotated human motion repository, focusing on the following:

  1. Determining the specific activities to be captured and the methods of multisensory data collection.
  2. Tools for accurate real-time annotation and storage of the collected data.
  3. Search facilities to enable researchers to contribute and provide feedback.
  4. Event recognition and data summarization tools to enable researchers to analyze new, large-scale data of their own automatically.
  5. Feasibility of the proposed data collection and analysis using a strategically chosen set of multi-stream data to keep the overall project cost low.
  6. Limitations when modeling heterogeneous human activity data collected by different sensors or at different sampling frequencies (see the sketch after this list).
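
As a concrete illustration of items 2 and 6, here is a minimal Python sketch; it is not part of the SMART-MOVE tools, and the record fields, function names, and sampling rates are illustrative assumptions. It shows a simple spatiotemporal annotation record and the common linear-interpolation approach to aligning a high-rate sensor stream with a lower-rate video clock.

```python
"""Hypothetical sketch of a spatiotemporal annotation record (item 2)
and multi-rate sensor alignment (item 6). All names are illustrative."""
from dataclasses import dataclass

import numpy as np


@dataclass
class Annotation:
    """One spatiotemporal label attached to a recorded session."""
    subject_id: str
    activity: str        # e.g. "sit-to-stand"
    t_start: float       # seconds from session start
    t_end: float
    bbox: tuple          # (x, y, w, h) region of interest in the frame


def resample_to_common_clock(t_src, x_src, t_common):
    """Linearly interpolate a 1-D sensor stream onto a shared timeline."""
    return np.interp(t_common, t_src, x_src)


if __name__ == "__main__":
    # A 100 Hz accelerometer stream and a 30 Hz video frame clock.
    t_accel = np.arange(0.0, 5.0, 1 / 100)
    accel_x = np.sin(2 * np.pi * t_accel)        # synthetic signal
    t_video = np.arange(0.0, 5.0, 1 / 30)

    # One accelerometer value per video frame, so both streams can be
    # annotated and queried against the same timestamps.
    aligned = resample_to_common_clock(t_accel, accel_x, t_video)
    print(aligned.shape)                         # (150,)

    label = Annotation("subj01", "sit-to-stand", 1.2, 2.8, (120, 80, 60, 140))
    print(label)
```

Linear interpolation is only the simplest alignment strategy; in practice, clock drift and cross-device timestamp synchronization are often the harder problems, which is why item 6 calls out heterogeneous sensors and sampling frequencies explicitly.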

Workshops


July 1-3, 2015 @ Corfu Holiday Palace, Corfu Island, Greece

A panel session will be held in conjunction with the 8th Annual PETRA Conference (PErvasive Technologies Related to Assistive Environments).


May 1-2, 2015 @ Hilton Arlington, Arlington, Texas

Leadership

Datasets

Sponsoring Laboratory: Heracleia Human-Centered Computing Laboratory

Contact us at: heracleia.cri at gmail