Context-based 3D Animations of Vehicles, Human and Animal Figurines
Description
The goal of this thesis is to animate 3D objects such as vehicles, humans, and animals based on multimodal contextual information.
A simple example: real-world 3D trajectory data of the object can be used to classify whether the object is moving or idle. Based on the classification result, the corresponding animation is played on the object: a breathing animation if the object is idle, and a walking or running animation if it is in motion.
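A minimal sketch of this idle/moving decision, assuming positions arrive as (x, y, z) tuples at a fixed sample rate; the speed thresholds and clip names are illustrative assumptions, not values from the actual tracking system:

```python
import math

def mean_speed(trajectory, dt):
    """Average speed over a list of (x, y, z) positions sampled every dt seconds."""
    total = 0.0
    for p0, p1 in zip(trajectory, trajectory[1:]):
        total += math.dist(p0, p1) / dt
    return total / max(len(trajectory) - 1, 1)

def pick_animation(trajectory, dt=1 / 30, idle_threshold=0.05, run_threshold=1.5):
    """Map the motion state derived from the trajectory to an animation clip name."""
    speed = mean_speed(trajectory, dt)
    if speed < idle_threshold:
        return "breathing"
    return "walking" if speed < run_threshold else "running"

# Example: a nearly stationary object is classified as idle.
print(pick_animation([(0.0, 0.0, 0.0), (0.001, 0.0, 0.0), (0.002, 0.0, 0.0)]))
```

In a real pipeline, the thresholds would likely need tuning per object class, since a walking dog and a moving vehicle occupy very different speed ranges.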
This idea can be extended to produce more complex animations. For example, if a dog gets wet in the rain during an evolving story, the system should next play a "shaking off water" animation.
Possible steps include:
- Using our Large Language Model-based system to generate novel animations.
- Designing and evaluating a novel Machine Learning model that decides which animation to play based on the 3D trajectories of the objects, the semantic and geometric configuration of the 3D scenegraph, user input, and the context of an evolving story (a rough interface sketch follows this list). The 3D trajectories can be obtained from our already operational pose tracking system.
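As a rough illustration of the second step, the sketch below wires trajectory statistics and precomputed context embeddings into an off-the-shelf classifier. The feature layout, embedding sizes, synthetic data, and the choice of a random forest are all assumptions for demonstration, not the model to be designed in the thesis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ANIMATIONS = ["breathing", "walking", "running", "shaking_off_water"]

def build_features(trajectory, scenegraph_emb, story_emb):
    """Concatenate simple trajectory statistics with precomputed context embeddings."""
    traj = np.asarray(trajectory)            # shape (T, 3) positions
    vel = np.diff(traj, axis=0)              # per-step displacement
    stats = np.concatenate([vel.mean(axis=0), vel.std(axis=0)])
    return np.concatenate([stats, scenegraph_emb, story_emb])

# Toy synthetic dataset, only to show the training/prediction interface.
rng = np.random.default_rng(0)
X = np.stack([
    build_features(rng.normal(size=(30, 3)), rng.normal(size=8), rng.normal(size=8))
    for _ in range(64)
])
y = rng.integers(0, len(ANIMATIONS), size=64)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(ANIMATIONS[clf.predict(X[:1])[0]])
```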
Prerequisites
- Working knowledge of Blender
- Python
- Initial experience in training and evaluation of Machine Learning models
Supervisor:
3D Scene Navigation Using Free-hand Gestures
3D, Blender, Python, hand tracking, gesture recognition
Description
The goal of this bachelor thesis project is to design and evaluate a 3D scene navigation system based on free-hand gestures.
Possible steps include:
- Modeling a 3D world in Blender (an existing pre-designed world may also be used, e.g. from Sketchfab)
- Designing a distinct set of hand gestures that allows comprehensive navigation of the 3D world (i.e. to control camera translation and rotation based on hand gestures). It should be possible for the user to navigate to any place in the 3D world quickly, efficiently, and intuitively.
- The Google MediaPipe framework can be used to detect and track hand keypoints (a minimal tracking sketch follows this list). On top of that, a novel gesture recognition model should be trained and evaluated.
- Comparing, contrasting, and benchmarking the performance of this system against the standard keyboard+mouse-based navigation capabilities offered by Blender.
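A minimal sketch of the keypoint-tracking step, assuming a webcam and the MediaPipe Hands solution; the wrist-as-joystick camera mapping below is a hand-written placeholder standing in for the trained gesture recognition model:

```python
import cv2
import mediapipe as mp

# Track up to 21 hand keypoints per frame with the MediaPipe Hands solution.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

for _ in range(300):                         # roughly 10 s of webcam frames
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark   # 21 normalized keypoints
        # Placeholder mapping (an assumption): treat the wrist position as a
        # virtual joystick for camera yaw/pitch. A trained gesture classifier
        # would replace this rule and drive Blender's camera instead.
        wrist = lm[0]
        yaw = (wrist.x - 0.5) * 2.0          # left/right of frame centre
        pitch = (wrist.y - 0.5) * 2.0        # above/below frame centre
        print(f"camera yaw={yaw:+.2f} pitch={pitch:+.2f}")

cap.release()
hands.close()
```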
Start date: 01.04.2025
Prerequisites
- Working knowledge of Blender and Python
- Interest in 3D worlds and human-computer interaction