Gaussian Process Dynamical Models for Small-Data Human Motion Synthesis: From Hierarchical Models to Mixture-of-Experts Frameworks

Citable link (URI): http://hdl.handle.net/10900/169523
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1695236
Document type: Dissertation
Date of publication: 2025-08-26
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Computer Science
Advisor: Giese, Martin (Prof. Dr.)
Date of oral examination: 2025-07-16
DDC classification: 004 - Computer Science
Keywords: Machine learning, probability, movement, robotics
License: http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en

Abstract:

This thesis details our development of probabilistic architectures based on Gaussian process dynamical models (GPDMs) to address fundamental challenges in human motion modeling. We present a systematic progression from hierarchical GPDMs to the novel Gaussian Process Dynamical Mixture Model (GPDMM), demonstrating how probabilistic approaches can effectively model complex human movements while maintaining interpretability and computational efficiency. We first establish the effectiveness of hierarchical GPDMs in prosthetic control, showing how these models combine EMG signals with kinematic data to predict hand movements. This work reveals both the potential and the limitations of single-class approaches, motivating the development of more sophisticated frameworks for handling multiple movement classes. Building on these insights, we introduce novel methods for latent space optimization, including the Gaussian process (GP) back-constraint and a GP-based information bottleneck feature selection approach. We then demonstrate that careful initialization with appropriate geometric features can achieve superior performance while simplifying the model architecture, challenging conventional approaches that emphasize explicit topological constraints. The culmination of this work is the GPDMM, which integrates multiple GPDMs in a probabilistic mixture-of-experts framework, enabling both the classification and generation of movement sequences. We demonstrate the GPDMM's viability on these tasks in single-example learning, comparing it with some of the most widely used deep learning approaches (transformers, VAEs, and LSTMs), which struggled in our data-constrained settings. Our work establishes that principled probabilistic approaches to motion modeling, such as GPDM-based models, can achieve state-of-the-art performance when designed deliberately, while maintaining their strengths in interpretability and small-data efficiency. These contributions advance both theoretical understanding and practical implementation in fields ranging from computer animation to prosthetic control.
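
For orientation, a GPDM in the sense used above couples a low-dimensional latent trajectory to the observed poses through two Gaussian process mappings, and a mixture of such models can classify a sequence by its posterior responsibility. The notation below (latent states x_t, observed poses y_t, kernels k_x and k_y, class priors \pi_k) is introduced here purely for illustration and need not match the thesis; the equations are a minimal sketch of the standard GPDM and a generic mixture-of-experts classification rule, not the thesis's exact formulation.

    x_t = f(x_{t-1}) + \epsilon_x, \quad f \sim \mathcal{GP}\big(0,\, k_x(\cdot,\cdot)\big)
    y_t = g(x_t) + \epsilon_y, \quad g \sim \mathcal{GP}\big(0,\, k_y(\cdot,\cdot)\big)
    p(c = k \mid Y) \;\propto\; \pi_k \, p(Y \mid \mathrm{GPDM}_k), \qquad \hat{c} = \arg\max_k \, p(c = k \mid Y)

In such a setup, generation proceeds by rolling the learned latent dynamics f forward and decoding each latent state through g, while classification compares the marginal likelihood of a new sequence under each expert.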
