Towards Disentangled Representation Learning in Practice

DSpace Repository


URI: http://hdl.handle.net/10900/170059
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1700593
http://dx.doi.org/10.15496/publikation-111386
Document type: PhDThesis
Date: 2025-09-08
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Computer Science
Advisor: Brendel, Wieland (Prof. Dr.)
Day of Oral Examination: 2024-09-16
DDC Classification: 004 - Data processing and computer science
Other Keywords:
representation learning
disentanglement
self-supervised learning
unsupervised learning
concept learning
License: http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en

Abstract:

While the success of deep learning is underpinned by learning representations of data, what information the learned representations extract remains a mystery. In our first contribution (C1), we show that state-of-the-art approaches to self-supervised visual representation learning extract the aspects, or factors of variation (FoVs), of the data that are invariant to the data augmentations applied during training, while discarding the variant FoVs. Studying augmentations used in practice, we find that although object class is left invariant, position, hue, and rotation information tend to be discarded, which is problematic for tasks beyond object recognition, e.g., object localization. In our second contribution (C2), we show that such approaches can yield disentangled representations, in which every FoV is extracted separately in the representation, provided that all FoVs are variant to the augmentations; notably, this assumption is not met by augmentations used in practice. In our third contribution (C3), we present evidence that this assumption can be met in natural video, where FoVs undergo transitions that are typically small in magnitude with occasional large jumps, characteristic of a temporally sparse distribution. While challenges remain for real-world disentanglement, our contributions provide guidance to the field in the pursuit of progress in representation learning.
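The invariance mechanism described for C1 can be illustrated with a toy sketch (not code from the thesis). Here a hue-jitter augmentation is applied to two views of the same hypothetical HSV image: a feature that is invariant to the augmentation (a brightness histogram) agrees across views and would survive an invariance-based training objective, while a variant feature (mean hue) differs across views and would be discarded. All function names and the choice of features are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_hue(hsv):
    # Hue-jitter augmentation: shift the hue channel by a random offset (mod 1).
    out = hsv.copy()
    out[..., 0] = (out[..., 0] + rng.uniform(0.0, 1.0)) % 1.0
    return out

def invariant_feature(hsv):
    # Brightness (value-channel) histogram: unaffected by hue shifts.
    return np.histogram(hsv[..., 2], bins=8, range=(0.0, 1.0))[0]

def variant_feature(hsv):
    # Mean hue: destroyed by the hue-jitter augmentation.
    return hsv[..., 0].mean()

img = rng.uniform(size=(16, 16, 3))          # toy image in HSV, channels in [0, 1]
view1, view2 = augment_hue(img), augment_hue(img)

# The invariant feature agrees across augmented views of the same image,
# so an objective that matches views can retain it ...
assert np.array_equal(invariant_feature(view1), invariant_feature(view2))
# ... while the variant feature disagrees across views, so matching views
# pushes the representation to discard it.
assert variant_feature(view1) != variant_feature(view2)
```

This mirrors the abstract's claim at a caricature level: whatever the augmentation randomizes (here, hue) cannot be read out of an augmentation-invariant representation, which is exactly why such representations suit object recognition but not, say, hue prediction.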
