Unsupervised Representation Learning for Object-Centric and Neuronal Morphology Modeling

URI: http://hdl.handle.net/10900/149902
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1499022
http://dx.doi.org/10.15496/publikation-91242
Document type: PhD thesis
Date: 2024-01-26
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Informatik
Advisor: Ecker, Alexander (Prof. Dr.)
Day of Oral Examination: 2023-12-19
DDC Classification: 004 - Data processing and computer science
500 - Natural sciences and mathematics
570 - Life sciences; biology
Keywords: Machine learning, Deep learning, Unsupervised learning, Neural network, Artificial intelligence, Neuroscience
Other Keywords:
Machine Learning
Representation Learning
Unsupervised Learning
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en

Abstract:

A key feature of intelligent systems is the ability to generalize beyond one’s experiences. The process of deriving general rules from limited observations is the principle of induction. Generalizing from a finite number of observations to future samples requires a priori assumptions. In machine learning, the assumptions made by the learning algorithm are called its inductive biases. To design successful learning systems, it is essential to ensure that the inductive biases of the system align well with the structure of the data. Conversely, to understand why learning systems fail in a particular way, it is crucial to understand their inherent assumptions and biases. In this dissertation, we study unsupervised representation learning in two different application domains. We look through the lens of evaluation to unmask inductive biases in object-centric models in computer vision, and we show how to successfully employ inductive biases to integrate domain knowledge for modeling neuronal morphologies. First, we establish a benchmark for object-centric video representations to analyze the strengths and weaknesses of current models. Our results demonstrate that the examined object-centric models encode strong inductive biases, such as a tendency to segment mostly by color, that work well for synthetic data but fail to generalize to real-world videos. Second, we propose a self-supervised model that captures the essence of neuronal morphologies. We demonstrate that by encoding domain knowledge about neuronal morphologies into our model in the form of appropriate inductive biases, it can learn useful representations from limited data and outperform both previous models and expert-defined features on downstream tasks such as cell type classification. Third, we apply our model for neuronal morphologies to a large-scale dataset of neurons from the mouse visual cortex and demonstrate its utility for analyzing biological data. We show that our learned representations capture the morphological diversity of cortical pyramidal cells and enable data analysis of neuronal morphologies at an unprecedented scale. We use the learned embeddings to describe the organization of neuronal morphologies in the mouse visual cortex, to discover a new cell type, and to analyze differences across cortical areas and layers. Taken together, our findings indicate that identifying the implicit inductive biases of object-centric models is necessary for understanding their failure modes. Conversely, tailored inductive biases that take the intricacies of the domain into account enable the successful design of machine learning models for neuronal morphologies.
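As a purely illustrative aside, the downstream evaluation mentioned in the abstract (cell type classification on top of learned representations) is commonly set up as a linear probe: the self-supervised embeddings are frozen and a simple classifier is trained on them. The sketch below shows this protocol with synthetic placeholder data using NumPy and scikit-learn; all names, shapes, and numbers are assumptions for illustration and are not the thesis code.

```python
# Hypothetical linear-probe evaluation of frozen morphology embeddings.
# Everything here (embedding_dim, number of cell types, the synthetic data)
# is an illustrative placeholder, not the model or data from the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for learned embeddings (n_neurons x embedding_dim) and their
# cell-type labels; in practice these would come from the self-supervised
# model and from expert annotations.
n_neurons, embedding_dim, n_cell_types = 500, 32, 5
labels = rng.integers(0, n_cell_types, size=n_neurons)
# Give each cell type a distinct mean so the toy task is learnable.
class_means = rng.normal(size=(n_cell_types, embedding_dim))
embeddings = class_means[labels] + 0.5 * rng.normal(size=(n_neurons, embedding_dim))

# Linear probe: if the frozen embeddings capture cell-type structure,
# even a simple linear classifier should separate the classes well.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, embeddings, labels, cv=5)
print(f"5-fold linear-probe accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The design choice behind such a probe is that it keeps the classifier deliberately weak, so high accuracy can be attributed to the quality of the learned representation rather than to the downstream model.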
