Uncertainties of Latent Representations in Computer Vision


dc.contributor.advisor Kasneci, Enkelejda (Prof. Dr.)
dc.contributor.author Kirchhof, Michael
dc.date.accessioned 2024-08-14T07:56:17Z
dc.date.available 2024-08-14T07:56:17Z
dc.date.issued 2024-08-14
dc.identifier.uri http://hdl.handle.net/10900/156771
dc.identifier.uri http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1567713 de_DE
dc.identifier.uri http://dx.doi.org/10.15496/publikation-98103
dc.description.abstract Uncertainty quantification is a key pillar of trustworthy machine learning. It enables safe reactions to unsafe inputs, such as predicting only when the model has gathered sufficient evidence, discarding anomalous data, or emitting warnings when an error is likely. This is particularly crucial in safety-critical areas such as medical image classification or self-driving cars. Despite a plethora of proposed uncertainty quantification methods achieving ever higher scores on performance benchmarks, uncertainty estimates are often avoided in practice. Many machine learning projects start from pretrained latent representations that come without uncertainty estimates, so practitioners would need to train uncertainties on their own, which is notoriously difficult and resource-intensive. This thesis makes uncertainty estimates easily accessible by adding them to the latent representation vectors of pretrained computer vision models. Besides proposing approaches rooted in probability and decision theory, such as Monte-Carlo InfoNCE (MCInfoNCE) and loss prediction, we delve into both theoretical and empirical questions. We show that these unobservable uncertainties about unobservable latent representations are provably correct. We also provide an uncertainty-aware representation learning (URL) benchmark to compare these unobservables against observable ground truths. Finally, we compile our findings to pretrain lightweight representation uncertainties on large-scale computer vision models that transfer to unseen datasets in a zero-shot manner. Our findings not only advance the current theoretical understanding of uncertainties over latent variables but also facilitate access to uncertainty quantification for future researchers inside and outside the field. As downloadable starting points, our pretrained representation uncertainties enable a range of novel practical tasks for straightforward but trustworthy machine learning. en
dc.language.iso en de_DE
dc.publisher Universität Tübingen de_DE
dc.rights ubt-podok de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en en
dc.subject.classification Maschinelles Lernen, Wahrscheinlichkeitstheorie, Maschinelles Sehen de_DE
dc.subject.ddc 004 de_DE
dc.subject.ddc 510 de_DE
dc.subject.other Probabilistic Embeddings en
dc.subject.other Latent Representations en
dc.subject.other Embeddings en
dc.title Uncertainties of Latent Representations in Computer Vision en
dc.type PhDThesis de_DE
dcterms.dateAccepted 2024-06-17
utue.publikation.fachbereich Informatik de_DE
utue.publikation.fakultaet 7 Mathematisch-Naturwissenschaftliche Fakultät de_DE
utue.publikation.noppn yes de_DE
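
To make the abstract's core idea concrete: it names Monte-Carlo InfoNCE (MCInfoNCE), a contrastive loss estimated over probabilistic embeddings, as one way to attach uncertainty estimates to the representation vectors of a pretrained vision model. The following is a minimal illustrative sketch of that general idea, not the thesis's actual implementation. Everything here is an assumption for illustration: the ProbabilisticHead class, the mc_infonce function, the choice of a re-normalized Gaussian perturbation as a stand-in for sampling a distribution on the unit sphere, and all dimensions are hypothetical.

# Hypothetical sketch: a Monte-Carlo estimate of an InfoNCE-style loss over
# probabilistic embeddings, loosely following the MCInfoNCE idea named in
# the abstract. A lightweight head on top of frozen backbone features
# predicts a mean direction plus a scalar uncertainty, and the expected
# contrastive loss is estimated by sampling.

import torch
import torch.nn.functional as F

class ProbabilisticHead(torch.nn.Module):
    """Maps a frozen backbone embedding to a mean direction on the unit
    sphere and a scalar sampling scale (larger scale = more uncertain)."""
    def __init__(self, dim: int):
        super().__init__()
        self.mean = torch.nn.Linear(dim, dim)
        self.log_scale = torch.nn.Linear(dim, 1)

    def forward(self, z: torch.Tensor):
        mu = F.normalize(self.mean(z), dim=-1)
        scale = self.log_scale(z).exp()
        return mu, scale

def mc_infonce(mu: torch.Tensor, scale: torch.Tensor,
               mu_pos: torch.Tensor, scale_pos: torch.Tensor,
               n_samples: int = 16, temperature: float = 0.1) -> torch.Tensor:
    """Monte-Carlo estimate of the expected InfoNCE loss under the sampling
    distributions. Positives are matched row-wise; all other rows in the
    batch act as negatives."""
    losses = []
    for _ in range(n_samples):
        # Re-normalized Gaussian perturbation as an illustrative stand-in
        # for sampling a spherical distribution (e.g. von Mises-Fisher).
        s = F.normalize(mu + scale * torch.randn_like(mu), dim=-1)
        s_pos = F.normalize(mu_pos + scale_pos * torch.randn_like(mu_pos), dim=-1)
        logits = s @ s_pos.T / temperature               # (B, B) similarities
        labels = torch.arange(len(s), device=s.device)   # diagonal = positives
        losses.append(F.cross_entropy(logits, labels))
    return torch.stack(losses).mean()

# Toy usage: random features stand in for a pretrained, frozen backbone.
backbone_dim = 128
head = ProbabilisticHead(backbone_dim)
z1, z2 = torch.randn(32, backbone_dim), torch.randn(32, backbone_dim)
loss = mc_infonce(*head(z1), *head(z2))
loss.backward()  # gradients flow only into the lightweight head

The design point this sketch tries to convey matches the abstract's framing: the backbone stays fixed and only a small uncertainty head is trained on top of its latent representations, which is what makes such uncertainties cheap to pretrain and distribute.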
