Imprecise Probabilities in Machine Learning: Structure and Semantics


dc.contributor.advisor Williamson, Robert C. (Prof. PhD)
dc.contributor.author Fröhlich, Christian
dc.date.accessioned 2025-10-13T14:21:17Z
dc.date.available 2025-10-13T14:21:17Z
dc.date.issued 2025-10-13
dc.identifier.uri http://hdl.handle.net/10900/171001
dc.identifier.uri http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1710018 de_DE
dc.description.abstract Motivated by problems in machine learning, this dissertation advances the theory of imprecise probabilities, which offers a more flexible framework for representing uncertainty than traditional precise probabilities. In this work, we investigate the mathematical structure of imprecise probabilities, while considering a plurality of semantics for them. Although semantics and mathematics may initially appear independent, we observe that the choice of semantics shapes the mathematical framework that emerges. With the goal of expressing risk aversion and ambiguity aversion, Part I investigates imprecise probabilities primarily in the form of law invariant coherent risk measures. In machine learning, these can help to reduce the tail risk in a distribution, as well as to guard against distributional shifts. Drawing on insights from the literature on rearrangement invariant Banach function spaces, we investigate the structure of law invariant coherent risk measures. In particular, we study in depth the tail sensitivity of such risk measures and show how this yields a stratification of the law invariant coherent risk measures. In Part II, we move beyond law invariance and conduct a fundamental investigation of imprecise probabilities from a generalized frequentist viewpoint. Here, we challenge the assumption that a precise probability always suffices to capture the aggregate regularity of a data sequence, and show how imprecise probability naturally arises in the general case. To this end, we study imprecision under various data models. We furthermore propose a general framework for the evaluation of imprecise forecasts under such data models. Specifically, we develop viable notions of proper scoring rules and calibration for imprecise probabilities, generalizing their traditional counterparts. Conceptually, our viewpoint on uncertainty may be of broader interest and also yields insights into the precise case.
Our focus on two key ingredients, data models and decision problems, proves fruitful. Moreover, we illustrate how looking to insurance can aid a better understanding of uncertainty in general. In Part III, we exemplify this perspective by establishing bridges between fairness concepts in insurance and machine learning. en
dc.language.iso en de_DE
dc.publisher Universität Tübingen de_DE
dc.rights ubt-podno de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en en
dc.subject.classification Wahrscheinlichkeit , Unsicherheit , Maschinelles Lernen , Risiko de_DE
dc.subject.ddc 500 de_DE
dc.subject.ddc 510 de_DE
dc.subject.other imprecise probability en
dc.subject.other coherent risk measures en
dc.subject.other machine learning en
dc.title Imprecise Probabilities in Machine Learning: Structure and Semantics en
dc.type PhDThesis de_DE
dcterms.dateAccepted 2025-07-23
utue.publikation.fachbereich Informatik de_DE
utue.publikation.fakultaet 7 Mathematisch-Naturwissenschaftliche Fakultät de_DE
utue.publikation.noppn yes de_DE
