Imprecise Probabilities in Machine Learning: Structure and Semantics




Citable link (URI): http://hdl.handle.net/10900/171001
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1710018
Document type: Dissertation
Date of publication: 2025-10-13
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Computer Science
Reviewer: Williamson, Robert C. (Prof. PhD)
Date of oral examination: 2025-07-23
DDC classification: 500 - Natural sciences
510 - Mathematics
Keywords: Probability, Uncertainty, Machine Learning, Risk
Additional keywords:
imprecise probability
coherent risk measures
machine learning
License: http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en

Abstract:

Motivated by problems in machine learning, this dissertation advances the theory of imprecise probabilities, which offers a more flexible framework for representing uncertainty than traditional precise probabilities. In this work, we investigate the mathematical structure of imprecise probabilities while considering a plurality of semantics for them. Although semantics and mathematics may initially appear independent, we observe that the choice of semantics shapes the mathematical framework that emerges.

With the goal of expressing risk aversion and ambiguity aversion, Part I investigates imprecise probabilities primarily in the form of law invariant coherent risk measures. In machine learning, these can help reduce the tail risk of a distribution and guard against distributional shifts. Drawing on insights from the literature on rearrangement invariant Banach function spaces, we investigate the structure of law invariant coherent risk measures. In particular, we study in depth the tail sensitivity of such risk measures and show how this yields a stratification of the law invariant coherent risk measures.

In Part II, we move beyond law invariance and conduct a fundamental investigation of imprecise probabilities from a generalized frequentist viewpoint. Here, we challenge the assumption that a precise probability always suffices to capture the aggregate regularity of a data sequence, and show how imprecise probability naturally arises in the general case. To this end, we study imprecision under various data models. We furthermore propose a general framework for the evaluation of imprecise forecasts under such data models. Specifically, we develop viable notions of proper scoring rules and calibration for imprecise probabilities, generalizing their classical counterparts. Conceptually, our viewpoint on uncertainty may be of broader interest and also yields insights into the precise case.
Our focus on two key ingredients, data models and decision problems, proves fruitful. Moreover, we illustrate how looking to insurance can deepen the understanding of uncertainty in general. In Part III, we exemplify this perspective by establishing bridges between fairness concepts in insurance and machine learning.
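As an illustrative example not taken from the abstract itself: a standard instance of a law invariant coherent risk measure is the Conditional Value-at-Risk (also known as Expected Shortfall). For a loss random variable $Z$ with quantile function $F_Z^{-1}$ and a level $\alpha \in (0,1)$, it can be written as

```latex
% CVaR at level alpha: the average of the worst (1 - alpha) fraction of losses.
% Law invariance: the value depends on Z only through its distribution F_Z.
\mathrm{CVaR}_\alpha(Z) \;=\; \frac{1}{1-\alpha} \int_\alpha^1 F_Z^{-1}(u)\,\mathrm{d}u .
```

Coherence here means monotonicity, translation equivariance, positive homogeneity, and subadditivity; sensitivity to the upper tail of $F_Z$ is what makes such measures useful for controlling tail risk in learning problems.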
