Abstract:
Motivated by problems in machine learning, this dissertation advances the theory of imprecise probabilities, which offer a more flexible framework for representing uncertainty
than traditional precise probabilities. We investigate the mathematical
structure of imprecise probabilities while considering a plurality of semantics for them.
Although semantics and mathematics may initially appear independent, we observe that
the choice of semantics shapes the mathematical framework that emerges.
With the goal of expressing risk aversion and ambiguity aversion, Part I investigates imprecise probabilities primarily in the form of law invariant coherent risk measures. In machine learning, these can help to reduce the tail risk of a distribution and to guard against distributional shifts. Drawing on insights from the literature on rearrangement invariant Banach function spaces, we investigate the structure of law invariant coherent risk measures. In particular, we study in depth the tail sensitivity of such risk measures and show how it yields a stratification of this class.
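As one standard illustration (in the loss-based convention, not a result of this work), the Expected Shortfall, also known as Conditional Value-at-Risk, at level $\alpha \in (0,1]$ is given by
\[
\mathrm{ES}_\alpha(X) \;=\; \frac{1}{\alpha} \int_0^\alpha \mathrm{VaR}_u(X)\, \mathrm{d}u,
\]
where $\mathrm{VaR}_u(X)$ denotes the upper $u$-quantile of the loss $X$. It averages the worst $\alpha$-fraction of outcomes, depends on $X$ only through its distribution, and is thus a canonical law invariant coherent risk measure.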
In Part II, we move beyond law invariance and conduct a fundamental investigation of
imprecise probabilities from a generalized frequentist viewpoint. Here, we challenge the
assumption that a precise probability always suffices to capture the aggregate regularity
of a data sequence, and show how imprecise probability naturally arises in the general
case. To this end, we study imprecision under various data models. Furthermore, we propose
a general framework for the evaluation of imprecise forecasts under such data models.
Specifically, we develop viable notions of proper scoring rules and calibration for imprecise
probabilities, generalizing their traditional counterparts.
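In the precise case, the notion being generalized is classical propriety: a (negatively oriented) scoring rule $S$ is proper if truthful reporting is optimal in expectation,
\[
\mathbb{E}_{Y \sim P}\big[S(P, Y)\big] \;\le\; \mathbb{E}_{Y \sim P}\big[S(Q, Y)\big] \quad \text{for all forecasts } Q,
\]
while calibration requires, informally, that among instances receiving forecast $P$, the outcomes are indeed distributed according to $P$.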
Conceptually, our viewpoint on uncertainty may be of broader interest and also yields
insights into the precise case. Our focus on two key ingredients, data models and decision
problems, proves fruitful. Moreover, we illustrate how looking to insurance can deepen the
understanding of uncertainty in general. In Part III, we exemplify this perspective by
establishing bridges between fairness concepts in insurance and machine learning.