Distribution-Dissimilarities in Machine Learning

Citable link (URI): http://hdl.handle.net/10900/87256
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-872561
http://dx.doi.org/10.15496/publikation-28642
Document type: Dissertation
Date of publication: 2019-03-27
Language: English
Faculty: 7 Faculty of Science (Mathematisch-Naturwissenschaftliche Fakultät)
Department: Computer Science
Advisor: Schölkopf, Bernhard (Prof. Dr.)
Date of oral examination: 2018-12-17
DDC classification: 004 - Computer science
500 - Natural sciences
Keywords: Machine Learning, Artificial Intelligence, Computer Vision, Learning Theory, Statistics, Probability Theory, Hilbert Space
Free keywords: Generative Algorithms
Adversarial Examples
Divergences
Distances for Probability Distributions
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en
Order a printed copy: Print-on-Demand

Abstract:

Any binary classifier (or score function) can be used to define a dissimilarity between two probability distributions. Many well-known distribution-dissimilarities are in fact classifier-based: the total variation distance, the Kullback-Leibler (KL) and Jensen-Shannon (JS) divergences, the Hellinger distance, etc. Moreover, many recent, popular generative-modeling algorithms compute or approximate these distribution-dissimilarities by explicitly training a classifier, e.g. generative adversarial networks (GANs) and their variants. This thesis introduces and studies such classifier-based distribution-dissimilarities. After a general introduction, the first part analyzes, for the special case of maximum mean discrepancies (MMDs), how the classifiers' capacity influences the strength of the dissimilarity, and presents applications. The second part studies classifier-based distribution-dissimilarities in the context of generative modeling and presents two new algorithms: Wasserstein Auto-Encoders (WAE) and AdaGAN. The third and final part focuses on adversarial examples, i.e. targeted but imperceptible input perturbations that lead to drastically different predictions by a classifier. It shows that the adversarial vulnerability of neural-network-based classifiers typically grows with the input dimension, independently of the network topology.
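
For orientation, one standard way to make the classifier-based view precise is the variational (integral probability metric) form; this is a textbook identity stated here for illustration, not a result specific to the thesis. For a class \mathcal{F} of classifiers or score functions, define

    D_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left( \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{Y \sim Q}[f(Y)] \right).

Taking \mathcal{F} = \{ f : \|f\|_\infty \le 1 \} recovers the total variation distance (up to a convention-dependent constant); taking \mathcal{F} to be the unit ball of a reproducing kernel Hilbert space \mathcal{H}_k with kernel k yields the maximum mean discrepancy, which admits the closed form

    \mathrm{MMD}_k(P, Q)^2 = \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')],

with X, X' \sim P and Y, Y' \sim Q independent. The richer the class \mathcal{F}, the stronger the resulting dissimilarity; this is exactly the capacity question the first part investigates.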
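
To make the MMD concrete, here is a minimal NumPy sketch of the standard biased (V-statistic) estimator of MMD^2 with a Gaussian kernel; this is an illustration, not code from the thesis, and the function names and bandwidth choice are placeholders.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Pairwise Gaussian kernel matrix between the rows of X (n, d) and Y (m, d).
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Y**2, axis=1)[None, :]
                    - 2.0 * X @ Y.T)
        return np.exp(-sq_dists / (2.0 * sigma**2))

    def mmd2_biased(X, Y, sigma=1.0):
        # Biased (V-statistic) estimate of MMD^2:
        # mean k(x, x') - 2 * mean k(x, y) + mean k(y, y').
        return (gaussian_kernel(X, X, sigma).mean()
                - 2.0 * gaussian_kernel(X, Y, sigma).mean()
                + gaussian_kernel(Y, Y, sigma).mean())

    # Example: two Gaussian samples whose means differ; the estimate is
    # clearly positive, and shrinks toward 0 when the distributions match.
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(500, 2))
    Y = rng.normal(0.5, 1.0, size=(500, 2))
    print(mmd2_biased(X, Y))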
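
The dimension-dependence claim of the final part can be previewed with a classic first-order heuristic, offered only as intuition rather than as the thesis's argument: for a linear score f(x) = w^\top x, the worst-case output change under an \ell_\infty-bounded perturbation is

    \max_{\|\delta\|_\infty \le \varepsilon} \, w^\top (x + \delta) - w^\top x = \varepsilon \|w\|_1,

attained at \delta = \varepsilon\,\mathrm{sign}(w). If the weights have comparable magnitudes with \|w\|_2 \approx 1, then \|w\|_1 \approx \sqrt{d}, so a perturbation that is imperceptibly small in every coordinate shifts the output by roughly \varepsilon\sqrt{d}, i.e. it grows with the input dimension d.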
