| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Kasneci, Gjergji (Prof. Dr.) | |
| dc.contributor.author | Pawelczyk, Martin | |
| dc.date.accessioned | 2024-01-19T09:35:44Z | |
| dc.date.available | 2024-01-19T09:35:44Z | |
| dc.date.issued | 2024-01-19 | |
| dc.identifier.uri | http://hdl.handle.net/10900/149388 | |
| dc.identifier.uri | http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1493882 | de_DE |
| dc.identifier.uri | http://dx.doi.org/10.15496/publikation-90728 | |
| dc.description.abstract | The recent widespread deployment of machine learning algorithms presents many new challenges. Machine learning algorithms are usually opaque and can be particularly difficult to interpret. When humans are involved, algorithmic and automated decisions can negatively impact people’s lives. Therefore, end users would like to be protected against potential harm. One popular way to achieve this is to provide end users with access to algorithmic recourse, which gives those negatively affected by algorithmic decisions the opportunity to reverse unfavorable decisions, e.g., from a loan denial to a loan acceptance. In this thesis, we design recourse algorithms to meet various end user needs. First, we propose methods for the generation of realistic recourses. We use generative models to suggest recourses that are likely to occur under the data distribution. To this end, we shift the recourse action from the input space to the generative model’s latent space, which allows us to generate counterfactuals that lie in regions with data support. Second, we observe that small changes to the recourses prescribed to end users, for example when they are implemented noisily in practice, are likely to invalidate the suggested recourse. Motivated by this observation, we design methods for the generation of robust recourses and for assessing the robustness of recourse algorithms to data deletion requests. Third, the lack of a commonly used codebase for counterfactual explanation and algorithmic recourse algorithms, together with the vast array of evaluation measures in the literature, makes it difficult to compare the performance of different algorithms. To solve this problem, we provide an open-source benchmarking library that streamlines the evaluation process and can be used for benchmarking, rapidly developing new methods, and setting up new experiments. In summary, our work contributes to a more reliable interaction between end users and machine learning models by covering fundamental aspects of the recourse process, and it suggests new solutions for generating realistic and robust counterfactual explanations for algorithmic recourse. | en |
| dc.language.iso | en | de_DE |
| dc.publisher | Universität Tübingen | de_DE |
| dc.rights | ubt-podok | de_DE |
| dc.rights.uri | http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de | de_DE |
| dc.rights.uri | http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en | en |
| dc.subject.ddc | 004 | de_DE |
| dc.subject.other | interpretability | en |
| dc.subject.other | xai | en |
| dc.subject.other | counterfactuals | en |
| dc.subject.other | counterfactual explanations | en |
| dc.subject.other | Algorithmic recourse | en |
| dc.title | On the Generation of Realistic and Robust Counterfactual Explanations for Algorithmic Recourse | en |
| dc.type | PhDThesis | de_DE |
| dcterms.dateAccepted | 2023-06-19 | |
| utue.publikation.fachbereich | Informatik | de_DE |
| utue.publikation.fakultaet | 7 Mathematisch-Naturwissenschaftliche Fakultät | de_DE |
| utue.publikation.noppn | yes | de_DE |