Computer Vision Understanding of Narrative Strategies on Greek Vases

DSpace Repository (Manakin-based)

Citable link (URI): http://hdl.handle.net/10900/173709
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1737099
http://dx.doi.org/10.15496/publikation-115034
Document type: Conference paper
Date of publication: 2026-03
Language: English
Faculty: 9 Other / External
Department: Faculty of Protestant Theology
DDC classification: 930 - Ancient history, archaeology
Keywords: Archaeology, Computer vision
Free keywords:
Attic Black-Figure Vase Painting
Schemata
Object Detection
Pose Estimation
Computer Vision

Abstract:

This paper addresses the problem of analysing image narration in artworks of Classical Antiquity with computer vision (CV) techniques based on machine learning. The challenge of understanding image semantics beyond the automated recognition of individual objects is approached by computationally identifying general formal parameters in the depiction of human interactions. The first part introduces a case study of a popular formal scheme in Attic black-figure vase painting of the 6th century BCE: an interaction between a warrior and a woman whose relationship is variously outlined by significant postures and gestures. Besides presenting a new interpretation of the scheme and exemplifying its transfer to other image contexts, we focus on the formal diversity of the scheme and on how certain parameters of depicted human interaction (the posture of the figures, the orientation of head and feet, and body contact) contribute to the image narration. The second part tackles these parameters, as well as the challenges of working with images of ancient vase paintings, from a CV perspective. Here we introduce a method to mitigate the domain shift, which improves pose estimation of the figures and subsequently aids an effective retrieval of figures with similar postures. We demonstrate the usefulness of our method with a web-based tool.
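The retrieval of figures with similar postures mentioned in the abstract could, in principle, be sketched as a nearest-neighbour search over estimated 2D keypoints. The sketch below is a minimal illustration under assumed conventions (keypoints as a `(K, 2)` array, cosine similarity on translation- and scale-normalized poses); the function names are hypothetical and this is not the authors' actual method.

```python
import numpy as np

def normalize_pose(keypoints):
    """Center a (K, 2) array of 2D keypoints on its centroid and scale it
    to unit norm, removing translation and overall scale differences."""
    kp = np.asarray(keypoints, dtype=float)
    kp = kp - kp.mean(axis=0)
    norm = np.linalg.norm(kp)
    return kp / norm if norm > 0 else kp

def pose_similarity(a, b):
    """Cosine similarity between two flattened, normalized poses (max 1.0)."""
    va = normalize_pose(a).ravel()
    vb = normalize_pose(b).ravel()
    return float(np.dot(va, vb))

def retrieve_similar(query, database, top_k=3):
    """Return (index, score) pairs for the top_k database poses
    most similar to the query pose."""
    scores = [pose_similarity(query, p) for p in database]
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), scores[i]) for i in order]
```

Because the poses are centered and rescaled before comparison, two figures drawn at different positions or sizes on a vase but with the same posture score as identical; orientation and body contact would need additional parameters on top of such a measure.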
