Hot Topics Surrounding Acceptability Judgement Tasks

URI: http://hdl.handle.net/10900/77638
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-776386
http://dx.doi.org/10.15496/publikation-19039
Document type: ConferenceObject
Date: 2017-08
Language: English
Faculty: 5 Philosophische Fakultät
Department: Allgemeine u. vergleichende Sprachwissenschaft
DDC Classification: 400 - Language and Linguistics
Keywords: Linguistics
Other Keywords:
Experimental syntax
Acceptability ratings
Crowdsourcing
Non-cooperative behaviour
Gradience
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en

Abstract:

This paper discusses various "hot topics" concerning methodological issues in experimental syntax, with a focus on acceptability judgement tasks. We first review the literature on the question of whether formal methods are necessary at all and argue that they are. We then address questions concerning the running of experiments, focusing on experiments conducted via the internet and on dealing with non-cooperative behaviour. We review strategies for fending off and detecting non-cooperative behaviour. Strategies based on response times can do so effectively, even while the experiment is still running. We show how quickly clicking through an experiment can be prevented by issuing a warning whenever a response time falls below a predefined threshold. Because participants sometimes counterbalance extremely short response times by pausing, median rather than mean response times should be used for excluding participants after the experiment. In the final section, we present some thoughts on gradience and argue that recent findings make the case that the observed gradience is not merely a by-product but originates in the grammar itself and should be modelled as such.
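
The abstract describes two response-time-based safeguards: an in-experiment warning when a response time falls below a predefined threshold, and post-experiment exclusion based on median rather than mean response times. The following is a minimal sketch of that logic; the threshold values, function names, and sample data are illustrative assumptions, not taken from the paper.

```python
import statistics

# Assumed thresholds for illustration; the paper refers to a predefined
# threshold but does not specify concrete values in the abstract.
WARNING_THRESHOLD_MS = 1000      # per-trial minimum response time
EXCLUSION_THRESHOLD_MS = 1500    # cutoff for post-hoc exclusion


def should_warn(response_time_ms: float) -> bool:
    """In-experiment check: warn the participant if this trial's
    response time falls below the predefined threshold."""
    return response_time_ms < WARNING_THRESHOLD_MS


def should_exclude(response_times_ms: list[float]) -> bool:
    """Post-experiment exclusion based on the MEDIAN response time.

    The median is used instead of the mean because participants who
    counterbalance very short response times with occasional pauses
    inflate the mean, while the median still reflects the speeding."""
    return statistics.median(response_times_ms) < EXCLUSION_THRESHOLD_MS


if __name__ == "__main__":
    # A participant who mostly clicks through quickly but pauses once:
    rts = [600, 550, 580, 620, 12000, 590, 610]
    print(statistics.mean(rts))    # ~2221 ms: the single pause masks the speeding
    print(statistics.median(rts))  # 600 ms: still reveals quick clicking
    print(should_exclude(rts))     # True under the assumed cutoff
```

The example illustrates the abstract's point: a single long pause pulls the mean well above any plausible cutoff, whereas the median remains diagnostic of quick clicking.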
