(1) Overview

Repository location

The data set is available on DANS at https://doi.org/10.17026/dans-zqk-zmqs.

Context

The data were collected in the context of thesis work done within a research project funded by NWO (Netherlands Organization for Scientific Research; Grant Number Vidi-276-89-007), entitled ‘A girl fallen from the sky’: Effects of sensori-motor and emotional simulation on aesthetic appreciation of literary narratives. In this context, simulation can be defined as “… the re-enactment of perceptual, motor, and introspective states acquired during experience with the world, body, and mind” (). The findings from this study are described in detail in Mak and Willems (). So far, results based on these data have been published in four publications (; ; ; ).

When comparing the corpus to existing reading time corpora, the Eyelit data set is most comparable to the GECO corpus (), which is also a set of eye-tracking data from participants reading longer stretches of naturalistic, contextualized text (existing literary texts). For the GECO corpus, however, participants read a full novel rather than literary short stories.

(2) Method

A full description of the methods used for collecting this data set can be found in Mak and Willems (). Here, we give a summary of the key methods used (i.e., eye-tracking and questionnaires).

Eye-tracking and questionnaire data

The study was approved by the local ethics committee (approval code 8976). Participants were paid €10 or received course credit for their participation. A total of 109 participants were recruited from the participant database of Radboud University. Data from seven participants had to be excluded from the data set because of poor data quality or poor performance on a comprehension check; data for these participants are not included in this data set. The resulting data set consists of eye-tracking data from 102 participants (81 female; mean age = 23 years), who each read three Dutch literary short stories written by acclaimed writers (see Table 1) while their eye movements were recorded. All stories were divided into 30 sections for stimulus presentation. This resulted in about 30-45 minutes of eye-tracking data per participant. Participants read the stories in counterbalanced order.

Table 1

Title, author, year of publication, word count, and language of the experimental stories.


TITLE | AUTHOR | YEAR OF PUBLICATION | WORD COUNT | LANGUAGE
Story A: De mensen die alles lieten bezorgen (The people that had everything delivered) | | 2014 | 2988 | Dutch (original)
Story B: De Chinese bruiloft (The Chinese wedding) | | 2012 | 2659 | Dutch (original)
Story C: Signalen en symbolen (Symbols and signs) | | 1948/2003 | 2143 | Dutch (translation)

The stimuli were presented using SR Research’s Experiment Builder software (SR Research, Ottawa, Canada), and eye movement data were collected using a monocular desktop-mounted EyeLink 1000 Plus system at a sampling frequency of 500 Hz. Head movements were minimized using a head stabilizer.

After reading, participants filled out questionnaires regarding their reading experiences, namely the Story World Absorption Scale () and a story appreciation questionnaire (). Additionally, more general reading-related personality characteristics were measured using the Fantasy and Perspective Taking subscales of the Interpersonal Reactivity Index (), print exposure was measured using the Author Recognition Test (; Dutch translation ), and reading habits in daily life were measured using a short self-report scale (). Finally, participants answered nine multiple-choice questions as a comprehension check.

The eye-tracking data set was pre-processed in SR Research’s Data Viewer software (SR Research, Ottawa, Canada). Where necessary and possible, fixations were horizontally aligned to the interest areas. If manual alignment was impossible, the affected sections (and the corresponding interest areas) were removed from that participant’s data set. This resulted in the removal of at least one, but no more than six, sections per story for 40 participants. If more than six of the 30 sections of a story needed to be removed, the entire story was removed from that participant’s data set; this was the case for four participants. Additionally, the answers to the comprehension check were examined to exclude participants who did not pay enough attention to one or more stories. Eight participants answered more than one question incorrectly for one of the three stories, resulting in the removal of that story from that participant’s data. In total, this resulted in 4.68% data loss.
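
For illustration, the following minimal Python sketch applies the two story-level exclusion rules described above to a hypothetical per-participant, per-story summary. The data frame and its column names are invented for the example and do not correspond to variables in the repository files.

```python
import pandas as pd

# Hypothetical per-participant, per-story summary; the column names are
# illustrative and do not correspond to variables in the repository files.
summary = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "story": ["A", "B", "C", "A", "B", "C"],
    "sections_removed": [2, 0, 7, 0, 1, 0],      # out of 30 sections per story
    "comprehension_errors": [0, 0, 0, 2, 0, 1],  # incorrect answers for that story
})

# Rule 1: drop a story if more than six of its 30 sections were removed.
# Rule 2: drop a story if more than one comprehension question was answered incorrectly.
keep = (summary["sections_removed"] <= 6) & (summary["comprehension_errors"] <= 1)
print(summary[keep])
```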

After pre-processing, SR Research’s Data Viewer was used to calculate a sample report, fixation report, interest area report, saccade report, and trial report. In the data repository, the fixation, interest area, saccade and trial reports contain the combined data of all participants. Because of the size of the sample reports, these are provided separately for each participant.

The sample report contains a row of data for each sample collected by the eye tracker. As the eye tracker collected data at 500 Hz, this is one row of data per 2 milliseconds. Examples of data that can be found in the sample report are the x and y coordinates of the gaze, the pupil diameter, and the acceleration of the eye movements for each 2-millisecond sample (see ).
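
As a minimal sketch, one participant’s sample report could be loaded and inspected as follows. The file name is a placeholder for the per-participant files in the repository, and the column names assume the standard Data Viewer conventions; both should be verified against the codebook and the Data Viewer manual.

```python
import pandas as pd

# Load one participant's sample report (tab-delimited); placeholder file name.
samples = pd.read_csv("sample_report_pp01.csv", sep="\t")

# At 500 Hz, consecutive timestamps should be roughly 2 ms apart.
print(samples["TIMESTAMP"].diff().dropna().value_counts().head())

# Gaze coordinates and pupil size per sample (assumed Data Viewer column names).
print(samples[["RIGHT_GAZE_X", "RIGHT_GAZE_Y", "RIGHT_PUPIL_SIZE"]].describe())
```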

The fixation report contains a row of data for each fixation measured by the eye tracker. Examples of data that can be found in the fixation report are start times, end times and durations of each fixation, as well as the interest areas (words) the fixations were on (see ; ).

The interest area report contains a row of data for each interest area in the experiment. Examples of data that can be found in the interest area report are fixation durations on a word, eye regressions made toward or from a word, and whether the word was skipped during initial reading (see ; ).
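
For illustration, a minimal Python sketch of deriving word-level reading measures from the combined interest area report. The file name is a placeholder, and the column names (e.g., IA_SKIP, IA_DWELL_TIME, RECORDING_SESSION_LABEL) are assumed standard Data Viewer names that should be checked against the codebook.

```python
import pandas as pd

# Combined interest area report (tab-delimited); placeholder file name.
ia = pd.read_csv("interest_area_report.csv", sep="\t")

# Proportion of words skipped during initial reading, per participant.
print(ia.groupby("RECORDING_SESSION_LABEL")["IA_SKIP"].mean().head())

# Mean total fixation (dwell) time per word, across participants.
print(ia.groupby("IA_LABEL")["IA_DWELL_TIME"].mean()
        .sort_values(ascending=False).head())
```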

The saccade report contains a row for each saccade (eye movement) measured by the eye tracker. Examples of data that can be found in the saccade report are start times, end times and durations of each saccade, as well as the interest areas the saccades started from or ended at (see ; ).

The trial report contains a row for each trial (i.e., screen or page) in the experiment. Each story was presented across 30 pages, resulting in 90 trials per participant. For some participants, some trials are missing because of poor quality of the eye-tracking data in those trials. Examples of data that can be found in the trial report are the number and average duration of the fixations in a trial, the overall duration of the trial, or the order in which the words in the trial were fixated (see ).
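
As a further sketch, the combined fixation and trial reports could be loaded and summarized per participant as follows. File names are placeholders (the README lists the actual names), and the column names assume standard Data Viewer conventions documented in the codebook.

```python
import pandas as pd

# Combined fixation and trial reports (tab-delimited); placeholder file names.
fixations = pd.read_csv("fixation_report.csv", sep="\t")
trials = pd.read_csv("trial_report.csv", sep="\t")

# Mean fixation duration per participant (assumed Data Viewer column names).
print(fixations.groupby("RECORDING_SESSION_LABEL")["CURRENT_FIX_DURATION"]
      .mean().head())

# Trials (pages) retained per participant: at most 90 (3 stories x 30 pages),
# fewer where pages or whole stories were removed during pre-processing.
print(trials.groupby("RECORDING_SESSION_LABEL").size().head())
```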

Some important variables in these reports are outlined in the codebook included in the repository. However, this is not an exhaustive list of all the variables in the data set. The full list can be found in the EyeLink Data Viewer User Manual, which can be downloaded from the SR Research Support Website at https://www.sr-support.com/.

Story annotation

All three stories were annotated at the word level for positional data (position of a word in the sentence/on the page/in the story), perplexity, surprisal value, word length, and frequency. Perplexity was calculated using a 3-gram model trained with the SRI Language Modelling Toolkit () on 1 million sentences from the NLCOW2012 corpus (), and surprisal values were derived from the perplexity scores (perplexity = 10^(-surprisal), i.e., surprisal = -log10(perplexity)). Both text frequency, the frequency with which a word occurred within a story, and lexical frequency, the logarithm of the frequency with which a word appeared in the SUBTLEX-NL database (), are included in the data set.
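
The conversion between the two measures follows directly from the relationship stated above, as in this minimal sketch:

```python
import math

# The relationship stated above: perplexity = 10 ** (-surprisal),
# which rearranges to surprisal = -log10(perplexity).
def surprisal_from_perplexity(perplexity: float) -> float:
    return -math.log10(perplexity)

def perplexity_from_surprisal(surprisal: float) -> float:
    return 10 ** (-surprisal)

# Round-trip check on an arbitrary value.
s = 4.2
assert abs(surprisal_from_perplexity(perplexity_from_surprisal(s)) - s) < 1e-12
```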

Additionally, scores are included that indicate whether the word was part of a motor, perceptual, or mental event description. These scores were obtained in a pre-test described in Mak and Willems (), where all words in all stories were rated by 30 participants on whether these words were part of a motor description, a perceptual description, or a mental event description. Motor descriptions are “concrete acts or actions performed by a person or object,” such as “They reached the bus-stop shelter” (Story C: Symbols & Signs). Perceptual descriptions are “things that are perceivable with the senses,” such as “A tiny unfledged bird” (Story C: Symbols & Signs). Mental event descriptions are “explicit descriptions of the thoughts, feelings and opinions of a character” and/or “reflection[s] by a character on his own or someone else’s thoughts, feelings or behaviour”, for example, “She thought of the recurrent waves of pain” (Story C: Symbols & Signs). This resulted in scores ranging from 0 to 30 per word, per type of description.

The file with the word characteristics also contains the words themselves, which makes it possible to calculate other word characteristics if these are of interest. Unique identifiers per word, which are also present in the eye-tracking data files, make it easy to combine the data sets at the word level.
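
For illustration, a minimal Python sketch of such a word-level merge. The file names and the identifier column name ("word_id") are placeholders; the actual names are documented in the codebook.

```python
import pandas as pd

# Placeholder file names and identifier column; see the codebook for the real ones.
word_chars = pd.read_csv("word_characteristics.csv", sep="\t")
interest_areas = pd.read_csv("interest_area_report.csv", sep="\t")

# Left join: every interest area (word) row gains that word's characteristics.
merged = interest_areas.merge(word_chars, on="word_id", how="left")
print(merged.head())
```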

(3) Data Set Description

Object name

The data set consists of 318 files, divided over four folders containing (1) raw eye-tracking data per participant; (2) combined pre-processed eye-tracking data; (3) self-report questionnaire data; and (4) word characteristics. More information on all 318 files in the data set can be found in the README.txt in the main folder of the repository, and in the Codebook.

Format names and versions

The data set consists of the first version of all files. All files are tab-delimited CSV files, except for the raw eye-tracking data (packaged Data Viewer sessions, recognizable by the extension “.dvz”, and raw EyeLink EDF files), the README (TXT file), the codebook (PDF file), and the full description of the methods (PDF file). A zipped version of Folder 1 enables easier downloading of the raw data.

Creation dates

The data were collected between 2017-01-12 and 2017-09-19.

Dataset creators

The data set was collected and created by Marloes Mak. Perplexity and surprisal values were calculated by Alessandro Lopopolo (Centre for Language Studies, Radboud University, Nijmegen, The Netherlands).

Language

The variable names are in English. The variable names of the eye-tracking data are the standard variable names used in SR Research’s Data Viewer (SR Research, Ottawa, Canada). As the stories were in Dutch, the interest area labels are Dutch words.

License

CC-BY 4.0

Repository name

DANS-EASY

Publication date

2021-03-13

(4) Reuse Potential

The data in this data set are of potential interest to researchers studying eye movements during (natural) reading. So far, the data have been used for studies of mental simulation during literary reading (), metaphor processing (), word skipping (), and individual differences in the sensitivity to word characteristics (). Other research questions that could be answered with this data set include (psycho)linguistic questions regarding how eye-tracking measures relate to each other, how eye-tracking measures relate to subjective reading experiences, or how word characteristics relate to eye-tracking measures. The data are also potentially of interest to literary scientists aiming to empirically assess the cognitive processes taking place during literary reading.

Apart from research, this data set could also be useful in education. The data set would be suitable for instruction on a large number of statistical analyses, including linear mixed effects regression, factor analyses, and Bayesian modelling.
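
For instance, a minimal teaching sketch of a linear mixed-effects regression in Python with statsmodels is given below. The variable names ("gaze_duration", "log_frequency", "participant") are placeholders for measures that would be derived from the interest area report and the word-characteristics file.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder file and variable names; derive these from the interest area
# report merged with the word-characteristics file.
data = pd.read_csv("merged_word_level_data.csv", sep="\t").dropna()

# Linear mixed-effects model: word-level gaze duration predicted by (log)
# lexical frequency, with random intercepts per participant.
model = smf.mixedlm("gaze_duration ~ log_frequency", data,
                    groups=data["participant"])
print(model.fit().summary())
```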

A limitation of the data set is the small number of stories read by the participants. A larger number of stories would make it easier to study story effects or to control for differences between stories in analyses. However, the data set makes up for this limitation with the large number of participants included (N = 102).