Although Twitter can be understood as a massive corpus of texts on which many distant reading methodologies are deployed, interest in it as a resource for Digital Humanities (DH) projects has not been widespread. Its most popular use to date has been related to the movement “Twitter for scholarly networking”,1 in which digital humanists analysed how the DH community grew and established its particular networks (Grandjean, 2016; Williams, Terras, & Warwick, 2013; Quan-Haase, Martin, & McCay-Peet, 2015). However, we can highlight relevant DH studies that relate Twitter data to Natural Language Processing (NLP) (Sinclair & Rockwell, 2016; Jockers & Underwood, 2016; Gelfgren, 2016) and the DHNow initiative, which has been offering open tools and resources for working with Twitter.2 Moreover, we note other studies closely related to DH that performed qualitative and quantitative analysis on Twitter (Chew & Eysenbach, 2010; Fu, Liang, Saroha, Tse, & Fung, 2016).
Digital Narratives of COVID-19 (DHCovid)3 is a digital humanities project funded by the University of Miami (FL) and developed in collaboration with the National Scientific and Technical Research Council (CONICET, Argentina) that investigates the sociolinguistic and geographical trends and topics in Twitter conversations surrounding the COVID-19 pandemic. Since April 2020, this project has been collecting COVID-19 related tweets in Spanish as well as tweets in English and Spanish in specific geographic locations: Argentina, Colombia, Ecuador, Mexico, Peru, Spain, and South Florida.
To assemble the Twitter corpus, a script written in PHP mines the data streamed through Twitter’s Application Programming Interface (API) and recovers a series of specific tweet identifiers (IDs). Our data mining sampling strategy rests on four main variables: language, keywords, region, and date.
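As a minimal illustration of these four variables (in Python rather than the project’s PHP, and with hypothetical field names), the sampling strategy can be sketched as a filter configuration whose language and keyword checks are applied to each incoming tweet; region and date drive the geographic query and file organization described below.

```python
# Hypothetical sketch of the four sampling variables. The project's
# actual harvester is a PHP script; names and structure here are
# illustrative only.
SAMPLING = {
    "language": "es",
    "keywords": ["covid", "coronavirus", "pandemia", "cuarentena"],
    # Region as a circle: (latitude, longitude, radius in miles).
    "region": (21.295658, -100.291341, 450),  # "Mexico"
    "date": "2020-04-24",
}

def matches(tweet, sampling):
    """Client-side language and keyword check for one tweet (a dict
    with 'lang' and 'text' keys); region and date are handled by the
    geographic query and the daily file organization, respectively."""
    text = tweet["text"].lower()
    return (tweet["lang"] == sampling["language"]
            and any(kw in text for kw in sampling["keywords"]))
```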
The corpus is available through three repositories:
- GitHub. Tweet IDs are stored in a MySQL relational database where they are “hydrated,” that is, all metadata associated with the tweets is recovered, including their body text. Then, an additional script organizes the tweet IDs in the database by day, language, and region, and creates a plaintext file for each combination with a list of corresponding tweet IDs. The script generates these files daily and organizes them into folders, where each directory represents one day. These are uploaded directly to our public GitHub repository (Table 1).4 The data collection began on April 24th, 2020, and new tweets are uploaded automatically every day until May 2021.
- Project website endpoint. A free-access endpoint5 for querying and downloading “hydrated” tweets is available from the DHCovid website. An additional script queries the database and recovers the body text of the tweets (see Quality Control section). Access to a tidied and structured Twitter corpus for on-demand querying is one of our project’s most meaningful contributions to data reuse and text mining activities.
- Zenodo. A first stable version of the dataset, published on May 13, 2020, was released through Zenodo as a compressed ZIP file containing the folders of daily tweet IDs collected between April 24th and May 12th, 2020. A second and final version, with the complete collection of tweet IDs, will be uploaded at the end of the project in May 2021.
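The per-day organization of tweet ID files described for the GitHub repository can be sketched as follows. This is an illustrative Python reconstruction (the project’s scripts are written in PHP), with hypothetical input fields and a simplified file-naming scheme:

```python
from collections import defaultdict
from pathlib import Path

def write_id_files(tweets, out_dir):
    """Group tweet IDs by (date, language, region) and write one
    plaintext ID list per combination, inside a folder for each day.

    `tweets` is an iterable of dicts with 'id', 'date' (YYYY-MM-DD),
    'lang', and an optional 'region' code (absent = all tweets in
    that language, as in the dhcovid_..._es.txt files).
    """
    groups = defaultdict(list)
    for t in tweets:
        groups[(t["date"], t["lang"], t.get("region"))].append(str(t["id"]))
    written = []
    for (date, lang, region), ids in groups.items():
        day_dir = Path(out_dir) / date          # one directory per day
        day_dir.mkdir(parents=True, exist_ok=True)
        suffix = f"{lang}_{region}" if region else lang
        path = day_dir / f"dhcovid_{date}_{suffix}.txt"
        path.write_text("\n".join(ids) + "\n", encoding="utf-8")
        written.append(path)
    return written
```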
**Table 1.** Tweet ID file naming scheme by language and region.

| File name | Contents |
| --- | --- |
| dhcovid_YEAR_MONTH_en_fl.txt | Tweets in English in Florida |
| dhcovid_YEAR_MONTH_es.txt | All Spanish tweets |
| dhcovid_YEAR_MONTH_es_ar.txt | Tweets in Spanish in Argentina |
| dhcovid_YEAR_MONTH_es_co.txt | Tweets in Spanish in Colombia |
| dhcovid_YEAR_MONTH_es_ec.txt | Tweets in Spanish in Ecuador |
| dhcovid_YEAR_MONTH_es_es.txt | Tweets in Spanish in Spain |
| dhcovid_YEAR_MONTH_es_fl.txt | Tweets in Spanish in Florida |
| dhcovid_YEAR_MONTH_es_mx.txt | Tweets in Spanish in Mexico |
| dhcovid_YEAR_MONTH_es_pe.txt | Tweets in Spanish in Peru |
The recovery of tweets by language and keywords is straightforward: we query only tweets written in Spanish and English that contain at least one word from our user-defined keyword lists. We delimited two lists of keywords, in English and in Spanish, related to the COVID-19 pandemic; consequently, only tweets containing one of these words and/or hashtags are selected. The English keywords apply only to the Miami area and include terms such as “covid,” “coronavirus,” “pandemic,” “quarantine,” “#stayathome,” “outbreak,” “lockdown,” and “#socialdistancing.” The Spanish keywords include “covid,” “coronavirus,” “pandemia,” “cuarentena,” “confinamiento,” “#quedateencasa,” “desescalada,” and “#distanciamientosocial” (Allés-Torrent, 2020).
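A minimal sketch of this keyword selection (in Python; the lists are taken from the paragraph above, but matching by simple lowercase substring is an assumption about the harvester’s actual behaviour):

```python
# Keyword lists as given in the text; substring matching on the
# lowercased tweet body is a simplifying assumption.
EN_KEYWORDS = ["covid", "coronavirus", "pandemic", "quarantine",
               "#stayathome", "outbreak", "lockdown", "#socialdistancing"]
ES_KEYWORDS = ["covid", "coronavirus", "pandemia", "cuarentena",
               "confinamiento", "#quedateencasa", "desescalada",
               "#distanciamientosocial"]

def selected(text, lang):
    """Return True if the tweet text contains at least one keyword
    or hashtag for the given language ('en' applies to Miami only)."""
    keywords = EN_KEYWORDS if lang == "en" else ES_KEYWORDS
    text = text.lower()
    return any(kw in text for kw in keywords)
```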
Our strategy is also shaped by Twitter API policies. First, in its free version, the API does not allow querying tweets older than seven days. Second, although Twitter allows users to publish georeferenced tweets (with an exact location), retrieving geotagged tweets is complicated by the absence of a facility for querying by geographic region. A pragmatic approach led us to define each “country” as a circle surrounding the area of interest; e.g., “Mexico” is defined as latitude 21.295658, longitude -100.291341, and a radius of 450 miles. Of course, our selection criteria will not always follow political and national borders, so an area-specific corpus can sometimes contain tweets from a neighbouring country; e.g., the query for Argentina also captures parts of Uruguay, and the query for Colombia parts of Ecuador.
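This circle-based region definition can be sketched with a great-circle distance check (Python; the use of the haversine formula and the helper name are our illustration, not necessarily the project’s implementation):

```python
import math

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def within_region(lat, lon, center_lat, center_lon, radius_miles):
    """Haversine test: is the point (lat, lon) inside the circle
    of the given radius around (center_lat, center_lon)?"""
    phi1, phi2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlam = math.radians(center_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    distance = 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))
    return distance <= radius_miles

# The "Mexico" circle from the text: centre point and 450-mile radius.
MEXICO = (21.295658, -100.291341, 450)
```

For example, Mexico City falls inside this circle, while a point in Spain does not.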
The collected tweet IDs undergo an additional data tidying process during “hydration,” before any body text is returned to the user. We apply a set of rules to the tweet body text: all words are lowercased; accents and punctuation are removed; mentions of users (@users) are removed to protect privacy; and all links are replaced with a generic “URL” term. While enforcing UTF-8 encoding throughout, a challenge unique to the Spanish corpus is accents and graphemes, such as the “ñ,” that can be difficult to process and preserve. Most of those cases were resolved by a script that detects special entity codes and replaces them with the correct character (e.g., “&ntilde;” as “ñ”). We have also transliterated emojis into their corresponding UTF-8 code points and, for now, excluded them from our experiments. This processing facilitates the application of NLP techniques.
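A hedged Python sketch of these tidying rules follows. The project’s pipeline is not published in this form, so the exact rule order, the substitution patterns, and the preservation of “ñ” while stripping other accents are our assumptions:

```python
import html
import re
import unicodedata

def tidy(text):
    """Apply the tidying rules described above to one tweet body."""
    text = html.unescape(text)                   # e.g. "&ntilde;" -> "ñ"
    text = text.lower()                          # lowercase all words
    text = re.sub(r"@\w+", "", text)             # drop @user mentions
    text = re.sub(r"https?://\S+", "URL", text)  # generalise links
    # Strip accents but keep "ñ" (protected via a placeholder byte).
    text = text.replace("ñ", "\x00")
    text = "".join(c for c in unicodedata.normalize("NFD", text)
                   if unicodedata.category(c) != "Mn")
    text = text.replace("\x00", "ñ")
    # Remove punctuation (keeping "#" for hashtags), then drop any
    # remaining non-ASCII symbols such as emojis.
    text = re.sub(r"[^\w\s#]", "", text)
    text = "".join(c for c in text if c.isascii() or c == "ñ")
    return re.sub(r"\s+", " ", text).strip()
```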
(3) Dataset Description
Format names and versions
Plain text (.txt)

Creation dates
2020-04-24 to 2021-05-31

Dataset creators
Susanna Allés-Torrent: Conceptualization; Funding Acquisition; Project administration; Supervision; Writing
Gimena del Rio Riande: Conceptualization; Writing
Jerry Bonnell: Data curation; Software; Visualization
Dieyun Song: Data curation; Writing
Nidia Hernández: Data curation; Visualization; Writing

Language
Spanish, English (See Table 2).

License
Creative Commons Attribution 4.0 International (CC BY 4.0).

Repository name
GitHub for the continuously updated daily version; Zenodo for the stable DOI.

Publication date
First published to the GitHub repository on 2020-04-24; released to Zenodo on 2020-05-13.
(4) Reuse Potential
The COVID-19 pandemic has motivated a plethora of ambitious digital research focusing on Twitter, including the role and importance of automated Twitter accounts, also known as bots (Ferrara, 2020), the increase of politically radical discourse in social media (Jiang, Chen, Yan, Lerman, & Ferrara, 2020), and general public perception (Abdo, Alghonaim, & Essam, 2020). Most of these efforts, however, mine and analyze tweets written in English (Banda et al., 2021; Kerchner & Wrubel, 2020; Lamsal, 2020). DHCovid helps bridge this gap by bringing Spanish-language Twitter narratives into the conversation.
We use a CC BY license so that the public can fully reuse our dataset, whether as plain text or by applying combined technical and humanistic processing techniques such as statistics, NLP, topic modelling, and sentiment analysis. Our datasets can serve as teaching, research, and activism resources across language, disciplinary, and professional boundaries. Scholars interested in the human experience of the pandemic, health professionals, policy makers, funding agencies, journalists, and active citizens are all welcome to engage with our project.