/covid19_twitter

COVID-19 Twitter dataset for non-commercial research use and pre-processing scripts - under active development

Primary language: Python

Latest Updates:

8/2/20 - For version 21 of the dataset we have refactored the full_dataset.tsv and full_dataset_clean.tsv files (since version 20) to include two additional columns: language and place country code (when available). This change adds the language and country code for ALL the tweets in the dataset, not only the clean tweets. With this change we have removed the clean_place_country.tar.gz and clean_languages.tar.gz files. While refactoring the dataset-generation code we also found a small bug that caused some retweets not to be counted properly, hence the extra increase in available tweets. Dailies have been added for 8/1, 7/31, and 7/30.

7/30/20 - Daily data (under the /dailies/ folder) has been added for 7/29 and 7/28; note that some tweets bleed into the following day because they are captured across different time zones.

7/28/20 - Daily data (under the /dailies/ folder) has been added for 7/27 and 7/26; note that some tweets bleed into the following day because they are captured across different time zones.

7/26/20 - Celebrating version 20 of the dataset, we have refactored the full_dataset.tsv and full_dataset_clean.tsv files to include two additional columns: language and place country code (when available). This change adds the language and country code for ALL the tweets in the dataset, not only the clean tweets. With this change we have removed the clean_place_country.tar.gz and clean_languages.tar.gz files. While refactoring the dataset-generation code we also found a small bug that caused some retweets not to be counted properly, hence the extra increase in available tweets.

7/23/20 - Daily data (under the /dailies/ folder) has been added for 7/22 and 7/21; note that some tweets bleed into the following day because they are captured across different time zones.

7/21/20 - Daily data (under the /dailies/ folder) has been added for 7/20 and 7/19; note that some tweets bleed into the following day because they are captured across different time zones.

7/19/20 - Version 19 of the dataset has been released. It can be found at https://doi.org/10.5281/zenodo.3723939. It incorporates version 18.0 of the dataset and all the dailies up to 7/18. Dailies have been added for 7/18, 7/17, and 7/16 in the dailies folder. We reached 513 million tweets in this version of the dataset. NEW in Version 19: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix.

7/16/20 - Daily data (under the /dailies/ folder) has been added for 7/15 and 7/14; note that some tweets bleed into the following day because they are captured across different time zones.

7/14/20 - Daily data (under the /dailies/ folder) has been added for 7/13 and 7/12; note that some tweets bleed into the following day because they are captured across different time zones.

7/12/20 - Version 18 of the dataset has been released. It can be found at https://doi.org/10.5281/zenodo.3723939. It incorporates version 17.0 of the dataset and all the dailies up to 7/11. Dailies have been added for 7/11, 7/10, and 7/09 in the dailies folder. We reached 490 million tweets in this version of the dataset. NEW in Version 18: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix.

7/09/20 - Daily data (under the /dailies/ folder) has been added for 7/8 and 7/7; note that some tweets bleed into the following day because they are captured across different time zones.

7/07/20 - Daily data (under the /dailies/ folder) has been added for 7/6 and 7/5; note that some tweets bleed into the following day because they are captured across different time zones.

7/05/20 - Version 17 of the dataset has been released. It can be found at https://doi.org/10.5281/zenodo.3723939. It incorporates version 16.0 of the dataset and all the dailies up to 7/4. Dailies have been added for 7/4, 7/3, and 7/2 in the dailies folder. We reached 468 million tweets in this version of the dataset. NEW in Version 17: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix.

7/02/20 - Daily data (under the /dailies/ folder) has been added for 7/01 and 6/30; note that some tweets bleed into the following day because they are captured across different time zones.

6/30/20 - Daily data (under the /dailies/ folder) has been added for 6/29 and 6/28; note that some tweets bleed into the following day because they are captured across different time zones.

COVID-19 Twitter chatter dataset for scientific use

Due to the relevance of the COVID-19 global pandemic, we are releasing our dataset of tweets acquired from the Twitter stream that relate to COVID-19 chatter. The first 9 weeks of data (from January 1st, 2020 to March 11th, 2020) contain very low tweet counts, as we were filtering other data we collected for other research purposes; however, one can see the dramatic increase as awareness of the virus spread. Dedicated data gathering started on March 11th, yielding over 4 million tweets a day.

The data collected from the stream covers all languages, but the most prevalent are English, Spanish, and French. We release all tweets and retweets in the full dataset, and a cleaned version with no retweets. There are several practical reasons to keep the retweets; tracing important tweets and their dissemination is one of them. For NLP tasks we provide the top 1,000 most frequent terms, the top 1,000 bigrams, and the top 1,000 trigrams. Some general statistics per day are included for both datasets.
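
As a toy illustration of how n-gram tables like these can be built, the sketch below counts unigrams, bigrams, and trigrams over a plain-text file of tweet texts and keeps the 1,000 most frequent of each. This is not the lab's actual processing code, and the tweets.txt input file (one tweet's text per line) is a hypothetical stand-in for hydrated tweet text.

from collections import Counter

def ngrams(tokens, n):
    # Yield the n-grams of a token list as space-joined strings.
    return (" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

counts = {1: Counter(), 2: Counter(), 3: Counter()}
# "tweets.txt" is a hypothetical file with one tweet's text per line.
with open("tweets.txt") as f:
    for line in f:
        tokens = line.lower().split()
        for n in counts:
            counts[n].update(ngrams(tokens, n))

# The released files keep the 1,000 most frequent entries of each table.
for n, counter in counts.items():
    top_1000 = counter.most_common(1000)
    print(n, top_1000[:3])  # preview the head of each table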

We will continue to update the dataset every two days here and weekly on Zenodo.

For more information on processing and visualizations, please visit: www.panacealab.org/covid19

Usage

All tweet IDs found in full_dataset.tsv and full_dataset_clean.tsv need to be hydrated using a tool such as get_metadata.py from the Social Media Mining Toolkit (SMMT) released by our lab, or Twarc.
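
For instance, here is a minimal hydration sketch using the twarc library (v1); the credential placeholders are your own Twitter API keys, and the assumption that the tweet ID sits in the first column of the TSV (after a header row) should be checked against the file itself.

import csv
import json

from twarc import Twarc  # pip install twarc

# Your Twitter API credentials (placeholders).
t = Twarc(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

def tweet_ids(path):
    # Yield the tweet ID from the first column of each data row,
    # skipping the header row (the column layout is an assumption).
    with open(path) as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)
        for row in reader:
            yield row[0]

# hydrate() batches the IDs through Twitter's lookup endpoint and
# yields the full tweet JSON for every tweet that is still available.
with open("hydrated_tweets.jsonl", "w") as out:
    for tweet in t.hydrate(tweet_ids("full_dataset_clean.tsv")):
        out.write(json.dumps(tweet) + "\n")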

Note: all the code in the /processing_code folder is provided as-is; it was used to generate the provided files from the source tweet JSON files. Documentation for these scripts will be added gradually.

Maintained by:

Panacea Lab - Georgia State University - Juan M. Banda, Ramya Tekumalla, and Gerardo Chowell-Puente. Additional data provided by: Guanyu Wang (Missouri School of Journalism, University of Missouri), Jingyuan Yu (Department of Social Psychology, Universitat Autònoma de Barcelona), Tuo Liu (Department of Psychology, Carl von Ossietzky Universität Oldenburg), Yuning Ding (Language Technology Lab, Universität Duisburg-Essen), Katya Artemova (NRU HSE), and Elena Tutubalina (KFU)

Version 21.0 release notes

For version 21 of the dataset we have refactored the full_dataset.tsv and full_dataset_clean.tsv files (since version 20) to include two additional columns: language and place country code (when available). This change adds the language and country code for ALL the tweets in the dataset, not only the clean tweets. With this change we have removed the clean_place_country.tar.gz and clean_languages.tar.gz files. While refactoring the dataset-generation code we also found a small bug that caused some retweets not to be counted properly, hence the extra increase in available tweets.
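
As an example of how the two new columns can be used, the sketch below filters the refactored file with pandas; the column names used here (tweet_id, lang, country_code) are assumptions, so check them against the actual file header.

import pandas as pd

# Read the IDs as strings: tweet IDs overflow when parsed as numbers.
df = pd.read_csv("full_dataset.tsv", sep="\t", dtype={"tweet_id": str})

# The place country code is only present "when available", so rows
# without one simply fail the equality comparison and are dropped.
subset = df[(df["lang"] == "en") & (df["country_code"] == "US")]

# Write the matching IDs out for hydration (see the Usage section above).
subset["tweet_id"].to_csv("english_us_ids.txt", index=False, header=False)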

Version 20.0 release notes

Celebrating version 20 of the dataset, we have refactored the full_dataset.tsv and full_dataset_clean.tsv files to include two additional columns: language and place country code (when available). This change adds the language and country code for ALL the tweets in the dataset, not only the clean tweets. With this change we have removed the clean_place_country.tar.gz and clean_languages.tar.gz files. While refactoring the dataset-generation code we also found a small bug that caused some retweets not to be counted properly, hence the extra increase in available tweets.

Version 19.0 release notes

NEW in Version 19: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix.
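
For releases that still ship this archive, something like the tarfile sketch below pulls out a single country's identifiers; the member-name pattern assumed here (an underscore before the two-character suffix, plus a .tsv extension) should be adjusted to the actual naming inside the archive.

import tarfile

COUNTRY = "US"  # two-character ISO country code used as the file suffix

with tarfile.open("clean_place_country.tar.gz", "r:gz") as tar:
    for member in tar.getmembers():
        stem = member.name.rsplit(".", 1)[0]  # drop the extension
        # Assumes members are named like something_US.tsv.
        if stem.endswith("_" + COUNTRY):
            tar.extract(member)
            print("extracted", member.name)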

Version 18.0 release notes

NEW in Version 18: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix.

Version 17.0 release notes

NEW in Version 17: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix.

Version 16.0 release notes

NEW in Version 16: besides our regular update, we now include the tweet identifiers and their respective tweet location place country code for the clean version of the dataset. These are found in the clean_place_country.tar.gz file; each file is identified by the two-character ISO country code used as its suffix. Version 16.0 of the dataset has been released. It can be found at https://doi.org/10.5281/zenodo.3723939. It incorporates version 15.0 of the dataset and all the dailies up to 6/27. Dailies have been added for 6/27, 6/26, and 6/25 in the dailies folder. We reached 446 million tweets in this version of the dataset.

Version 15.0 release notes

NEW in Version 15: besides our regular update, we now include the tweet identifiers and their respective language for the clean version of the dataset. These are found in the clean_languages.tar.gz file; each file is identified by the two-character language code used as its suffix. Version 15.0 of the dataset has been released. It can be found at https://doi.org/10.5281/zenodo.3723939. It incorporates version 14.0 of the dataset and all the dailies up to 6/20. Dailies have been added for 6/20, 6/19, and 6/18 in the dailies folder. We reached 424 million tweets in this version of the dataset.

How to cite this dataset:

Version 20.0

@dataset{banda_juan_m_2020_3757272,
  author       = {Banda, Juan M. and
                  Tekumalla, Ramya and
                  Wang, Guanyu and
                  Yu, Jingyuan and
                  Liu, Tuo and
                  Ding, Yuning and
                  Artemova, Katya and
                  Tutubalina, Elena and
                  Chowell, Gerardo},
  title        = {{A large-scale COVID-19 Twitter chatter dataset for 
                   open scientific research - an international
                   collaboration}},
  month        = may,
  year         = 2020,
  note         = {{This dataset will be updated bi-weekly at least 
                   with additional tweets, look at the github repo
                   for these updates. Release: We have standardized
                   the name of the resource to match our pre-print
                   manuscript and to not have to update it every
                   week.}},
  publisher    = {Zenodo},
  version      = {20.0},
  doi          = {10.5281/zenodo.3723939},
  url          = {https://doi.org/10.5281/zenodo.3723939}
}