rdhs

API Client and Data Munging for the Demographic and Health Survey Data

Project Status: WIP - Initial development is in progress, but there has not yet been a stable, usable release suitable for the public.

Motivation

The Demographic and Health Surveys (DHS) Program has collected population survey data from over 90 countries for over 30 years. In many countries, DHS provide the key data that mark progress towards targets such as the Sustainable Development Goals (SDGs) and inform health policy. Though standard health indicators are routinely published in survey final reports, much of the value of DHS is derived from the ability to download and analyse standardized microdata datasets for subgroup analysis, pooled multi-country analysis, and extended research studies. The suite of tools within rdhs improves the accessibility of these datasets for statistical analysis with R, with the aim of supporting reproducible global health research and simplifying common analytical pipelines.


rdhs is a package for management and analysis of Demographic and Health Survey (DHS) data. This includes functionality to:

  1. Access standard indicator data (i.e. DHS STATcompiler) in R via the DHS API.
  2. Identify surveys and datasets relevant to a particular analysis.
  3. Download survey datasets from the DHS website.
  4. Load datasets and associated metadata into R.
  5. Extract variables and combine datasets for pooled multi-survey analyses.

Installation

Install rdhs from GitHub with devtools. If you do not have a working development environment, please see the toolkit vignette.

# install.packages("devtools")
# devtools::install_github("OJWatson/rdhs")
library(rdhs)

Getting started

  • Full functionality is described in the tutorial here.

  • An example workflow using rdhs to calculate trends in anemia prevalence is available here.

  • To run the full test suite (including tests for downloading datasets) see tutorial here.

Basic Functionality

Query the DHS API.

Obtain survey estimates for malaria prevalence among children in the Democratic Republic of the Congo and Tanzania from surveys in the last 5 years (since 2013) that included rapid diagnostic tests (RDTs).

dhs_indicators(indicatorIds = "ML_PMAL_C_RDT", returnFields=c("IndicatorId", "ShortName"))
#>                             ShortName   IndicatorId
#> 1 Malaria prevalence according to RDT ML_PMAL_C_RDT

dhs_data(countryIds = c("CD","TZ"), indicatorIds = "ML_PMAL_C_RDT", surveyYearStart = 2013,
       returnFields=c("Indicator", "SurveyId", "Value", "SurveyYearLabel", "CountryName"))
#>                             Indicator  SurveyId SurveyYearLabel Value
#> 1 Malaria prevalence according to RDT CD2013DHS         2013-14  30.8
#> 2 Malaria prevalence according to RDT TZ2015DHS         2015-16  14.4
#>                 CountryName
#> 1 Congo Democratic Republic
#> 2                  Tanzania

Identify survey datasets

Now, obtain survey microdatasets to analyse these same indicators. Query the surveyCharacteristics endpoint to identify the survey characteristic ID for malaria RDT testing.

## call with no arguments to return all characteristics
sc <- dhs_survey_characteristics()
sc[grepl("Malaria", sc$SurveyCharacteristicName), ]
#>    SurveyCharacteristicID SurveyCharacteristicName
#> 57                     96            Malaria - DBS
#> 58                     90     Malaria - Microscopy
#> 59                     89            Malaria - RDT
#> 60                     57          Malaria module 
#> 61                      8 Malaria/bednet questions

Use dhs_surveys() to identify surveys for the countries and years of interest.

## what are the countryIds - we can find that using this API request
ids <- dhs_countries(returnFields=c("CountryName", "DHS_CountryCode"))

## find all the surveys that match the search criteria
survs <- dhs_surveys(surveyCharacteristicIds = 89, countryIds = c("CD","TZ"), surveyYearStart = 2013)

Lastly, identify the datasets required for download. The recommended options are either the SPSS (.sav) datasets, fileFormat = "SV", or the flat file (.dat) datasets, fileFormat = "FL". The flat files are quicker to read, but there are still one or two very old datasets that don't read correctly, whereas the .sav files are slower to read in but so far no datasets have been found that don't read in correctly. The household member recode (PR) file reports the RDT status for children under five.

datasets <- dhs_datasets(surveyIds = survs$SurveyId, fileFormat = "FL", fileType = "PR")
str(datasets)
#> 'data.frame':    2 obs. of  13 variables:
#>  $ FileFormat          : chr  "Flat ASCII data (.dat)" "Flat ASCII data (.dat)"
#>  $ FileSize            : int  6595349 6622108
#>  $ DatasetType         : chr  "Survey Datasets" "Survey Datasets"
#>  $ SurveyNum           : int  421 485
#>  $ SurveyId            : chr  "CD2013DHS" "TZ2015DHS"
#>  $ FileType            : chr  "Household Member Recode" "Household Member Recode"
#>  $ FileDateLastModified: chr  "September, 19 2016 09:58:23" "December, 09 2016 14:10:59"
#>  $ SurveyYearLabel     : chr  "2013-14" "2015-16"
#>  $ SurveyType          : chr  "DHS" "DHS"
#>  $ SurveyYear          : int  2013 2015
#>  $ DHS_CountryCode     : chr  "CD" "TZ"
#>  $ FileName            : chr  "CDPR61FL.ZIP" "TZPR7HFL.ZIP"
#>  $ CountryName         : chr  "Congo Democratic Republic" "Tanzania"

Download datasets

To download datasets we first need to log in to our DHS account by providing our credentials and setting up our configuration with set_rdhs_config(). This requires your email and the project from which you want to download datasets as arguments; you will then be prompted for your password. You can also specify a directory in which datasets and API calls are cached using cache_path. In order to comply with CRAN policy, this function will also ask for your permission to write files outside your temporary directory, and you must type out the filename for config_path - "rdhs.json". (See the introduction vignette for the specific config format, or ?set_rdhs_config.)

## login
set_rdhs_config(email = "rdhs.tester@gmail.com",
                project = "Testing Malaria Investigations",
                config_path = "rdhs.json",
                global=FALSE)
#> Please enter password in TK window (Alt+Tab)
#> Writing your configuration to:
#>    -> rdhs.json
#> If you are using git, be sure to add this to your .gitignore

The path to your config is saved between sessions so you only have to set this once. With your credentials set, all API requests will be cached within the cache_path directory provided so that these can be returned when working remotely or with a poor internet connection.

# the first time this will take a few seconds 
microbenchmark::microbenchmark(dhs_datasets(surveyYearStart = 1973), times = 1)
#> Unit: seconds
#>                                  expr     min      lq    mean  median
#>  dhs_datasets(surveyYearStart = 1973) 4.87256 4.87256 4.87256 4.87256
#>       uq     max neval
#>  4.87256 4.87256     1

# after caching, results will be available instantly
microbenchmark::microbenchmark(dhs_datasets(surveyYearStart = 1973), times = 1)
#> Unit: milliseconds
#>                                  expr      min       lq     mean   median
#>  dhs_datasets(surveyYearStart = 1973) 3.369217 3.369217 3.369217 3.369217
#>        uq      max neval
#>  3.369217 3.369217     1

Now download datasets by providing a list of desired dataset filenames.

# download datasets
downloads <- get_datasets(datasets$FileName)

str(downloads)
#> List of 2
#>  $ CDPR61FL: chr "C:\\Users\\Oliver\\AppData\\Local\\Oliver\\rdhs\\Cache/datasets/CDPR61FL.rds"
#>  $ TZPR7HFL: chr "C:\\Users\\Oliver\\AppData\\Local\\Oliver\\rdhs\\Cache/datasets/TZPR7HFL.rds"
#>  - attr(*, "reformat")= logi FALSE

Load datasets into R

The get_datasets() function returns a list of file paths to the saved locations of the downloaded datasets. These are read using readRDS():

# read in first dataset
cdpr <- readRDS(downloads$CDPR61FL)

Value labels are stored as attributes to each of the columns of the data frame using the labelled class (see haven::labelled or our introduction vignette for more details). Variable labels are stored in the label attribute.
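As a minimal sketch of this storage convention (using a hand-built column rather than a real DHS variable, and showing only the base R attributes rather than haven's print methods):

```r
# a hand-made column mimicking how a DHS variable such as hml35 is stored:
# the variable label lives in the "label" attribute, the value labels in
# a named vector in the "labels" attribute
x <- structure(
  c(0, 1, 1, 0),
  label  = "Result of malaria rapid test",  # variable label
  labels = c("Neg" = 0, "Pos" = 1)          # value labels
)

attr(x, "label")   # the variable label
attr(x, "labels")  # the named vector of value labels
```

In a downloaded dataset the same attributes can be inspected on any column, e.g. attr(cdpr$hv024, "label").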

Extract variables and pool datasets

The client also caches all variable labels to quickly query variables in each survey without loading the datasets.

# rapid diagnostic test search
vars <- search_variable_labels(datasets$FileName, search_terms = "malaria rapid test")

Then extract these variables from the datasets. Optionally, geographic data may be added.

# and now extract the data
extract <- extract_dhs(vars, add_geo = FALSE)
#> Starting Survey 1 out of 2 surveys:CDPR61FL
#> Starting Survey 2 out of 2 surveys:TZPR7HFL

The returned object is a list of extracted datasets.
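For example (a sketch continuing the pipeline above; it assumes the list elements are named after the dataset filenames, as the progress messages suggest):

```r
# each element of extract is a data.frame of the requested variables
# for one survey
names(extract)        # e.g. "CDPR61FL" and "TZPR7HFL"
lapply(extract, dim)  # rows and columns per survey
```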

Dataset extracts can alternatively be specified by providing a vector of surveys and a vector of variable names:

# and grab the questions from this now utilising the survey variables
vars <- search_variables(datasets$FileName, variables = c("hv024","hml35"))

# and now extract the data
extract <- extract_dhs(vars, add_geo = FALSE)
#> Starting Survey 1 out of 2 surveys:CDPR61FL
#> Starting Survey 2 out of 2 surveys:TZPR7HFL

Finally, the two datasets are pooled using the function rbind_labelled(). This function works specifically with our lists of labelled data.frames. Label handling is specified per variable: for hv024 all labels are retained ("concatenate"), while for hml35 the labels across both datasets are set to "Neg" and "Pos".

# now let's try our second extraction
extract <- rbind_labelled(extract,
                          labels = list("hv024" = "concatenate",
                                        "hml35" = c("Neg"=0, "Pos"=1)))

There is also an option to process downloaded datasets so that labelled variables are converted to character strings of their labels, rather than kept as labelled variables. This is specified with the argument reformat = TRUE.

# identify questions but specifying the reformat argument
questions <- search_variables(datasets$FileName, variables = c("hv024", "hml35"),
                                     reformat=TRUE)

# and now extract the data
extract <- extract_dhs(questions, add_geo = FALSE)
#> Starting Survey 1 out of 2 surveys:CDPR61FL
#> Starting Survey 2 out of 2 surveys:TZPR7HFL

# group our results
extract <- rbind_labelled(extract)

# our hv024 variable is now just character strings, so you can decide when/how to factor/label it later
str(extract)
#> Classes 'dhs_dataset' and 'data.frame':  160829 obs. of  4 variables:
#>  $ hv024   : chr  "equateur" "equateur" "equateur" "equateur" ...
#>   ..- attr(*, "label")= chr "Province"
#>  $ hml35   : chr  NA NA NA NA ...
#>   ..- attr(*, "label")= chr "Result of malaria rapid test"
#>  $ SurveyId: chr  "CD2013DHS" "CD2013DHS" "CD2013DHS" "CD2013DHS" ...
#>  $ DATASET : chr  "CDPR61FL" "CDPR61FL" "CDPR61FL" "CDPR61FL" ...
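
With the pooled extract above, the data can then be summarised directly; a quick cross-tabulation is one sketch of this (column names as in the str() output):

```r
# cross-tabulate RDT results by survey, keeping missing values visible
table(extract$SurveyId, extract$hml35, useNA = "ifany")
```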