Version 1.2: prediction model updated January 14, 2019; GitHub pages updated March 16, 2021.
Qoala-T was developed by Lara Wierenga, PhD, and Eduard Klapwijk, PhD, at the Brain and Development Research Center.
Qoala-T is a supervised-learning tool that assesses the accuracy of manual quality control of T1-weighted imaging scans and their automated neuroanatomical labeling as processed in FreeSurfer. It is particularly intended for use with developmental datasets. This package contains the data and R code described in Klapwijk et al. (2019), see https://doi.org/10.1016/j.neuroimage.2019.01.014. The protocol of our in-house developed manual QC procedure can be found here.
We have also developed an app using R Shiny with which the Qoala-T model can be run without having R installed; see the Qoala-T app (source code to run it locally can be found here).
- To be able to run the Qoala-T model, T1 MRI images should be processed in FreeSurfer. The models in the present version were developed for FreeSurfer v6.0. We have also tested this for FreeSurfer v7.1.0; see more details below.
- Use the following scripts to extract the information needed to perform Qoala-T: for FreeSurfer v6.0 use Stats2Table.R; for FreeSurfer v7.1.1 use Stats2Table_fs7.R.
Note: the Stats2Table.R script replaces the earlier procedure of extracting the necessary .txt files with the fswiki script or stats2table_bash_qoala_t.sh, which then had to be merged using this R script.
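For intuition, this is the kind of parsing the extraction step performs. The sketch below runs on a hand-made `aseg.stats` fragment (the file contents and values are made up for illustration; real FreeSurfer stats files contain many more measures and rows):

```shell
# Create a tiny mock aseg.stats fragment (values are invented)
cat > aseg.stats <<'EOF'
# Measure BrainSeg, BrainSegVol, Brain Segmentation Volume, 1243340.0, mm^3
# Measure SurfaceHoles, SurfaceHoles, Total number of defect holes, 42, unitless
# ColHeaders Index SegId NVoxels Volume_mm3 StructName
  1   4   7890   7890.3  Left-Lateral-Ventricle
EOF
# Global measures live on "# Measure" comment lines (value is the 4th comma-separated field)
grep '^# Measure SurfaceHoles' aseg.stats | awk -F', ' '{print $4}'
# Per-structure volumes are whitespace-separated rows (structure name in column 5)
awk '$5 == "Left-Lateral-Ventricle" {print $4}' aseg.stats
```

Stats2Table.R does this across all subjects and stats files at once, producing one table ready for the Qoala-T scripts.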
A. Predicting scan Qoala-T score using the BrainTime model (FreeSurfer v6.0)
With this R script, Qoala-T scores for a dataset are estimated using a supervised-learning model. This model is based on 784 T1-weighted imaging scans of subjects aged between 8 and 25 years (53% female). The manual quality assessment is described in the Qoala-T manual, Manual quality control procedure for structural T1 scans, also available in the supplemental material of Klapwijk et al. (2019).
To run the model-based Qoala-T option, open Qoala_T_A_model_based_github.R and follow the instructions. Alternatively, you can run this option without having R installed; see the Qoala-T app (source code here).
An example output table (left) and output graph (right) showing the Qoala-T score of each scan are displayed below. The figure shows the number of included and excluded predictions. The grey area represents the scans that are recommended for manual quality assessment.
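To illustrate how such output can be post-processed, here is a sketch that sorts scans into include / exclude / manual-check bins. The cutoffs (below 30 exclude, above 70 include) and the scan IDs are assumptions made for the example, not the model's actual thresholds:

```shell
# Mock Qoala-T output (scores are invented for this example)
cat > qoala_scores.csv <<'EOF'
scan,qoala_score
sub-001,92
sub-002,55
sub-003,12
EOF
# Assumed cutoffs: >70 include, <30 exclude, otherwise the grey area
awk -F, 'NR > 1 {
  if      ($2 > 70) verdict = "include"
  else if ($2 < 30) verdict = "exclude"
  else              verdict = "manual check"
  print $1 ": " verdict
}' qoala_scores.csv
```

Scans landing in the grey area would then go through the manual QC protocol before a final decision.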
- NEW: Using this Qoala-T Jupyter Notebook is the easiest way to get from your directory with FreeSurfer-processed data to Qoala-T predictions based on the BrainTime model. The only prerequisite is that you can run Jupyter Notebooks in R, for example by installing Anaconda and then following these instructions.
B. Predicting scan Qoala-T score by rating a subset of your data (FreeSurfer v6.0 and FreeSurfer v7.1.0)
- With this R script, an in-house developed manual QC protocol can be applied to a subset of the dataset (e.g., 10%; the larger the subset, the more reliable the results).
- To run the subset-based Qoala-T option open Qoala_T_B_subset_based_github.R and follow the instructions.
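Selecting the subset to rate can be as simple as a random draw from your subjects list. A minimal sketch, with generated subject IDs standing in for a real dataset:

```shell
# Generate a mock subjects list (50 invented IDs)
printf 'sub-%03d\n' $(seq 1 50) > all_subjects.txt
# Draw a random ~10% of subjects for manual rating
n=$(wc -l < all_subjects.txt)
k=$(( (n + 9) / 10 ))               # ceiling of 10% of the sample
shuf all_subjects.txt | head -n "$k" > rating_subset.txt
echo "rate these $k scans manually"
```

A larger or stratified subset (e.g., covering all scan sites or age bins) will generally yield a more reliable subset-based model.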
A flowchart of these processes can be observed in A and B below.
- NEW: Using this Qoala-T Jupyter Notebook - subset-based is the easiest way to get from your directory with FreeSurfer-processed data to Qoala-T predictions once you have manually rated a subset of your data. The only prerequisite is that you can run Jupyter Notebooks in R, for example by installing Anaconda and then following these instructions.
- When using Qoala-T within the longitudinal FreeSurfer stream, the QC predictions should be run on the output of the first step of the processing pipeline (step 1, the cross-sectional processing of the timepoints). It will not work with output from the longitudinal stream, since longitudinal processing does not provide the number of surface holes, which is needed for the prediction.
- When running Qoala-T right after cross-sectional processing, poor-quality scans/segmentations can be removed before running step 2, in which the template from all timepoints is created. This way the template will not be affected by a poor-quality timepoint.
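For illustration, the three stages of the longitudinal stream might look like the commands below (subject and timepoint names are hypothetical; only the standard `recon-all` flags are used):

```shell
# Step 1: cross-sectional processing of each timepoint -> run Qoala-T on this output
recon-all -s sub01_tp1 -i sub01_tp1.nii.gz -all
recon-all -s sub01_tp2 -i sub01_tp2.nii.gz -all
# Step 2: build the within-subject template from the timepoints that passed QC
recon-all -base sub01_base -tp sub01_tp1 -tp sub01_tp2 -all
# Step 3: longitudinal processing of each timepoint against the template
recon-all -long sub01_tp1 sub01_base -all
recon-all -long sub01_tp2 sub01_base -all
```

A timepoint excluded after Qoala-T/manual QC is simply dropped from the `-tp` arguments in step 2.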
In order to continuously evaluate the performance of the Qoala-T tool, we will report predictive accuracies for different datasets on this page. We invite researchers who performed both manual QC and used Qoala-T to share their performance metrics and some basic information about their sample. This can be done by creating a pull request on this GitHub page or by e-mailing e.klapwijk@essb.eur.nl. The table below reports predictive accuracies in new datasets when using the BrainTime model (i.e., option A, which can be run using the Shiny app).
Sample name or lab name | Institute | Author name(s) | Group characteristics (e.g., developmental, patient group, elderly) | Total N | Age range (years) | Field strength | T1 sequence type (e.g., MPRAGE, T1 3D), field of view, voxel dimensions | doi | Qoala-T version used (current = v1.2) | Accuracy | Specificity | Sensitivity | Manual QC protocol used (e.g., Qoala-T protocol, in-house) | Manual QC distribution (i.e., N per quality category)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
BESD | Leiden University | Moji Aghajani, Eduard Klapwijk et al. | Adolescents with conduct disorder, autism spectrum disorder, and typically developing | 112 | 15-19 | 3T | T1 3D, FOV 224x177x168, voxel size 0.875 x 0.875 x 1.2 mm | https://doi.org/10.1111/jcpp.12498; https://doi.org/10.1016/j.biopsych.2016.05.017 | v1.2 | 0.893 | 0.978 | 0.524 | Qoala-T protocol | excellent=19, good=51, doubtful=21, failed=21 |
ABIDE (subset) | NITRC | Di Martino et al. | autism spectrum disorders, typically developing controls | 760 | 6-39 | 3T | site-specific, see http://fcon_1000.projects.nitrc.org/indi/abide/abide_I.html | https://doi.org/10.1038/mp.2013.78 | v1.2 | 0.809 | 0.815 | 0.783 | from MRIQC project: T1 images were rated aided by FreeSurfer surface reconstructions | good/accept=608, doubtful=14, failed/exclude=138 |
MCN Basel | University of Basel | David Coynel | healthy young adults | 1773 | 18-35 | 3T | MPRAGE, 256x256x176, 1mm3 | http://dx.doi.org/10.1523/ENEURO.0222-17.2018 | v1.1 | 0.963 | 0.985 | 0.524 | in-house visual inspection of raw data | good/excellent: N=1691; doubtful/bad: N=82 |
We have assessed the performance of the Qoala-T tool on the FreeSurfer v7.1.0 release. We tested this using 10-fold cross-validation to see if we could replicate the FreeSurfer v6.0 results published in paragraph 3.3 of Klapwijk et al. (2019). The results are highly similar, yet sensitivity is somewhat lower and shows larger variation. This indicates that FreeSurfer v7.1.0 gives more conservative results, as some scans that would previously be rated as include are now flagged as manual check or exclude. Note that the random forest model parameters were identical to the ones used in the publication of Klapwijk et al. (2019). In addition, we used the manual quality ratings based on the v6.0 output, so the accuracy of the segmentations may differ between the two FreeSurfer versions, which we did not assess here. For data processed in FreeSurfer v7.1.0, we therefore recommend the subset-based Qoala-T option (Qoala_T_B_subset_based_github.R) rather than the model-based option.
Fold | AUC | Accuracy | Sensitivity | Specificity |
---|---|---|---|---|
1 | 0.977 | 0.976 | 0.806 | 0.985 |
2 | 0.989 | 0.976 | 0.871 | 0.982 |
3 | 0.974 | 0.970 | 0.750 | 0.982 |
4 | 0.970 | 0.975 | 0.813 | 0.983 |
5 | 0.968 | 0.971 | 0.710 | 0.985 |
6 | 0.980 | 0.970 | 0.906 | 0.973 |
7 | 0.980 | 0.976 | 0.935 | 0.978 |
8 | 0.971 | 0.973 | 0.844 | 0.980 |
9 | 0.967 | 0.973 | 0.813 | 0.982 |
10 | 0.973 | 0.973 | 0.871 | 0.978 |
Mean | 0.975 | 0.973 | 0.832 | 0.981 |
SD | 0.007 | 0.002 | 0.069 | 0.004 |
If you have any questions or suggestions, don't hesitate to get in touch. Please leave a message on the Issues page.
When using Qoala-T please include the following citation:
Klapwijk, E.T., van de Kamp, F., van der Meulen, M., Peters, S. and Wierenga, L.M. (2019). Qoala-T: A supervised-learning tool for quality control of FreeSurfer segmented MRI data. NeuroImage, 189, 116-129. https://doi.org/10.1016/j.neuroimage.2019.01.014
Eduard T. Klapwijk, Ferdi van de Kamp, Mara van der Meulen, Sabine Peters, and Lara M. Wierenga