About ctGATE


Clinical studies are often conducted under idealized and rigorously controlled conditions to improve their internal validity and success rates, but these conditions compromise their external validity (i.e., generalizability to the target populations). Such idealized conditions are often reflected in overly restrictive eligibility criteria: certain population subgroups are excluded on unjustified grounds and are consequently underrepresented. Older adults, in particular, have been underrepresented in cancer studies. When interventions tested in unrepresentative populations are moved into clinical practice, treatment effects can be diminished and the likelihood of adverse outcomes increased in diverse populations. It is therefore imperative to rigorously assess the generalizability of a clinical study, so that stakeholders, including pharmaceutical companies, policymakers, providers, and patients, can understand and anticipate the likely effects of an intervention in the real world.

In the past two decades, a large number of studies have assessed generalizability, but mostly after the fact, in an ad hoc and unsystematic manner, often as an auditing effort by a third party, and focused on specific diseases and sets of trials without a formalized approach. A significant knowledge gap thus remains between the available methods for generalizability assessment and their adoption in research practice. We believe the key barriers are two-fold: (1) the lack of evidence demonstrating the validity of these methods, which in turn leads to a lack of consensus on best practices for generalizability assessment; and (2) the lack of readily available, well-vetted statistical and informatics tools. Motivated to fill this gap, we systematically reviewed the extant methods for generalizability assessment and developed an open-source Clinical Trial Generalizability Assessment Toolbox (ctGATE) with accompanying documentation and tutorials.

The literature review process is described in detail in:

He Z, Tang X, Yang X, Guo Y, George TJ, Charness N, Quan Hem KB, Hogan W, Bian J. Clinical Trial Generalizability Assessment in the Big Data Era: A Review. Clinical and Translational Science. 2020 Feb 14. https://doi.org/10.1111/cts.12764

We describe the process briefly as follows:

We performed the literature search over the following four databases: MEDLINE, Cochrane, PsycINFO, and CINAHL. Following the Institute of Medicine's standards for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we conducted the scoping review in six steps: 1) gaining an initial understanding of clinical trial generalizability assessment, population representativeness, internal validity, and external validity; 2) identifying relevant keywords; 3) formulating four search queries to identify relevant articles in the four databases; 4) screening the articles by reviewing titles and abstracts; 5) reviewing the articles' full texts to further filter out irrelevant ones based on inclusion and exclusion criteria; and 6) coding the articles for data extraction.


Study selection and screening process

We used an iterative process to identify and refine the search keywords and search strategies. As of February 2019, we had identified 5,352 articles from MEDLINE, CINAHL, PsycINFO, and Cochrane. After removing duplicates, 3,569 records were assessed for relevance by two researchers (ZH and XT), who reviewed the titles and abstracts against the inclusion and exclusion criteria; conflicts were resolved by a third reviewer (JB). During the screening process, we also iteratively refined the inclusion and exclusion criteria. Of the 3,569 articles, 3,275 were excluded during title and abstract screening. We then reviewed the full texts of the remaining 294 articles, of which 106 were further excluded based on the exclusion criteria. The inter-rater reliability of the full-text review between the two annotators was 0.901 (Cohen's kappa, p < .001). In total, 187 articles were included in the final scoping review.
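For readers unfamiliar with the statistic, Cohen's kappa measures agreement between two raters after correcting for the agreement expected by chance. The sketch below computes it from scratch in Python on made-up include/exclude decisions; the labels are illustrative, not the actual screening data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if each rater labeled independently at their
    # own marginal rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions for ten full-text articles.
a = ["inc", "inc", "exc", "inc", "exc", "exc", "inc", "inc", "exc", "inc"]
b = ["inc", "inc", "exc", "inc", "exc", "inc", "inc", "inc", "exc", "inc"]
print(round(cohens_kappa(a, b), 3))  # → 0.783
```

A kappa near 0.9, as reported above, indicates almost perfect agreement under the commonly used Landis and Koch benchmarks.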


Data extraction and reporting

We coded and extracted data from the 187 eligible articles along the following dimensions: (1) whether the study performed an a priori generalizability assessment, an a posteriori generalizability assessment, or both; (2) the populations compared and the conclusions of the assessment; (3) the form of the results (e.g., generalizability scores, descriptive comparisons); (4) whether the study focused on a specific disease and, if so, the disease and disease category; (5) whether the study focused on a particular population subgroup (e.g., older adults) and, if so, the specific subgroup; and (6) the type(s) of real-world patient data used to profile the target population (i.e., trial data, hospital data, regional data, national data, or international data). Note that trial data can also be regional, national, or even international, depending on the scale of the trial; regardless, we placed them in the "trial data" category because the study population of a trial is typically small compared to observational cohorts or real-world data. For observational cohorts or real-world data (e.g., EHRs), we extracted the specific scale of the database (i.e., regional, national, or international). For studies that compared the characteristics of different populations to indicate generalizability issues, we further coded the populations compared (e.g., enrolled patients, eligible patients, ineligible patients, the general population) and the types of characteristics compared (i.e., demographic information, clinical attributes and comorbidities, treatment outcomes, and adverse events). We also identified the statistical/informatics method used for each generalizability assessment.
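The coding dimensions above can be pictured as one record per article. The Python dataclass below is an illustrative sketch of such a record; the field names and the example values are hypothetical, not ctGATE's actual schema or a real coded entry.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArticleCoding:
    """One coded article (illustrative; field names are hypothetical)."""
    pmid: str
    a_priori: bool                     # performed an a priori assessment?
    a_posteriori: bool                 # performed an a posteriori assessment?
    output_type: str                   # e.g., "generalizability score" or "descriptive comparison"
    disease: Optional[str] = None      # specific disease, if any
    disease_category: Optional[str] = None
    population_subgroup: Optional[str] = None  # e.g., "older adults"
    data_source: str = "trial data"    # trial / hospital / regional / national / international
    compared_populations: List[str] = field(default_factory=list)
    compared_characteristics: List[str] = field(default_factory=list)
    method: Optional[str] = None       # statistical/informatics method used

# A made-up entry showing how the dimensions fit together.
example = ArticleCoding(
    pmid="12345678",
    a_priori=True,
    a_posteriori=False,
    output_type="generalizability score",
    disease="breast cancer",
    disease_category="cancer",
    population_subgroup="older adults",
    data_source="national data",
    compared_populations=["enrolled patients", "general population"],
    compared_characteristics=["demographic information", "comorbidities"],
)
print(example.pmid, example.output_type)
```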


Clinical Trial Generalizability Assessment Toolbox (ctGATE)

The ctGATE tool lets users search the relevant clinical trial generalizability assessment papers by data source, disease category, type of generalizability assessment method (score vs. non-score output, a priori vs. a posteriori assessment), PMID, and title. The filtered papers are displayed in a table, where the user can (1) click a PMID to view the article's entry in PubMed; (2) click a paper's title to view all the coded information about the study; and (3) view the R/Python tutorial for the corresponding generalizability assessment method.
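Conceptually, the filtering behind that table is an exact-match search over the coded fields. The minimal Python sketch below illustrates the idea; the records, field names, and `search` helper are all hypothetical, not ctGATE's actual implementation.

```python
# Two made-up coded papers (field names are illustrative).
papers = [
    {"pmid": "111", "title": "Trial generalizability in oncology",
     "disease_category": "cancer", "data_source": "national data",
     "assessment": "a priori", "output": "score"},
    {"pmid": "222", "title": "Eligibility criteria in diabetes trials",
     "disease_category": "diabetes", "data_source": "hospital data",
     "assessment": "a posteriori", "output": "non-score"},
]

def search(papers, **filters):
    """Return the papers matching every supplied field exactly."""
    return [p for p in papers
            if all(p.get(k) == v for k, v in filters.items())]

hits = search(papers, disease_category="cancer", output="score")
print([p["pmid"] for p in hits])  # → ['111']
```

In the web tool, each hit in the resulting table then links out to PubMed and to the coded details and tutorial for that study.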


Related Publication

He Z, Tang X, Yang X, Guo Y, George TJ, Charness N, Quan Hem KB, Hogan W, Bian J. Clinical Trial Generalizability Assessment in the Big Data Era: A Review. Clinical and Translational Science. 2020 Feb 14. https://doi.org/10.1111/cts.12764

Related Data

He, Zhe et al. (2020), Clinical trial generalizability assessment in the big data era: a review, v6, Dryad, Dataset, https://doi.org/10.5061/dryad.hmgqnk9bq

Funding

This work was supported by National Institute on Aging grant R21AG061431 (PI: He). It was partially supported by the UF-FSU Clinical and Translational Science Award UL1TR001427, funded by the National Center for Advancing Translational Sciences.