Quantitative evidence is generated by research based on traditional scientific methods that produce numerical data. The methods associated with quantitative research in healthcare developed out of the study of the natural and social sciences. It has been suggested that quantitative evidence in medicine originated in eighteenth-century Britain, when surgeons and physicians started using statistical methods to assess the effectiveness of therapies for scurvy, dropsy, fevers, palsies, and syphilis, and of different methods of amputation and lithotomy (Trohler 2000). Since these beginnings, quantitative research has expanded to encompass aspects other than effectiveness, such as incidence, prevalence, etiology of disease, psychometric properties, and measurement of physical characteristics, quality of life, and satisfaction with care.

JBI quantitative reviews focusing on evidence of effectiveness examine the extent to which an intervention, when used appropriately, achieves the intended effect. Evidence about the effects of interventions may come from three main categories of studies: experimental studies, quasi-experimental studies, and observational studies. Ideally, evidence about the effectiveness of interventions should come from good quality randomized controlled trials (RCTs) that explore final clinical end points (or patient-important outcomes) such as morbidity, mortality, and quality of life, rather than surrogate end points such as laboratory tests (Brignardello-Petersen et al 2015). Good empirical evidence indicates that RCTs exploring final clinical end points have frequently contradicted (refuted) both clinical studies exploring surrogate end points and the results of observational studies (Brignardello-Petersen et al 2015). Some authors, however, have claimed that RCTs and observational studies produce consistent results. Thus, the question of agreement between the results of RCTs and observational studies remains controversial (Brignardello-Petersen et al 2015).

Although high quality RCTs exploring final clinical end points are considered the “reference standard” (Brignardello-Petersen et al 2015), reviewers should be aware that results from any single RCT cannot be considered as “final” because results from new RCTs may contradict results from previous RCTs (Brignardello-Petersen et al 2015).

Reviewers should be aware that there is no single, universally accepted terminology for quantitative study designs. Likewise, there is no single comprehensive set of descriptions for the different study designs considered here.

Experimental studies meet three conditions: manipulation, control, and random assignment. Specifically, the researchers manipulate the intervention of interest and the control condition, and they randomly allocate participants to the intervention or control group (Shadish et al 2002). Random allocation refers to an authentically random process, such as the toss of a coin or the use of a table of random numbers (Shadish et al 2002). Randomized controlled trials with different designs (parallel design, cross-over design, cluster design) are examples of experimental studies.

There are also experimental studies (where the intervention of interest and the control condition are manipulated by the researchers) in which allocation does not use an authentically random process. For example, if investigators allocate participants alternately, such as by even and odd dates, they cannot ensure that each participant has an equal chance of entering either group. Experimental studies that use such systematic alternate allocation methods rather than authentic random allocation are experimental studies with pseudo-randomization, or pseudo-RCTs.

Quasi-experimental studies are studies where the intervention of interest and the control condition are controlled (manipulated) by the researchers, but the allocation of participants is neither random nor pseudo-random (Shadish et al 2002). Frequently, participants self-select into groups, or the researchers decide which persons receive the intervention and which receive the control (Shadish et al 2002).
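The distinction between authentic random allocation and systematic (pseudo-random) alternate allocation can be sketched in code. The following is a minimal illustration only; the function names and structure are assumptions for the example, not part of any cited method.

```python
import random

def random_allocation(participants, seed=None):
    # Authentic random allocation: each participant independently has
    # an equal chance of entering either group, analogous to a coin
    # toss or a table of random numbers. (Illustrative helper.)
    rng = random.Random(seed)
    groups = {"intervention": [], "control": []}
    for p in participants:
        groups[rng.choice(["intervention", "control"])].append(p)
    return groups

def alternate_allocation(participants):
    # Pseudo-randomization: systematic alternation (e.g. by order of
    # arrival or even/odd dates). The sequence is predictable, so the
    # next assignment can be anticipated; this is not authentically
    # random. (Illustrative helper.)
    groups = {"intervention": [], "control": []}
    for i, p in enumerate(participants):
        key = "intervention" if i % 2 == 0 else "control"
        groups[key].append(p)
    return groups
```

The predictability of `alternate_allocation` is the methodological weakness: anyone who knows a participant's position in the sequence knows the group assignment in advance, which opens the door to selection bias.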

Observational studies are studies where the intervention of interest and the control condition are not controlled (manipulated) by the researchers; the researchers only observe the presence or absence of the intervention of interest and of the outcome of interest. There are diverse types of observational studies, which can be broadly categorized into analytical observational studies (cohort studies, case-control studies, and analytical cross-sectional studies) and descriptive observational studies (case reports and case series). In a cohort study, investigators select participants based on the presence or absence of exposure to an intervention of interest and follow the groups prospectively to compare the occurrence of the outcome of interest. In a case-control study, researchers select "case" participants (those with the outcome of interest) and "control" participants (those without the outcome of interest) and compare the groups for past exposure or non-exposure to the intervention. In an analytical cross-sectional study, investigators select participants without reference to the intervention or the presence of the outcome of interest; they then simultaneously examine participants for the presence or absence of exposure to the intervention of interest and the presence or absence of the outcome of interest. In case reports and case series, researchers simply describe the characteristics of participants and the outcomes of interventions.

2020 © Joanna Briggs Institute. All Rights Reserved