Glossary of some terms used in clinical research and clinical trials

 

Definitions of terminology often used in clinical research can be found at

Here we elaborate on some of these terms to facilitate comprehension and the conduct of clinical research.

 

Blinded Studies

When a study is “blinded”, subjects do not know which group they have been randomly assigned to. Blinded study designs are meant to eliminate bias toward or against the therapy or placebo. In a double-blind study, neither the physician nor the subject knows which treatment has been assigned. In a single-blind study, the subject does not know which treatment they are receiving, but the physician does.

 

Case-Control Study

A Case-Control study is an observational epidemiological investigation. This type of study compares a group of subjects with a particular disease or disorder (“cases”) with a group of subjects without (“controls”). The proportion of each group having a history of a particular exposure or characteristic of interest is then compared. This type of study is generally retrospective and demonstrates association, but not cause and effect. It has been used, for example, to demonstrate the association between the use of oral contraceptives and thromboembolism.
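To illustrate how the exposure history of cases and controls is compared, the short Python sketch below computes an odds ratio, the measure of association typically reported in case-control studies, from a hypothetical 2×2 table; all counts are invented for illustration.

```python
# Illustrative only: odds ratio from a hypothetical case-control 2x2 table.
cases_exposed, cases_unexposed = 40, 60          # exposure history among cases
controls_exposed, controls_unexposed = 20, 80    # exposure history among controls

# odds of exposure among cases divided by odds of exposure among controls
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"Odds ratio: {odds_ratio:.2f}")  # 2.67 - exposure is more common among cases
```

An odds ratio above 1 indicates an association between the exposure and the disease but, as noted above, not cause and effect.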

 

Case reports and Case series

Case reports and case series are usually used to provide a detailed characterisation of a clinical syndrome, and are sometimes used to highlight unusual clinical presentations.

A case report usually describes the experience of a single patient; if more than one patient is described, it is referred to as a case series (ref: Dekkers OM, et al. Distinguishing case series from cohort studies. Ann Intern Med 2012;156(1 Part 1):37-40).

A case series is usually a coherent and consecutive description of a set of cases with the condition or disease of interest. Observations can be made retrospectively or prospectively. Sometimes the distinction between a case series and a cohort study is blurred, particularly for a single-arm cohort study or a prospective case series (ref: Murad MH, Sultan S, Haffar S, et al. Methodological quality and synthesis of case series and case reports. BMJ Evidence-Based Medicine 2018;23:60-63. http://dx.doi.org/10.1136/bmjebm-2017-110853). Some perspectives on the unclear definition of, or cut-off between, case series and cohorts are discussed in:

  1. Mathes T, Pieper D. Clarifying the distinction between case series and cohort studies in systematic reviews of comparative studies: potential impact on body of evidence and workload. BMC Med Res Methodol 2017;17:107. https://doi.org/10.1186/s12874-017-0391-8
  2. Schünemann HJ, Cook D, Guyatt G. Methodology for antithrombotic and thrombolytic therapy guideline development: American College of Chest Physicians Evidence-based Clinical Practice Guidelines (8th Edition). Chest. 2008 Jun;133(6 Suppl):113S-122S. doi: 10.1378/chest.08-0666. Erratum in: Chest. 2008 Aug;134(2):473. PMID: 18574261.

 

Case Study

This type of study relies on literature review or on a physician’s clinical cases to evaluate the possibility of an association between an observed effect and a specific environmental exposure. It is useful when the disease is uncommon and is caused exclusively, or almost exclusively, by a single kind of exposure.

  

Clinical Trial Designs

Jadad (ref: Jadad AR. Randomised Controlled Trials: A User’s Guide. 1998) provides a useful guide to the classification of clinical trials, based on:

  • the aspect of the intervention being evaluated, that is, either
    • phase I, II, III or IV (see Phases of Biomedical Clinical Trials below)
    • efficacy or effectiveness
  • the objective or hypothesis of the study
    • Superiority, non-inferiority, equivalence or feasibility
  • how participants are exposed to or receive the interventions
    • Parallel
    • Cluster
    • Factorial
    • Crossover
    • Adaptive
  • the number of participants
    • N-of-1 trials
    • Fixed size
    • Sequential trials
  • whether investigators and/or participants know which intervention each participant is receiving.
    • Open trials
    • Single blind trials
    • Double blind trials
    • Triple and quadruple-blind trials

The following materials may be useful for understanding different clinical trial designs and their uses:

  1. Jadad AR. Randomised Controlled Trials: A User’s Guide. 1998. Available from https://www1.cgmh.org.tw/intr/intr5/c6700/OBGYN/F/Randomized%20tial/chapter3.html, accessed December 7, 2021
  2. Stolberg HO, Norman G, Trop I. Randomized controlled trials. American Journal of Roentgenology. 2004 Dec;183(6):1539-44., https://doi.org/10.2214/ajr.183.6.01831539
  3. Spieth PM, Kubasch AS, Penzlin AI, Illigens BM, Barlinn K, Siepmann T. Randomized controlled trials - a matter of design. Neuropsychiatr Dis Treat. 2016;12:1341-1349. Published 2016 Jun 10. doi:10.2147/NDT.S101938, PMID: 27354804

Below we elaborate on some clinical trial designs.

Simple or parallel trials: Here, two or more groups of patients are allocated to different treatments (usually a control and an experimental intervention), with the arms running simultaneously (in parallel). This is the commonest design for an RCT. It often uses the most elementary form of randomisation, which is frequently explained to participants as ‘like tossing a coin’. However, because coin tossing is not reproducible and cannot be checked, it is preferable to use a table of random numbers or a computer-generated randomisation list. The disadvantage of simple randomisation is that it may result in markedly unequal numbers of participants being allocated to each group. Simple randomisation may also lead to a skewed distribution of factors that could affect the outcome of the trial; for instance, in a trial involving both sexes, one arm may end up with too many subjects of the same sex. This is particularly true in small studies.
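The imbalance that simple randomisation can produce in small trials is easy to demonstrate. The Python sketch below, for an invented two-arm trial of 20 participants, repeats a ‘coin toss’ allocation a few times and counts the group sizes; it is an illustration only, not a recommended way to generate a real randomisation list.

```python
# A minimal sketch of simple ("coin toss") randomisation for a hypothetical
# two-arm trial of 20 participants. Repeated runs show how unequal the arms
# can become purely by chance in a small study.
import random

random.seed(1)  # fixed seed so the illustration is reproducible

for run in range(1, 4):
    allocations = [random.choice(["control", "intervention"]) for _ in range(20)]
    print(f"run {run}: {allocations.count('control')} control, "
          f"{allocations.count('intervention')} intervention")
```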

Factorial trials can be used to improve efficiency by testing two or more hypotheses simultaneously; some factorial studies are more complex, involving a third or fourth factor. In the simplest (2 × 2) design, subjects are first randomised to intervention A or B to address one hypothesis and then, within each of those groups, randomised again to intervention C or D to evaluate a second question. The advantage of this design is its ability to answer more than one question in a single trial. It also allows the researcher to assess interactions between interventions, which cannot be achieved with single-factor studies.
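As a sketch of the allocation logic in a 2 × 2 factorial design, the hypothetical example below randomises each participant twice, once between interventions A and B and once, independently, between C and D, so every participant ends up in one of the four cells of the design. The participant labels are invented.

```python
# Hypothetical 2x2 factorial allocation: two independent randomisations per
# participant, one for each study hypothesis.
import random

random.seed(42)
for i in range(1, 9):                         # eight hypothetical participants
    first = random.choice(["A", "B"])         # randomisation for the first hypothesis
    second = random.choice(["C", "D"])        # independent randomisation for the second
    print(f"Participant {i}: {first} + {second}")
```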

Crossover trials, as the name suggests, are trials in which each subject acts as their own control by receiving at least two interventions. A subject receives the test intervention and the standard intervention (or placebo) in different periods of the trial, and the order in which the interventions are received is alternated between participants. A crossover design is not limited to two interventions; researchers can design crossover studies involving three interventions, for example two treatments and a control. The order in which each individual receives the interventions should be determined by random allocation, and there should be a washout period before the next intervention is administered to avoid any “carry over” effects. The design is therefore only suitable where the interventions have no long-term effect or where the study drug has a short half-life. Since each subject acts as their own control, the design eliminates inter-subject variability and fewer subjects are required. Crossover studies are consequently often used in early phase studies such as pharmacokinetic studies.
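The sketch below illustrates sequence allocation for a hypothetical two-period, two-treatment crossover trial: each participant is randomised to receive the interventions in the order test then standard, or standard then test, with a washout between the two periods. The participant numbers and sequences are invented for illustration.

```python
# Randomising the order of interventions in a hypothetical two-period
# crossover trial (each participant receives both interventions).
import random

random.seed(7)
sequences = [("test", "standard"), ("standard", "test")]
for i in range(1, 7):                               # six hypothetical participants
    period1, period2 = random.choice(sequences)
    print(f"Participant {i}: period 1 = {period1}, washout, period 2 = {period2}")
```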

Cluster trials: This is where an intervention is allocated to groups of people, or clusters, rather than to individuals, and compared against a control. The clusters are often defined by geographical area, community or health centre, and the design is mainly used to address public health questions. An example would be testing the effect of an education programme, versus a control, in reducing deaths among people who have suffered a heart attack.

Adaptive design, sometimes referred to as a “flexible design”, allows pre-defined adaptations to the trial design and/or statistical procedures after the trial has started, without undermining its validity and integrity. An adaptive trial design allows the study to be modified as data accrue. The purpose is not only to identify the clinical benefits of the test treatment efficiently, but also to increase the probability of success of the clinical development programme.

Among the benefits of adaptive designs are that they reflect medical practice in the real world, and that they are ethical with respect to both the efficacy and the safety of the test treatment, making them efficient in the early and late phases of clinical development. The main drawbacks, however, are concerns about whether the p-value or confidence interval for the treatment effect obtained after a modification is reliable or correct. In addition, the use of adaptive design methods may lead to a substantially different trial that is unable to address the scientific/medical questions it originally set out to answer. There is also the risk of introducing bias in subject selection or in the way the results are evaluated. In practice, commonly seen adaptations include, but are not limited to: a change in sample size or in the allocation to treatments; the deletion, addition or change of treatment arms; a shift in the target patient population, such as changes to inclusion/exclusion criteria; a change in study endpoints; and a change in study objectives, such as switching from a superiority to a non-inferiority trial.

Before adopting an adaptive design, it is prudent to discuss it with the regulators to establish the level of modification that will be acceptable to them and to understand the regulatory requirements for review and approval. Adaptive trial designs can be used in rare, life-threatening diseases with unmet medical needs, as they speed up the clinical development process without compromising safety and efficacy. Commonly considered strategies include adaptive seamless phase I/II studies, in which several doses or schedules are run at the same time and those that prove ineffective or toxic are dropped. Similar approaches can be used for seamless phase II/III studies.

An adaptive study design may be appropriate where interim analysis of data can be used to make predefined changes to aspects of the study design, such as stopping the study early for superiority, inferiority or futility, dropping arms, or adjusting doses. This may involve analysis of biomarkers or genetic typing to define responders. If any interim analysis is planned, it must be explained in the study protocol, along with the statistical reasoning for adapting the original plan. (ref: ICH Topic E9 Statistical Principles for Clinical Trials, CPMP/ICH/363/96, Sep 1998)
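As a highly simplified sketch of such a pre-defined interim analysis, the Python example below compares an interim z statistic for the difference between two response rates against illustrative efficacy and futility boundaries. The response counts and boundary values are invented; in a real adaptive trial the boundaries would be derived formally (for example with group-sequential methods) and pre-specified in the protocol.

```python
# A minimal sketch of a pre-specified interim look in an adaptive design,
# using hypothetical response counts and illustrative stopping boundaries.
from math import sqrt

def z_two_proportions(x1, n1, x2, n2):
    """Z statistic for the difference between two proportions (pooled)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical interim data: 60/100 responders on test vs 45/100 on control
z = z_two_proportions(60, 100, 45, 100)

EFFICACY_BOUNDARY = 2.8   # illustrative value only
FUTILITY_BOUNDARY = 0.5   # illustrative value only

if z >= EFFICACY_BOUNDARY:
    print(f"z = {z:.2f}: stop early for efficacy")
elif z <= FUTILITY_BOUNDARY:
    print(f"z = {z:.2f}: stop early for futility")
else:
    print(f"z = {z:.2f}: continue to the next stage as planned")
```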

Equivalence trial: a new treatment or intervention is tested to see whether it is equivalent to the current treatment. It is becoming increasingly difficult to demonstrate that a particular intervention is better than an existing control, particularly in therapeutic areas where drug development is already well advanced. The goal of an equivalence study is to show that the new intervention is no worse than an existing treatment while offering some other benefit, for example being less toxic or less invasive. It is important, however, to ensure that the active control selected is an established standard treatment for the indication being studied, and that it is used at the dose and in the formulation proven to be effective. The studies that demonstrated the benefit of the control against placebo must be sufficiently recent that no important medical advances or other changes have occurred since, the populations in which the control was tested should be similar to those planned for the new trial, and the researcher must specify at the outset what is meant by equivalence.

Non-inferiority trial: a new treatment or intervention is tested to see whether it is non-inferior to the current gold standard. The requirements for a non-inferiority study are similar to those for equivalence studies: there should be similarities in the populations, concomitant therapy and dosage of the interventions. It is impossible to show statistically that two therapies are identical, as an infinite sample size would be required. Instead, if the effect of the intervention falls sufficiently close to that of the standard, within a pre-defined margin, the intervention is deemed no worse than the control.
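The usual decision rule can be sketched as follows, assuming hypothetical response rates and a pre-specified non-inferiority margin of 10 percentage points: the new treatment is declared non-inferior if the lower bound of the confidence interval for the difference (new minus standard) lies above minus the margin. The same confidence-interval logic, with both an upper and a lower margin, underlies equivalence trials.

```python
# A sketch of a non-inferiority comparison of two response rates using a
# normal-approximation confidence interval. All numbers are hypothetical.
from math import sqrt

new_resp, new_n = 158, 200           # hypothetical results for the new treatment
std_resp, std_n = 155, 200           # hypothetical results for the standard treatment
margin = 0.10                        # pre-specified non-inferiority margin

p_new, p_std = new_resp / new_n, std_resp / std_n
diff = p_new - p_std
se = sqrt(p_new * (1 - p_new) / new_n + p_std * (1 - p_std) / std_n)
lower_95 = diff - 1.96 * se          # lower bound of a 95% confidence interval

print(f"difference = {diff:.3f}, lower 95% bound = {lower_95:.3f}")
print("non-inferior" if lower_95 > -margin else "non-inferiority not demonstrated")
```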

 

Cohort Study

A Cohort study is a longitudinal investigation in which subjects with differing exposures to a suspected factor are identified, and repeated observations of the same variables are then made to identify health effects of interest over some period, commonly years rather than weeks or months. The occurrence rates of the disease of interest are measured and related to estimated exposure levels. Cohort studies can be performed either prospectively or retrospectively from historical records. This type of study also demonstrates association, but not cause and effect. Cohort studies are usually observational, although they can also be structured as longitudinal randomised experiments. In medicine, the design is often used to uncover predictors of certain diseases.
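To illustrate how occurrence rates are related to exposure in a cohort, the sketch below computes a risk ratio from a hypothetical cohort of exposed and unexposed subjects followed over the same period; all counts are invented.

```python
# Illustrative only: risk ratio from a hypothetical cohort study.
exposed_cases, exposed_total = 30, 1000        # disease occurrence in the exposed group
unexposed_cases, unexposed_total = 10, 1000    # disease occurrence in the unexposed group

risk_exposed = exposed_cases / exposed_total        # 3.0%
risk_unexposed = unexposed_cases / unexposed_total  # 1.0%
print(f"Risk ratio: {risk_exposed / risk_unexposed:.1f}")  # 3.0
```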

 

Contract Research Organisation (CRO)

This is a company hired by another company or research centre to take over certain parts of running a clinical trial. The company may design, manage, and monitor the trial, and analyse the results (ref:https://www.cancer.gov/publications/dictionaries/cancer-terms/def/contract-research-organization).

 

Phases of Biomedical Clinical Trials

(ref: NIH Clinical Research Trials: The Basics, https://www.nih.gov/health-information/nih-clinical-research-trials-you/basics, Accessed November 30, 2021 and WHO clinical trials https://www.who.int/health-topics/clinical-trials, accessed November 30 2021)

Biomedical clinical trials are conducted in a series of steps called “phases.” Each phase has a different purpose and helps researchers answer different questions.

  • Phase I trials: Researchers test a drug or vaccine* in a small group of people (20–80) for the first time to evaluate safety (safe dosage range) and identify side effects.

*For vaccine trials this phase looks at safety, strength of immune responses and optimal dosage

  • Phase II trials: Drugs found to be safe in phase I are given to a larger group of people (100–300) to determine effectiveness and to further evaluate safety, that is, to monitor for any adverse effects.

* For vaccine trials this phase looks at side effects, strength of immune response and dosage

  • Phase III trials: The new drug or treatment is given to large groups of people (1,000–3,000) to confirm its effectiveness, monitor side effects, compare it with standard or similar treatments, and collect information that will allow the new drug or treatment to be used safely.

*For vaccine trials this phase looks at effectiveness of immune responses and further monitors for safety in a much larger population

  • Phase IV trials: After a drug/vaccine is approved by the FDA and made available to the public, researchers track its safety in the general population, seeking more information about the drug/vaccine’s benefits and optimal use.

 

Power

It is important that an intervention study is able to detect the anticipated effect of the intervention with a high probability. To this end, the sample size needs to be chosen so that the power is high enough. In clinical trials, the minimum value nowadays considered to demonstrate adequate power is 0.80. This means that the researcher accepts that one in five times (that is, 20% of the time) they will miss a real difference.

This false negative rate, the probability of erroneously failing to detect a real difference, is referred to in statistics by the letter β. The “power” of the study is then equal to (1 – β) and is the probability of detecting a difference when there truly is one. For pivotal or large studies, the power is sometimes set at 90% to reduce the possibility of a “false negative” result to 10%.
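The relationship between power, significance level and sample size can be sketched with the standard normal-approximation formula for comparing two means: n per group = 2 × ((z_alpha + z_beta) / d)^2, where z_alpha and z_beta are the standard normal quantiles for the chosen significance level and power, and d is the standardised effect size. The Python example below assumes a hypothetical effect size of 0.5 and shows how raising the power from 80% to 90% increases the required sample size.

```python
# A minimal sketch of the normal-approximation sample size needed to achieve a
# chosen power for a two-sided comparison of two means. The effect size is a
# hypothetical choice for illustration.
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for power = 0.80
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

print(round(n_per_group(0.5, power=0.80)))  # ~63 participants per group
print(round(n_per_group(0.5, power=0.90)))  # ~84 per group - higher power needs more subjects
```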

 

Qualitative data 

Qualitative data is non-numerical (words or text) and is not usually measurable (ref: https://www.nihlibrary.nih.gov/resources/subject-guides/health-data-resources/common-data-types-public-health-research). Qualitative data tends to be descriptive.

 

Quantitative data 

Quantitative data is measurable data, usually involving counts of people or events; it can be used for making comparisons, associations and inferences, and as such is expressed in numbers (ref: Wang 2013, as cited in https://www.nihlibrary.nih.gov/resources/subject-guides/health-data-resources/common-data-types-public-health-research).

 

Randomisation

Whatever method of randomisation is used, blinding is also key. Firstly, it is important that the person recording the allocation to the different groups does not know which allocation comes next, as they may consciously or unconsciously choose which patients to randomise based on what the next allocation would be. An easy way to avoid this is to place the random allocations in sealed, sequentially numbered envelopes, with each envelope opened only after the patient has been recruited, in the order of the number sequence. This reduces bias in allocation but does not eliminate it completely, as the person doing the allocation may try to read the allocation through the envelope, for example by holding it up to the light. If this is a concern, a telephone- or internet-based randomisation service can be used, where the person enrolling the patient calls a number or visits a website to receive the allocation at the point at which randomisation should be done.

The trial should also be double-blinded where practicable, so that neither the research team (including those analysing the data) nor the participant knows the group to which the participant is assigned. This reduces any inequality in administering the intervention or in performing assessments and measuring outcomes, further removing the effects of bias. (Ref: www.blackwellpublishing.com, Bias in randomized controlled trials)

Where simple randomisation is likely to lead to unequal distributions in a small study, participants might instead be randomised in small blocks of, for example, four participants, each block containing an equal number of control and intervention allocations (in this case two of each) in random order. This means that the allocation in the study overall can never become markedly unequal.
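A minimal Python sketch of this permuted-block approach is shown below, assuming a hypothetical two-arm trial with blocks of four (two control and two intervention allocations per block). The same idea extends to unequal allocation ratios by changing the composition of the block; in practice the list would be generated once, held centrally and concealed from recruiters.

```python
# A sketch of permuted-block randomisation for a hypothetical two-arm trial.
import random

random.seed(2024)  # fixed seed only so this illustration is reproducible

def blocked_allocation(n_blocks,
                       block=("control", "control", "intervention", "intervention")):
    sequence = []
    for _ in range(n_blocks):
        b = list(block)
        random.shuffle(b)            # random order within each block of four
        sequence.extend(b)
    return sequence

allocations = blocked_allocation(5)  # 20 participants in total
print(allocations.count("control"), allocations.count("intervention"))  # always 10 and 10
```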

In addition to allocation concealment and blinding, the type of randomisation should be considered: simple, block, stratified or minimised. If stratification is required to ensure that the randomised groups remain balanced, thought should also be given to how blinding will be maintained.

The allocation ratio should also be specified, unless the trial uses simple randomisation with a 1:1 allocation ratio, equivalent to a coin toss, in which the two interventions are allocated in equal proportions. Any other ratio results in one intervention being allocated more often than the other. In some trials, participants are intentionally allocated in unequal numbers to each intervention, for example to gain more experience with a new procedure or to limit the costs of the trial. (ref: Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869)

Patients in randomised controlled trials should be analysed within the group to which they were allocated, irrespective of whether they experienced the intended intervention (intention to treat analysis).  This maintains the advantages of random allocation, which may be lost if subjects are excluded from analysis through, for example, withdrawal or failure to comply. (Ref: www.blackwellpublishing.com, Bias in randomized controlled trials).

The analysis of randomised controlled trials should be focused on estimating the size of the difference in predefined outcomes between the different intervention groups.

Randomisation offers a robust method of preventing selection bias, but in some circumstances it may be unnecessary and other designs preferable; however, the conditions under which non-randomised designs can yield reliable estimates are very limited. Non-randomised studies are most useful where the effects of the intervention are large or where the effects of selection, allocation and other biases are relatively small. They may also be used for studying rare adverse events, which a trial would have to be implausibly large to detect.

There are several design options for a randomised controlled study; these are described in the section “Clinical Trial Designs” above.

For more information on randomisation, visit: http://www.bmj.com/content/316/7126/201

 

Randomised Controlled Trial

In this type of study, subjects are randomly assigned to either the control group or the investigational group. It is generally considered the most rigorous study design.

  • Active controlled: The control group receives the typically used or approved treatment while the investigational group receives the treatment or intervention being studied.
  • Placebo controlled: The control group receives an inactive product (placebo) while the investigational group receives the treatment or intervention being studied.

Randomised controlled trials are the most rigorous way of determining whether there is a causal association between an intervention and an outcome; other study designs cannot rule out the possibility that the association was caused by another factor. Therefore, for interventional studies randomisation should be performed unless it is limited by ethical concerns or infeasible for practical reasons. Also, if the intervention relies on skills, such as counselling or surgical procedures, it should be considered whether these are sufficiently well developed to permit evaluation. (ref: BMJ 1998;316:201)

 

Sample Size

Sample size, simply put, is the number of participants in a study. It is a basic statistical principle that the sample size be defined before a clinical study starts, so as to avoid bias in the interpretation of the results. If there are too few subjects, the results cannot be generalised to the target population, because the sample will not adequately represent it; furthermore, the study may be unable to detect a real difference between the test groups, potentially leading to a false conclusion. Exposing participants to an intervention in a trial that is invalidated by too small a dataset renders the trial unethical.

On the other hand, if more subjects than required are enrolled, more individuals are put at risk from the intervention, which also makes the study unethical, as well as wasting precious resources. A further requirement is that every individual in the chosen population should have an equal chance of being included in the sample, and the selection of one participant should not affect the chance of another being selected; this is why samples are selected at random.

The calculation of an adequate sample size is thus crucial in any clinical study; it is the process of determining the optimum number of participants required to arrive at an ethically and scientifically valid result. Factors to consider when calculating the final sample size include the expected withdrawal rate, any unequal allocation ratio, and the objective and design of the study. The sample size must always be calculated before a study is initiated and, as far as possible, should not be changed during its course.
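As a sketch of how such a calculation might look for a trial comparing two proportions, the Python example below uses the usual normal-approximation formula and then inflates the result for an expected withdrawal rate. The event rates, significance level, power and dropout rate are hypothetical choices, not recommendations.

```python
# A sketch of a sample-size calculation for comparing two proportions, with an
# inflation for expected withdrawals. All inputs are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80, dropout=0.10):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2
    return ceil(n / (1 - dropout))   # inflate for the expected withdrawal rate

# e.g. detecting an improvement in response rate from 40% to 55%
print(n_per_group(0.40, 0.55))       # about 189 participants per group
```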

 

Types of Clinical Trials

(ref: NIH Clinical Research Trials: The Basics, https://www.nih.gov/health-information/nih-clinical-research-trials-you/basics, Accessed November 30, 2021)

  • Prevention trials look for better ways to prevent a disease in people who have never had the disease or to prevent the disease from returning. Approaches may include medicines, vaccines, or lifestyle changes.
  • Screening trials test new ways for detecting diseases or health conditions.
  • Diagnostic trials study or compare tests or procedures for diagnosing a particular disease or condition.
  • Treatment trials test new treatments, new combinations of drugs, or new approaches to surgery or radiation therapy.
  • Behavioural trials evaluate or compare ways to promote behavioural changes designed to improve health.
  • Quality of life trials (or supportive care trials) explore and measure ways to improve the comfort and quality of life of people with conditions or illnesses.

 

References

  1. Lesaffre, E. and Verbeke, G. (2005). Clinical Trials and Intervention Studies. In Wiley StatsRef: Statistics Reference Online (eds N. Balakrishnan, T. Colton, B. Everitt, W. Piegorsch, F. Ruggeri and J.L. Teugels). https://doi.org/10.1002/9781118445112.stat06670
  2. Medical Research Council “Developing and evaluating complex interventions: new guidance” http://www.sphsu.mrc.ac.uk/Complex_interventions_guidance.pdf
  3. Chow SC, Chang M. Adaptive design methods in clinical trials – a review. Orphanet Journal of Rare Diseases 2008;3:11. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2422839/pdf/1750-1172-3-11.pdf
  4. Kadam P, Bhalerao S. Sample size calculation. International Journal of Ayurveda Research 2010;1(1):55–57. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/
  5. Fleming TR. Surrogate endpoints and FDA’s accelerated approval process. Health Affairs 2005;24(1):67–78. PMID: 15647217
  6. Friedman LM, Furberg CD, DeMets DL. Study population. In: Fundamentals of Clinical Trials, 4th edition, 2010, pp. 455–65
  7. Jadad AR. Randomised Controlled Trials: A User’s Guide. 1998. Available from https://www1.cgmh.org.tw/intr/intr5/c6700/OBGYN/F/Randomized%20tial/chapter3.html, accessed December 7, 2021
  8. Stolberg HO, Norman G, Trop I. Randomized controlled trials. American Journal of Roentgenology 2004;183(6):1539–44. https://doi.org/10.2214/ajr.183.6.01831539
  9. Bhide, A, Shah, PS, Acharya, G. A simplified guide to randomized controlled trials. Acta Obstet Gynecol Scand 2018; 97: 380– 387, https://doi.org/10.1111/aogs.13309