Research can be either observational or interventional. Interventional studies involve making a change, or intervening, in order to study the outcome of that change. The intervention is introduced immediately after the baseline period with the aim of affecting an outcome.
The intervention itself is the aspect that is being manipulated in your research. Not all research has an intervention: for example, epidemiological studies are observational and may simply monitor data that is already being collected. Alternatively, an observational cohort study might follow different groups to see what happens, such as following a group of people who drink alcohol and another group who do not, and then assessing them during the follow-up period to see whether any differences can be measured between the two groups.
However, if you are introducing an intervention (whether you are giving your participants a drug or vaccine, providing training, sending them health information in the form of a text message, performing a type of surgery, offering counselling, or anything else), then you have an interventional study. A randomised controlled clinical trial is a specific type of intervention study in which two or more groups are given differing interventions.
What is the definition of a trial?
The WHO definition of a clinical trial is given as follows:
‘For the purposes of registration, a clinical trial is any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes. Clinical trials may also be referred to as interventional trials. Interventions include but are not restricted to drugs, cells and other biological products, surgical procedures, radiologic procedures, devices, behavioural treatments, process-of-care changes, preventive care, etc. This definition includes Phase I to Phase IV trials.’
The intervention could be distant from the actual patients. For example, you could randomise community health workers in Africa into two groups, train one group in a new method of teaching mothers how to give antimalarial treatments, and withhold the specialised training from the other group: this is still a trial, because patients are seeing health workers who have been assigned to one group or the other.
Ensuring Consistency and Quality of the Intervention
Whether your intervention is a drug, a type of counselling, or anything else, it needs to be the same throughout the trial, and this needs careful consideration during trial design.
For example, if your trial is to test whether text messages are successful as an intervention to remind patients to take medication, your intervention is the text message. Here, it will be very easy to ensure consistency for all participants in the research: they will either receive or not receive the text message, and you can ensure that the text is the same every single time.
However, if your intervention is a type of counselling, it would be much harder to ensure consistency across all subjects, so you would probably need to create a framework so that the main elements can be consistently applied. For example, you would want to ensure that all participants had the same number of sessions, that each session was the same length, and that all counsellors in your trial were working together to ensure consistency in their approach. It may be that you would prepare specific options, such as specific applications of Cognitive Behavioural Therapy, to ensure that the participants’ experiences were as similar as possible to one another.
If your intervention is a drug, there are other things to consider. Perhaps you would like to compare two common pain relief drugs, A and B. Even if they are commonly available, you’d still need to ensure that your entire intervention supply was the same throughout the trial. The drug would also need to be correctly stored, accounted for, and managed (for example, perhaps the drug should not be exposed to temperatures above +20 degrees: how will you transport it and ensure that this temperature limit is maintained?). How will you ensure that the right amount of the intervention reaches your trial sites and is correctly stored there?
Interventional Trials
There are two types of intervention studies, namely randomised controlled trials and non-randomised or quasi-experimental trials.
Loosely speaking, an interventional trial involves selecting subjects with a particular characteristic and splitting them into those receiving an intervention and those receiving no intervention (the control group). In a randomised trial, participants (volunteers) are assigned to exposures purely by chance. The comparison of the outcomes of the two groups at the end of the study period is an evaluation of the intervention.
Intervention studies are not limited to clinical trials but are broadly used in many research studies such as sociological, epidemiological and psychological studies as well as public health research.
Aside from the ability to remove bias, another advantage of randomised trials is that, if they are conducted properly, they are likely to be able to detect even small to moderate effects of the intervention. This is something that is difficult to establish reliably from observational studies. They also eliminate confounding bias, since randomisation tends to create groups that are comparable for all factors that influence the outcome, whether known, unknown, or difficult to measure, so that the only difference between the two groups is the intervention.
They can also be used to establish the safety, cost-effectiveness and acceptability of an intervention. Randomised clinical trials have disadvantages too. They are not always ethical: if the sample size is too small to answer the question, time is wasted and patients are included in a trial that is of no benefit to them or to others. The results can also be statistically significant but clinically unimportant. Lastly, the results may not be generalisable to the broader community, since those who volunteer tend to be different from those who do not.
Double blind randomised controlled trials are considered the gold standard of clinical research because they are one of the best ways of removing bias in clinical trials. If both the participants and the researchers are blinded as to the exposure the participant is receiving, it is known as a “double-blinded” study.
Characteristics of an Intervention Study
Target Population
The first step in any intervention study is to specify the target population, which is the population to which the findings of the trial should be extrapolated. This requires a specific definition of the subjects prior to selection, as set out in the inclusion and exclusion criteria. The exclusion criteria specify the types of patients who must be excluded for reasons that would confound your results: for example, they are very old or very young (which may affect how the drug works), they are pregnant and you are not yet sure whether the drug is safe in pregnancy, they are currently in another trial, they have another medical condition that might affect their involvement, or any other reason that affects their participation. Inclusion criteria clarify who should be in the trial: for example, males and females between the ages of 18 and 50 who have X disease, and so on.
Those who are eventually found to be both eligible and willing to enrol in the trial compose the actual “study population” and are often a relatively selected subgroup of the experimental population.
Participants in an intervention study are very likely to differ from non-participants in many ways. The fact that the subgroup of participants may not be representative of the entire experimental population will not affect the validity of the trial, but it may affect the ability to generalise the results to the target population.
It is important to obtain baseline data and/or to ascertain outcomes for subjects who are eligible but unwilling to participate. Such information is extremely valuable to assess the presence and extent of differences between participants and non-participants in a trial. This will help in judging whether the results among trial participants can be generalised to the target population.
Sample Size
Sample size, simply put, is the number of participants in a study. It is a basic statistical principle that the sample size be defined before starting a clinical study so as to avoid bias in the interpretation of the results. If there are too few subjects in a study, the results cannot be generalised to the population, as the sample will not be representative of the target population. Further, the study may not be able to detect differences between the test groups, making the study unethical.
On the other hand, if more subjects than required are enrolled in a study, we put more individuals at risk of the intervention, also making the study unethical as well as wasting precious resources. A key principle of sampling is that every individual in the chosen population should have an equal chance of being included in the sample. Also, the choice of one participant should not affect the chance of choosing another; hence the need for random sample selection.
The calculation of an adequate sample size thus becomes crucial in any clinical study and is the process by which we calculate the optimum number of participants required to arrive at an ethically and scientifically valid result. Factors to be considered while calculating the final sample size include the expected drop-out rate, an unequal allocation ratio, and the objective and design of the study. The sample size always has to be calculated before initiating a study and as far as possible should not be changed during the course of a study.
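As an illustration of how such a calculation works in practice, the sketch below (Python, with entirely hypothetical numbers) applies the standard normal-approximation formula for comparing two proportions and then inflates the result for an expected drop-out rate.

```python
# Sketch: per-group sample size for comparing two proportions
# (standard normal-approximation formula; numbers are hypothetical).
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group n for detecting p1 vs p2 with a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # e.g. 0.84 for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2
    # Inflate so that the sample *completing* the trial is still large enough.
    return math.ceil(n / (1 - dropout))

# Hypothetical example: control event rate 30%, hoped-for rate 20%,
# 10% expected drop-out -> about 323 participants per group.
print(sample_size_two_proportions(0.30, 0.20, dropout=0.10))
```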
Power
It is important that an intervention study is able to detect the anticipated effect of the intervention with a high probability. To this end, the necessary sample size needs to be determined such that the power is high enough. In clinical trials, the minimum value nowadays regarded as demonstrating adequate power is 0.80. This means that the researcher is accepting that one time in five (that is, 20%) they will miss a real difference.
This false negative rate is the proportion of real differences that are erroneously reported as absent and is referred to in statistics by the letter β. The “power” of the study is then equal to (1 − β) and is the probability of detecting a difference when there actually is one.
For pivotal or large studies, the power is sometimes set at 90% to reduce the possibility of a “false negative” result to 10%.
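In the standard notation, these quantities are related as follows:

```latex
\alpha = P(\text{type I error}) = P(\text{false positive})
\beta  = P(\text{type II error}) = P(\text{false negative})
\text{Power} = 1 - \beta \qquad \text{(e.g. } \beta = 0.20 \Rightarrow \text{Power} = 0.80\text{)}
```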
Study endpoints and outcome measures
To evaluate the effect of the intervention, a specific outcome needs to be chosen. In the context of clinical trials, this outcome is called the endpoint. It is advisable to choose one endpoint, the primary endpoint, to make the likelihood of measuring this accurately as high as possible. The study might also measure other outcomes, and these are secondary endpoints. Once the primary endpoint has been decided, then deciding how the outcome that provides this endpoint is measured is the central focus of the study design and operation.
The choice of the primary endpoint is critical in the design of the study. Where the trial is intended to provide pivotal evidence for regulatory approval for marketing of drugs, biologics, or devices, the primary goal typically is to obtain definitive evidence regarding the benefit-to-risk profile of the experimental intervention relative to a placebo or an existing standard-of-care treatment. One of the most challenging and controversial issues in designing such trials relates to the choice of the primary efficacy endpoint or outcome measure used to assess benefit. Given that such trials should provide reliable evidence about benefit as well as risk, the primary efficacy endpoints should preferably be clinical efficacy measures that capture unequivocal, tangible benefit to patients. For example, for life-threatening diseases, one would like to determine the effect of the intervention on mortality or on a clinically significant measure of quality of life, such as relief of disease-related symptoms, improvement in ability to carry out normal activities, or reduced hospitalisation time.
In many instances, it may be possible to propose alternative endpoints (that is, “surrogates” or surrogate markers) to reduce the duration and size of the trials. A common approach has been to identify a biological marker that is “correlated” with the clinical efficacy endpoint (meaning that patients having better results for the biological marker tend to have better results for the clinical efficacy endpoint) and then to document the treatment’s effect on this biomarker. In oncology, for example, one might attempt to show that the experimental treatment regimen induces tumour shrinkage, delays tumour growth in some patients, or improves levels of biomarkers such as carcinoembryonic antigen (CEA) in colorectal cancer or prostate-specific antigen (PSA) in prostate cancer. Although these effects do not prove that the patient will derive symptom relief or prolongation of survival, such effects on the biomarker are of interest because it is well known that patients with worsening levels of these biological markers are at greater risk of disease-related symptoms or death. However, demonstrating treatment effects on these biological “surrogate” endpoints, while clearly establishing biological activity, may not provide reliable evidence about the effects of the intervention on clinical efficacy.
In the illustration above using biomarkers for cancer treatment, if the biomarker does not lie in the pathway by which the disease process actually influences the occurrence of the clinical endpoint, then affecting the biomarker might not, in fact, affect the clinical endpoint. Also, there may be multiple pathways through which the disease process influences the risk of the clinical-efficacy endpoints. If the proposed surrogate endpoint lies in only one of these pathways and if the intervention does not actually affect all pathways, then the effect of treatment on clinical efficacy endpoints could be over- or underestimated by the effect on the proposed surrogate.
In summary, a well-designed trial will have one primary endpoint and possibly several secondary endpoints. The study is powered to answer the question that is measured by the outcome for the primary endpoint. The measurement of this outcome needs to be standardised, and its importance must be well understood by everyone on the study team. A well-designed and well-set-up trial is able to measure this primary outcome accurately and consistently between staff members, between points in time (that is, the same way on the first visit as on the last visit 12 months later), and between different sites in multi-centre studies.
Randomisation
Randomisation offers a robust method of preventing selection bias, but it may sometimes be unnecessary and other designs preferable; however, the conditions under which non-randomised designs can yield reliable estimates are very limited. Non-randomised studies are most useful where the effects of the intervention are large or where the effects of selection, allocation and other biases are relatively small. They may be used for studying rare adverse events, which a trial would have to be implausibly large to detect.
Where simple randomisation is likely to lead to unequal group sizes in a small study, the participants might instead be randomised in small blocks of, for example, four participants, in which equal numbers of control and intervention allocations (in this case two of each) are randomly ordered within each block (see the sketch below). This means that you will not end up with a significantly unequal allocation in the study overall.
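A minimal sketch of this permuted-block approach (Python; the arm labels and block size are illustrative):

```python
# Sketch: permuted-block randomisation with a block size of four,
# two intervention (I) and two control (C) allocations per block.
import random

def block_randomisation(n_participants, block=("I", "I", "C", "C"), seed=42):
    # Seeding makes the allocation list reproducible and checkable.
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        current = list(block)
        rng.shuffle(current)        # random order within each block
        allocations.extend(current)
    return allocations[:n_participants]

# Even if recruitment stops mid-block, the arms can differ by at most two.
print(block_randomisation(10))      # e.g. ['C', 'I', 'I', 'C', ...]
```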
For more information on Randomisation, visit: http://www.bmj.com/content/316/7126/201
Ethical Considerations
There are clear ethical considerations regarding the sample size, as discussed above. However, whether a study is considered to be ethical or unethical is a subjective judgement based on cultural norms, which vary from society to society and over time.
Ethical considerations are more important in intervention studies than in any other type of epidemiological study.
For instance, in trials involving an intervention, it would be unethical to use a placebo as a comparator if there is already an established treatment of proven value. It would also be unethical to enrol more participants than are needed to answer the question set by the trial. Conversely, it would be unethical to recruit so few participants that the trial could not answer the question.
To be ethical a trial also needs to have equipoise – this means that the trial is answering a real question and so it is scientifically justified. This means that there’s no evidence for the intervention yet in the specific circumstances, so nobody truly knows whether it has an effect. For example, you would not be in equipoise if you were assessing paracetamol as a pain relief drug against a placebo; there is already information suggesting that paracetamol is an acceptable pain reliever for low level pain, so this research would be unethical because some patients would be given a placebo when a perfectly viable alternative is known. In this case, it might be preferable to test a new compound pain relief against paracetamol in patients with low level pain.
Therefore intervention trials are ethically justified only in a situation of uncertainty, when there is genuine doubt concerning the value of a new intervention in terms of its benefits and risks. The researcher must have some evidence that the intervention may be of benefit, for instance, from laboratory and animal studies, or from observational epidemiological studies. Otherwise, there would be no justification for conducting a trial.
Evaluating an Intervention
Best practice is to develop interventions systematically, using the best available evidence and appropriate theory, then to test them using a carefully phased approach, starting with a series of pilot studies targeted at each of the key uncertainties in the design, and moving on to an exploratory and then a definitive evaluation. The results should be disseminated as widely and persuasively as possible, with further research to assist and monitor the process of implementation.
In practice, evaluation takes place in a wide range of settings that constrain researchers’ choice of interventions to evaluate and their choice of evaluation methods. Ideas for complex interventions emerge from various sources, including: past practice, existing evidence, policy makers or practitioners, new technology, or commercial interests. The source may have a significant impact on how much leeway the investigator has to modify the intervention or to choose an ideal evaluation design. In evaluating an intervention it is important not to rush into making a decision as strong evidence may be ignored or weak evidence rapidly taken up, depending on its political acceptability or fit with other ideas about what works.
One should be cautious about ‘blanket’ statements regarding which designs are suitable for which kinds of intervention (e.g. ‘randomised trials are inappropriate for community-based interventions, psychiatry, surgery, etc.’). A design may rarely be used in a particular field, but that does not mean it cannot be used; rather, the researcher will need to make a decision on the basis of the specific characteristics of their study, such as the expected effect size and the likelihood of selection and other biases.
A crucial aspect of evaluating an intervention is the choice of outcomes from the trial. The researcher will need to determine which outcomes are most important and which are secondary, as well as how to deal with multiple outcomes in the analysis. A single primary outcome, with a small number of secondary outcomes, is the most straightforward from the point of view of the statistical analysis. However, this may not represent the best use of the data. A good theoretical understanding of the intervention, derived from careful development work, is key to choosing suitable outcome measures.
It is equally important that a researcher remains alert to the possibility of unintended, and possibly adverse, consequences. Consideration should also be given to the sources of variation in outcomes, and a subgroup analysis may be required.
As far as possible, it is important to bear in mind the decision-makers (national or local policy-makers, opinion leaders, practitioners, patients, the public, etc.) and whether the evidence is likely to persuade them, especially if it conflicts with deeply entrenched values.
An economic evaluation should be included if at all possible, as this will make the results far more useful for decision-makers. Ideally, economic considerations should be taken fully into account in the design of the evaluation, to ensure that the cost of the study is justified by the potential benefit of the evidence it will generate.
Types of Randomised Clinical Designs
Simple or parallel trials use the most elementary form of randomisation, which can be achieved by merely tossing a coin. However, coin-tossing should be discouraged in clinical studies because it cannot be reproduced or checked.
The alternative is to use a table of random numbers or a computer-generated randomisation list. The disadvantage of simple randomisation is that it may result in markedly unequal numbers of subjects being allocated to each group. Simple randomisation may also lead to a skewed composition of factors that may affect the outcome of the trial: for instance, in a trial involving both sexes, there may be too many subjects of the same sex in one arm. This is particularly true in small studies, as the simulation below illustrates.
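The scale of the problem is easy to estimate by simulation; this sketch (Python, with illustrative numbers) counts how often a simple coin-toss allocation of 20 participants produces a 14-versus-6 split or worse.

```python
# Sketch: how often does simple (coin-toss) randomisation of 20
# participants give a 14 vs 6 split or worse? (illustrative numbers)
import random

rng = random.Random(0)
trials, n, imbalanced = 100_000, 20, 0
for _ in range(trials):
    arm_a = sum(rng.randint(0, 1) for _ in range(n))  # 1 = arm A
    if arm_a >= 14 or arm_a <= 6:
        imbalanced += 1
print(f"{imbalanced / trials:.1%}")   # roughly 11-12% of such small trials
```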
Factorial trials can be used to improve efficiency in intervention trials by testing two or more hypotheses simultaneously. Some factorial studies are more complex, involving a third or fourth factor. The study design is such that subjects are first randomised to intervention A or B to address one hypothesis, and then, within each intervention, there is a further randomisation to intervention C or D to evaluate a second question. The advantage of this design is its ability to answer more than one question in a single trial. It also allows the researcher to assess interactions between interventions, which cannot be achieved in single-factor studies. The resulting groups are shown below.
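For the simplest (2 × 2) case described above, the four resulting groups are:

                   Intervention C    Intervention D
  Intervention A       A + C             A + D
  Intervention B       B + C             B + D

Every participant contributes to both comparisons: A versus B uses all subjects, and so does C versus D, which is the source of the design’s efficiency.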
Crossover trials, as the name suggests, are trials in which each subject acts as his or her own control by receiving at least two interventions. A subject receives the test intervention and the standard intervention (or placebo) in different periods of the trial, and the order in which the interventions are received is alternated. The crossover design is not limited to two interventions: researchers can design crossover studies involving three interventions, such as two treatments and a control arm. The order in which each individual receives the interventions should be determined by random allocation, and there should be a washout period before the next intervention is administered to avoid any “carry-over” effects. The design is therefore only suitable where the interventions have no long-term effect, for example where the study drug has a short half-life. Since each subject acts as his or her own control, the design eliminates inter-subject variability, and therefore fewer subjects are required. Crossover designs are consequently used in early-phase studies such as pharmacokinetic studies.
Cluster trials are trials in which an intervention is allocated to groups of people, or clusters, and compared against a control. Sometimes this is done by geographical area, community or health centre, and the design is mainly used to address public health concerns. An example would be testing the effect of education versus a control in reducing deaths among subjects who have suffered a heart attack.
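A minimal sketch of cluster allocation (Python; the health-centre names are hypothetical):

```python
# Sketch: cluster randomisation - whole health centres, not individuals,
# are allocated to intervention or control (names are hypothetical).
import random

clinics = ["Clinic A", "Clinic B", "Clinic C",
           "Clinic D", "Clinic E", "Clinic F"]
rng = random.Random(7)
rng.shuffle(clinics)
intervention, control = clinics[:3], clinics[3:]
print("Intervention clusters:", intervention)
print("Control clusters:", control)
# Note: outcomes of patients within a cluster are correlated, so the
# analysis (and the sample size calculation) must account for clustering.
```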
Adaptive design is sometimes referred to as “flexible design”. It allows adaptations to the trial and/or its statistical procedures after initiation without undermining the validity and integrity of the trial; in other words, the study design can be modified as data accrue. The purpose is not only to identify the clinical benefits of the test treatment efficiently, but also to increase the probability of success of clinical development.
Among the benefits of adaptive designs are that they reflect medical practice in the real world, they are ethical with respect to both the efficacy and the safety of the test treatment under investigation, and they are efficient in both the early and the late phases of clinical development. The main drawback, however, is the concern over whether the p-value or confidence interval for the treatment effect obtained after modification is reliable or correct. In addition, the use of adaptive design methods may lead to a totally different trial that is unable to address the scientific or medical questions the trial set out to answer. There is also the risk of introducing bias in subject selection or in the way the results are evaluated. In practice, commonly seen adaptations include, but are not limited to: a change in sample size or in allocation to treatments; the deletion, addition, or change of treatment arms; a shift in the target patient population, such as changes in inclusion/exclusion criteria; a change in study endpoints; and a change in study objectives, such as the switch from a superiority to a non-inferiority trial.
Prior to adopting an adaptive design, it is prudent to discuss it with the regulators, both to establish the level of modification that will be acceptable to them and to understand the regulatory requirements for review and approval.
Adaptive trial design can be used in rare, life-threatening diseases with unmet medical needs, as it speeds up the clinical development process without compromising safety and efficacy. Commonly considered strategies in adaptive design methods include adaptive seamless Phase I/II studies, in which several doses or schedules are run at the same time while schedules or doses that prove ineffective or toxic are dropped. Similar approaches can be used for seamless Phase II/III studies.
An equivalence trial is one in which a new treatment or intervention is tested to see whether it is equivalent to the current treatment. It is becoming difficult to demonstrate that a particular intervention is better than an existing control, particularly in therapeutic areas where there has been vast improvement in the drug development process. The goal of an equivalence study is to show that the intervention is no worse than an existing treatment, or that it is less toxic, less invasive or has some other benefit.
It is important, however, to ensure that the active control selected is an established standard treatment for the indication being studied, and that it is used at the dose and in the formulation proven to be effective.
The studies that demonstrated the benefit of the control against placebo must be sufficiently recent that no important medical advances or other changes have occurred since. The populations in which the control was tested should also be similar to those planned for the new trial, and the researcher must be able to specify what they mean by equivalence at the start of the study.
A non-inferiority trial is one in which a new treatment or intervention is tested to see whether or not it is non-inferior to the current gold standard. The requirements are similar to those for equivalence studies: there should be similarities in the populations, the concomitant therapy and the dosage of the interventions.
It is difficult to show statistically that two therapies are identical, as an infinite sample size would be required. Therefore, if the intervention falls sufficiently close to the standard, as defined by reasonable pre-specified boundaries, the intervention is deemed no worse than the control.
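In notation, writing θ for the true difference between the new treatment and the standard (on a scale where larger values favour the new treatment) and Δ for the pre-specified non-inferiority margin, the hypotheses being tested are:

```latex
H_0 : \theta \le -\Delta \quad \text{(the new treatment is inferior by at least } \Delta\text{)}
H_1 : \theta > -\Delta \quad \text{(the new treatment is non-inferior)}
```

Non-inferiority is typically concluded when the lower limit of the confidence interval for θ lies above −Δ.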
References
Emmanuel G and Geert V, ‘Clinical Trials and Intervention Studies’.
http://www.wiley.com/legacy/wileychi/eosbs/pdfs/bsa099.pdf
‘Intervention trials’.
http://www.iarc.fr/en/publications/pdfs-online/epi/cancerepi/CancerEpi-7.pdf
‘Intervention studies’.
http://www.drcath.net/toolkit/intervention-studies
Medical Research Council, ‘Developing and evaluating complex interventions: new guidance’.
http://www.sphsu.mrc.ac.uk/Complex_interventions_guidance.pdf
Chow S and Chang M, ‘Adaptive design methods in clinical trials – a review’, Orphanet Journal of Rare Diseases 2008; 3: 11.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2422839/pdf/1750-1172-3-11.pdf
Kadam P and Bhalerao S, ‘Sample size calculation’, International Journal of Ayurveda Research 2010; 1(1): 55-57.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/
Fleming TR, ‘Surrogate Endpoints and FDA’s Accelerated Approval Process’, Health Affairs 2005; 24(1): 67-78.
http://content.healthaffairs.org/content/24/1/67.full
Friedman LM, Furberg CD and DeMets DL, ‘Study Population’, in Fundamentals of Clinical Trials, 4th edition, 2010, chapter 4, pp. 55-65.