Please note that the conference was conducted entirely in English. The content offered is therefore available only in English.
View the full session (1 h 44 min)
- Led by Chris Noone, PhD
Pilot vs feasibility studies: how to tell them apart? (6:40)
Conducting and analyzing pilot and feasibility studies (26:11)
Pilot Trials in Health-Related Behavioral Intervention Research: Problems, Solutions, and Recommendations (49:03)
Our speakers kindly answered audience questions that had been left unanswered during the discussion period.
Please note that, unless otherwise indicated, the speakers' answers for this session have been combined into a single response.
- The current CONSORT guideline on PAFS focuses on randomised studies. What is the role of non-randomised or simple pre-post designs in the PAFS world?
There are several types of PAFS (some randomized, some non-randomized, surveys, qualitative studies, etc.), all focusing on the evaluation of feasibility. Pre-post designs are a form of non-randomized design and may also be considered PAFS if their primary focus is on feasibility. Whilst the CONSORT extension for randomised pilot and feasibility trials is, as the name indicates, designed as a checklist for randomised designs, many of its items are relevant to non-randomised studies. The important thing to remember is that the objectives are all about uncertainty, and the design, outcome measures, analyses, and reporting must all focus on these.
- Do all pilot studies have to be randomised?
No, not all pilot studies have to be randomized. For example, one could do a pilot study of a large observational cohort study to assess whether recruitment for such a large cohort would be feasible. A randomised pilot study is likely to be most appropriate when many of the basic uncertainties, such as specification of the intervention, have been clarified and the remaining uncertainties are about logistics such as recruitment, retention, and randomization approaches. But you can also test other remaining uncertainties as part of that trial.
- If we don't have an RCT feasibility trial (we have a pre-post design, for example), can we still use an RCT design for the main study? Or should we conduct an RCT feasibility study first?
The short answer to the first part is "it depends": if the team thinks there is information on feasibility from the literature on related studies, then one could proceed to the main study. It also depends on whether the pre-post study provides good indirect evidence on feasibility for those parameters where there is some uncertainty. In the absence of useful information on feasibility, conducting a feasibility RCT may actually be necessary to gather information that may enhance the design and ultimately the success of the main trial. In general, conducting a pilot trial, either external to the main trial or as the first part of the main trial (known as an internal pilot study), is to be recommended.
- Before doing the main study, should we do a pilot study or a feasibility study? What about sample size?
Empirical evidence shows that recruitment challenges are the main reason why most studies stop prematurely. Thus, evidence on feasibility is imperative as part of good design for any study. One could argue that it is ethically, scientifically, and economically important to ensure the feasibility of a study prior to doing it. The sample size is an essential element of designing studies: it specifies the amount of information needed to address the primary question of the study, whether it is on feasibility or effectiveness, so that one has confidence that the results are trustworthy. There are no black-and-white rules about the correct sample size for a pilot trial, as it depends on the main objectives. However, there is a lot of guidance, which can be accessed on our website (www.pilotandfeasibilitystudies.qmul.ac.uk). Remember the analysis of a pilot study should be mainly descriptive and should focus on confidence intervals.
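As an illustration of the descriptive, confidence-interval-focused analysis recommended above, here is a minimal sketch of estimating a pilot recruitment rate with a Wilson score interval. The figures (24 consenters out of 60 eligible patients) and the helper name `wilson_ci` are hypothetical, not taken from the session:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion (e.g. a pilot recruitment rate).

    Reported descriptively, not tested: the interval shows how precisely the
    pilot pins down the rate the main trial will depend on.
    """
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical pilot: 24 of 60 eligible patients consented
lo, hi = wilson_ci(24, 60)
print(f"Recruitment rate 40%, 95% CI {lo:.1%} to {hi:.1%}")
```

A wide interval like this is itself the finding: it tells the team how uncertain the recruitment assumption for the main trial still is.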
- What is the panel’s opinion on the collection of the main target outcome data (e.g., weight, mortality) in PAFSs?
Collection of data on clinical outcomes is essential in PAFS, but the focus will be on whether it is possible to collect the data, not on the absolute values of the clinical measures. So part of the feasibility assessment may focus on whether such data can be collected from all participants at all time points, and whether they are complete and appear accurate. The absolute values may sometimes be used as secondary outcomes in the PAFS to help provide initial estimates of effect, but remember that their analysis should be mainly descriptive, should focus on confidence intervals, and should not use statistical testing to compare across groups.
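To illustrate what "descriptive, confidence intervals, no group-comparison testing" can look like for a secondary clinical outcome, here is a minimal sketch. The weight-change data, the helper name `mean_diff_ci`, and the rough critical value t = 2.0 are all hypothetical placeholders, not from the session:

```python
from math import sqrt
from statistics import mean, stdev

def mean_diff_ci(a, b, t=2.0):
    """Descriptive interval for a between-group mean difference.

    No p-value is computed: a pilot analysis estimates, it does not test.
    t = 2.0 is a rough large-sample critical value; use the exact t quantile
    for the actual degrees of freedom in real work.
    """
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return diff, diff - t * se, diff + t * se

# Hypothetical weight change (kg) in the two pilot arms
intervention = [-2.1, -1.4, -3.0, -0.5, -2.2]
control = [-0.3, 0.4, -1.1, 0.2, -0.8]
diff, lo, hi = mean_diff_ci(intervention, control)
print(f"Mean difference {diff:.2f} kg (approx. 95% CI {lo:.2f} to {hi:.2f})")
```

The point of reporting the interval rather than a p-value is that the pilot's job is to inform the design of the main trial (e.g. a plausible effect range), not to declare effectiveness.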
- Are there minimum components/elements required to consider a study a feasibility study? Is any framework or guide available for designing this type of study, one that creates standardization and allows better-quality studies? How many elements would you advise including in any one PAFS?
If you look at our paper on defining feasibility and pilot studies (Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, Bond CM. Defining feasibility and pilot studies in preparation for randomised controlled trials: using consensus methods and validation to develop a conceptual framework. PLOS ONE 2016;11(3):e0150205. DOI:10.1371/journal.pone.0150205), it will help you understand the many uncertainties that need to be clarified before a main trial can be undertaken; think about study design, population size, eligibility criteria, recruitment, retention, follow-up period, setting, intervention specification, intervention acceptability, choice of outcomes, randomization, allocation concealment, blinding, etc. Basically, you need to be confident that everything you write into your main trial protocol can be delivered. It is not possible to say how many elements you can or should include at any one time, but it is likely that elements around acceptability and specification can be tested in a non-randomized early study, whilst logistical elements associated with conducting a trial, such as recruitment and randomization, will be tested together in a pilot trial.
- How are the pilot and feasibility studies going to help us conduct a better-optimized main trial? Are pilot studies helping us to have the ethical and funding aspects proceed more smoothly?
Basically you need to be confident that everything you write into your main trial protocol can be delivered. If you have already determined the size of the available eligible population and know the likely consent rate, willingness to be randomised and the retention rate, then your main trial is much more likely to be successfully completed. If you can provide this evidence when you apply for funding, and for ethical approval, these applications will be stronger.
- Most pilot studies have no budget allocation. Do you think the main study should include a budget estimate and a valid analysis plan for pilot studies?
If possible, it is best to seek funding for a pilot study. Often grant-giving bodies with limited funds are keen to fund a pilot study that will pump-prime a bigger definitive study, as this adds value to what they are contributing. It is better for them to do this than to fund a small study of little inherent value. Sometimes experienced researchers can apply for large programme grants that include a series of work programmes culminating in a final definitive trial.
- Do you have any suggestions around issues related to funding and publishing for PAFS? Especially for more junior career researchers.
Often grant-giving bodies with limited funds are keen to fund a pilot study that will pump-prime a bigger definitive study, as this adds value to what they are contributing. These small awards are ideal opportunities for more junior researchers to gain experience as grant holders, supported by experienced researchers as co-applicants. Ideally, all research should be published to add to the body of literature on a topic and allow all researchers to learn from it. Journals are increasingly aware of the value of pilot and feasibility work, and there is a journal dedicated to publishing such studies (pilotfeasibilitystudies.biomedcentral.com).
- If the results of PAFS are not necessarily published in journal articles as a means of publicizing them and letting others know about the preliminary findings, what is the point, from a more general perspective, of conducting them?
Ideally, all research should be published to add to the body of literature on a topic and allow all researchers to learn from it. Journals are increasingly aware of the value of pilot and feasibility work, and there is a journal dedicated to publishing such studies (pilotfeasibilitystudies.biomedcentral.com).
- How do we engage the healthcare team to promote and help with recruitment in a behavioural study? It's been a challenge at my institution, where the clinical routine is already difficult and we sometimes depend on professionals who are not directly involved in the study.
I think it is important that the clinicians/health care team members can see the value of the behavioural change you are proposing. If you have evidence confirming the health need and the acceptability of the proposed intervention to all stakeholders, it is easier to engage them. Also, having a member of the team as a co-applicant and involving them from the start is very helpful, so that you understand the different perspectives; trying to plan both an intervention and a research design that fit within their daily routines without adding to their workload will always help. Take time to think about how you promote the study to them, including the wording and length of written information. If you can follow up with personal visits, that is ideal, and if people don't seem keen I would say don't try to persuade them, as my experience is that these are the sites that will fail to recruit!
- After identifying the main limitations in the design of an intervention through a pilot trial, is it always necessary to take a step back and run a new pilot trial with the improved design? Would it be OK to make changes and proceed to the main trial if they are small? What advice would you give investigators who conduct a PAFS and it doesn't work out?
You don't always need to go back if a pilot trial has identified further limitations or something has not worked. It really depends on how much information you have managed to collect. If you are confident that you understand how to address the newly identified issue, then it is not necessary to go back, but you need to be aware that going back might be best in the long run. So one piece of advice is to include in your pilot trial a facility to interview participants, to understand what went well and what went less well.
- Should investigators explicitly avoid combining feasibility and plausibility objectives into a single trial? Since they have such different objectives, should they be conducted separately?
The number of elements you can or should include in a single pilot trial has to be decided on a case-by-case basis, but some elements, such as acceptability of an intervention, could certainly be combined with plausibility, as could logistical elements associated with conducting a trial, such as recruitment and randomization.
- Is there any move toward synthesising (systematic review/meta-synthesis) pilot and feasibility studies in particular topic areas?
At the moment there is no accepted consensus on this, but my own view is that as long as the trial is clearly described and meets the eligibility criteria for the systematic review, there is no reason why its data should not be used to add to the available body of evidence, with descriptive outcomes incorporated into a meta-analysis.
- Participant Comment: What’s being discussed is transformational. Transformation is not comfortable. At the same time, doing things the same way over and over again will result in the same (lack of) progress that the field of BMed has realized.
- Participant Comment: Ken was right in the imperative that we need to educate reviewers, since their time in training has long passed and they only know how to do things the way they were taught – stone tool making.
PLEASE NOTE: Although many questions were submitted by conference participants, only the questions for which we obtained answers are shared here.