Duration: 21 minutes
Presented by Predrag Klasnja, PhD
Our speakers were kind enough to answer audience questions that had been left unanswered during the discussion period.
- Question from Anne Berman – You mentioned that this kind of testing is practical. What would be the first step? How easy is it to get funding?
The first step is deciding whether this style of testing is useful and appropriate for the kinds of questions one is trying to answer. Beyond that, we can think of “practical” in different ways. The kind of testing I was describing often does not require any more participants or resources than a pilot study would. But micro-randomization does require a mechanism for doing sequential randomization and for observing proximal outcomes (and other variables of interest) for those randomizations. This can be done in different ways, from very low-tech approaches (e.g., creating a randomization protocol in Excel and using it to micro-randomize text messages via a service like Bulk SMS) to building the infrastructure for micro-randomization within the apps we want to test. We are working to make the latter style of work easier, but it is possible to do a micro-randomized trial of an interesting intervention concept with no programming at all. Both Eric’s and my students have done it. In terms of funding, funding for micro-randomized trials is growing quickly (an R01 that is purely an MRT just got funded on the first submission!), in part because it follows on the work that Linda Collins, Susan Murphy, and others have done to convey the importance of other optimization methods (factorial designs and SMARTs, in particular) to both NIH and the behavioral science community as a whole. So we are not seeing the same kind of pushback to this method in review as we were seeing when optimization methods first came out.
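To illustrate the low-tech end of what is described above, the sketch below generates a simple micro-randomization schedule: at every decision point, each participant is independently randomized to receive a text message or not, with a fixed seed so the protocol is reproducible. This is a minimal illustration of the general idea only; the function name, parameters, and flat dictionary layout are this sketch's assumptions, not a protocol from the talk, and a real MRT would also need time-varying randomization probabilities, availability checks, and proximal-outcome logging.

```python
import random

def make_mrt_schedule(participant_ids, days, decision_points_per_day,
                      p_treat=0.5, seed=42):
    """Generate a micro-randomization schedule (illustrative sketch).

    At each decision point, each participant is independently randomized
    to receive a prompt (send_message=True) with probability p_treat.
    """
    rng = random.Random(seed)  # fixed seed keeps the protocol reproducible
    schedule = []
    for pid in participant_ids:
        for day in range(1, days + 1):
            for dp in range(1, decision_points_per_day + 1):
                schedule.append({
                    "participant": pid,
                    "day": day,
                    "decision_point": dp,
                    "send_message": rng.random() < p_treat,
                })
    return schedule

# Hypothetical example: 3 participants, a 7-day trial, 5 decision points per day
schedule = make_mrt_schedule(["P01", "P02", "P03"],
                             days=7, decision_points_per_day=5)
print(len(schedule))  # 3 participants x 7 days x 5 points = 105 randomizations
```

A table like this could just as easily be produced in Excel with a `RAND()` column, which is the spirit of the "no programming at all" option mentioned above; the message for each `send_message=True` row would then be sent through an SMS service.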
- Question from Reyhaneh Yousafi – Within the context of digital interventions, how important is it that the applied instruments and measurement methods meet the criteria for validity and reliability?
This is an important but difficult question. Given how digital interventions are used, namely in the midst of people’s daily lives and often in short bursts of interaction (e.g., on the bus), one is constantly balancing the need for rigor in measurement against the need for usability. So one-question EMA items have become quite common in digital interventions, because individuals tolerate them much better than longer questionnaires, and they can be collected in many different ways. For instance, one can ask such questions via a micro-EMA on a smart watch, or embed them into other interactions (e.g., planning) that individuals do within the intervention.
- Question from Jovana Stojanovic – What are the challenges of implementing behavioural tech interventions in different populations and settings, for instance young adults versus older individuals, men versus women, or low- versus high-income settings?
As with any intervention, you want to understand the population you are trying to support as deeply as possible and then design the technology to meet their needs. I am not sure this is that different for technology than for any other class of interventions, other than that some of the barriers and constraints to be understood will have to do with comfort with technology, access, attitudes toward technology, and so on. But the process of conducting formative work, whether via CBPR or user-centered design, is similar to what is needed for any other style of intervention.
- Question from Jovana Stojanovic – How do you compete with the large number of e-health interventions made available by private industry directly to consumers without prior testing in a controlled setting?
We don’t. At least as I see my work, the point of developing digital interventions in research is to build up an evidence base that can then be used to develop commercial-grade interventions. We can never compete with Fitbit, Apple, etc. on the polish of our interventions. But what we can do is build tools that investigate important questions, yielding evidence that can be used to build better commercial tools. It’s a completely different game than the one Apple et al. are playing.
- Question from Anne Berman – How do you measure proximal behavioral outcomes? By self-report only? Or sensor or camera validation? How important is it to validate self-report in this context?
It depends on the outcome. If passive measurement is possible, it can often be collected with less missingness than self-report. For physical activity interventions, many of our proximal outcomes can be measured in this way, as can physiological metrics such as resting heart rate. For psychosocial constructs (e.g., self-efficacy), some self-report is usually needed, in which case we try to find ways to collect it with minimal participant burden. How much validation is needed will depend on the specifics of the project and what is being measured; it is hard to answer that part in the abstract.
- Question from Anne Berman – How do you explain the effect of open-ended planning in comparison to picking a plan? I can think of self-efficacy, behavioral activation, or self-determination mechanisms. What are your thoughts?
My best guess is that the amount of attention involved plays a key role. People took substantially longer to write a plan than to pick one from a list. It is possible that this additional attention encoded the plan more robustly, which then had downstream effects on plan execution.
PLEASE NOTE: Although many questions were submitted by conference participants, only those for which answers were obtained are shared here.