Search Results for: smart

Adaptive Intervention for Adolescent Marijuana Use

Researchers in this study are developing an adaptive treatment for adolescent marijuana users. Using a SMART, they are studying the use and combination of several efficacious treatments, including behavioral therapy, contingency management, behavioral parent training, and working memory training.

  • PI: Alan J. Budney
  • Location: Dartmouth College
  • Funding: NIDA Project R01DA015186

Adaptive Treatment for Growth Suppression in Children with ADHD

Studies show that the use of stimulants to manage ADHD in youth leads to a reduction in height gain. This study uses a SMART design to examine the effectiveness of temporary breaks from medication and of caloric supplementation for treating stimulant-induced weight and growth suppression.

  • PI: James G. Waxmonsky
  • Location: Florida International University
  • Funding: NIMH Project R01MH083692
 

Adaptive Treatment for Pregnant Women Who Abuse Drugs

Researchers have developed an intensive relapse-prevention program for pregnant women who abuse drugs. A SMART design is being used to develop an adaptive intervention in which the intensity and scope of the relapse-prevention program are adjusted based on the woman's evolving status.

  • PIs: Hendrée Jones, Margaret Chisolm
  • Location: Johns Hopkins University
  • Funding: NIDA Project R01DA014979

Recommended Readings for Adaptive Interventions

Introduction to adaptive interventions

Collins, L. M., Murphy, S. A., & Bierman, K. L. (2004). A conceptual framework for adaptive preventive interventions. Prevention Science, 5, 185-196.

Murphy, S. A. & McKay, J. R. (2004). Adaptive treatment strategies: An emerging approach for improving treatment effectiveness. Clinical Science (Newsletter of the American Psychological Association Division 12, Section III: The Society for the Science of Clinical Psychology). Winter 2003/Spring 2004.

Murphy, S. A., Lynch, K. G., McKay, J. R., Oslin, D., & TenHave, T. (2007). Developing adaptive treatment strategies in substance abuse research. Drug and Alcohol Dependence, 88(2), S24-S30.

Murphy, S. A. & Almirall, D. (2009). Dynamic treatment regimes. In M. W. Kattan (Ed.), Encyclopedia of medical decision making (pp. 419-422). Thousand Oaks, CA: Sage.

Murphy, S. A., Collins, L. M., & Rush, A. J. (2007). Customizing treatment to the patient: Adaptive treatment strategies (Editorial). Drug and Alcohol Dependence, 88(2), S1-S72.

Introduction to SMART

The first paper in this list is the one we recommend as an initial introduction to SMARTs.

Lei, H., Nahum-Shani, I., Lynch, K., Oslin, D., & Murphy, S. A. (2012). A “SMART” design for building individualized treatment sequences. Annual Review of Clinical Psychology, 8, 14.1 – 14.28.

Nahum-Shani, I., Qian, M., Almirall, D., Pelham, W., Gnagy, B., Fabiano, G., … Murphy, S. A. (2012). Experimental design and primary data analysis methods for comparing adaptive interventions. Psychological Methods, 17, 457-477.

Almirall, D., Nahum-Shani, I., Sherwood, N. E., & Murphy, S. A. (2014). Introduction to SMART designs for the development of adaptive interventions: With application to weight loss research. Translational Behavioral Medicine, 4(3), 260-274.

Murphy, S. A., Lynch, K. G., McKay, J. R., Oslin, D., & TenHave, T. (2007). Developing adaptive treatment strategies in substance abuse research. Drug and Alcohol Dependence, 88(2), S24-S30.

Murphy, S. A., Collins, L. M., & Rush, A. J. (2007). Customizing treatment to the patient: Adaptive treatment strategies (Editorial). Drug and Alcohol Dependence, 88(2), S1-S72.

Almirall, D., Compton, S. N., Gunlicks-Stoessel, M., Duan, N., & Murphy, S. A. (2012). Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Statistics in Medicine, 31(17), 1887-1902.

Experimental design methods (including SMART)

Oetting, A. I., Levy, J. A., Weiss, R. D. & Murphy, S. A., (2011). Statistical methodology for a SMART design in the development of adaptive treatment strategies. In P.E. Shrout (Ed.), Causality and psychopathology: Finding the determinants of disorders and their cures (pp.179-205). Arlington, VA: American Psychiatric Publishing.

Murphy, S. A., (2005). An experimental design for the development of adaptive treatment strategies. Statistics in Medicine, 24(10), 1455–1481.

Collins, L. M., Nahum-Shani, I., & Almirall, D. (2014). Optimization of behavioral dynamic treatment regimens based on the sequential, multiple assignment, randomized trial (SMART). Clinical Trials, 11, 426-434.

Almirall, D., Lizotte, D., & Murphy, S. (2012). Comment: SMART design issues and the consideration of opposing outcomes: Discussion of “Evaluation of viable dynamic treatment regimes in a sequentially randomized trial of advanced prostate cancer” by Wang, Rotnitzky, Lin, Millikan, and Thall. Journal of the American Statistical Association, 107, 509-512.

Advanced statistical topics (including statistical inference)

Chakraborty, B., & Moodie, E. E. (2013). Statistical methods for dynamic treatment regimes (pp. 31-52). Springer.

Kosorok, M. R., & Moodie, E. E. (2015). Adaptive treatment strategies in practice: Planning trials and analyzing data for personalized medicine. Philadelphia, PA: SIAM.

Murphy, S. A., Van Der Laan, M. J., Robins, J. M., & The Conduct Problems Prevention Research Group (2001). Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96, 1410-1423.

Laber, E. B., Lizotte, D. J., Qian, M., Pelham, W. E., & Murphy, S. A. (2014). Dynamic treatment regimes: Technical challenges and applications. Electronic Journal of Statistics, 8(1), 1225-1272.

Gunter, L., Zhu, J., & Murphy, S. A. (2011). Variable selection for qualitative interactions. Statistical Methodology, 8(1), 42-55.

Chakraborty, B., & Murphy, S. A. (2009). Inference for nonregular parameters in optimal dynamic treatment regimes. Statistical Methods in Medical Research, 19(3), 317-343.

Murphy, S. A., & Bingham, D. (2009). Screening experiments for developing dynamic treatment regimes. Journal of the American Statistical Association, 104(458), 391-408.

Data analysis with SMART

Nahum-Shani, I., Qian, M., Almirall, D., Pelham, W., Gnagy, B., Fabiano, G., … Murphy, S. A. (2012). Experimental design and primary data analysis methods for comparing adaptive interventions. Psychological Methods, 17, 457-477.

Nahum-Shani, I., Qian, M., Almirall, D., Pelham, W., Gnagy, B., Fabiano, G., … Murphy, S. A. (2012). Q-learning: A data analysis method for constructing adaptive interventions. Psychological Methods, 17, 478-494.

Shortreed, S. M., Laber, E., Stroup, T. S., Pineau, J., & Murphy, S. A. (2014). A multiple imputation strategy for sequential multiple assignment randomized trials. Statistics in Medicine, 33(24), 4202-14.

Lu, X., Lynch, K. G., Oslin, D. W., & Murphy, S. A. (2015). Comparing treatment policies with assistance from the structural nested mean model. Biometrics, 72(1), 10-19. doi: 10.1111/biom.12391

Ertefaie, A., Wu, T., Lynch, K. G., & Nahum-Shani, I. (2015). Identifying a set that contains the best dynamic treatment regimes. Biostatistics, 17, 135-148.

Chakraborty, B., & Murphy, S. A. (2014). Dynamic treatment regimes. Annual Review of Statistics and Its Application, 1, 447-464.

Adaptive interventions and SMART in the field

Gunlicks-Stoessel, M., Mufson, L., Westervelt, A., Almirall, D., & Murphy, S. (2015). A pilot SMART for developing an adaptive treatment strategy for adolescent depression. Journal of Clinical Child & Adolescent Psychology, 45(4), 480-494. doi:10.1080/15374416.2015.1015133

Kasari, C., Kaiser, A., Goods, K., Nietfeld, J., Mathy, P., Landa, R., Murphy, S. A., & Almirall, D. (2014). Communication interventions for minimally verbal children with autism: A sequential multiple assignment randomized trial. Journal of the American Academy of Child and Adolescent Psychiatry, 53(6), 635-646.

August, G. J., Piehler, T. F., & Bloomquist, M. L. (2014). Being “SMART” about adolescent conduct problems prevention: Executing a SMART pilot study in a juvenile diversion agency. Journal of Clinical Child & Adolescent Psychology, 1-15.

Kilbourne, A. M., Almirall, D., Eisenberg, D., Waxmonsky, J., Goodrich, D. E., Fortney, J. C., … Thomas, M. R. (2014). Protocol: Adaptive implementation of effective programs trial (ADEPT): Cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implementation Science, 9, 132.

Kilbourne, A. M., Almirall, D., Goodrich, D. E., Lai, Z., Abraham, K. M., Nord, K. M., & Bowersox, N. (2014). Enhancing outreach for persons with serious mental illness: 12-month results from a cluster randomized trial of an adaptive implementation strategy. Implementation Science, 9(1), 163.

 

Last updated: 2019

Conceptual Introduction to TVEM for ILD

Why does TVEM matter?

As data collection technologies such as smartphones and pedometers create richer and denser datasets, TVEM will allow researchers to answer new questions and to answer existing questions with greater nuance than was possible just a few years ago. Problem behaviors (e.g., substance use) and their associations with predictors (e.g., craving, mood) change over time, and TVEM helps us understand these changes. This applies to smoking, obesity, substance use, HIV disease course, and any other area of behavioral science that collects ecological momentary assessment (EMA) data or other forms of ILD.

What are ILD and how many data points do I need?

There is no hard-and-fast rule for exactly what constitutes ILD. Generally, ILD are defined as data with more than 30 or 40 measurements over time. But in truth, it is not the number of observations that matters; it is the relationships you are trying to model and whether you have enough data points to measure the change. As a simple example, consider the graph below. Imagine it represents someone's craving for food on a scale of 1 to 5 over the course of two days.

The more dynamic and complex the relationship we are trying to model, the more valuable intensive longitudinal data become. The number of observations matters only insofar as you have enough measurement occasions to accurately determine the shape of the curve. Once the shape of the curve is captured accurately, adding 50 more data points will not reveal new features, though it will increase your certainty about the curve's shape. Note: data spacing is also important; large gaps between measurements can decrease a TVEM's usefulness.

[Figure: simulated craving ratings (1-5) over two days, sampled at 2, 5, and 17 measurement occasions]

If we measured craving two times, we would perceive that the cravings were low and stable for the duration of the two days. If we measured craving five times, we would perceive that the cravings increased steadily, leveled off, and then fell steadily. If we measured more intensively (17 times in this example), we would see that the person’s cravings fluctuated wildly throughout the time span.
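
To see this concretely, here is a small SAS sketch (all names and values are made up) that draws one underlying craving curve at 2, 5, and 17 equally spaced occasions over two days; plotting each panel shows how sparse sampling hides fluctuations that dense sampling reveals.

  /* Illustrative sketch (all values simulated): one underlying craving
     curve "measured" at 2, 5, and 17 occasions over 48 hours. */
  data sampled;
    do n_obs = 2, 5, 17;
      do i = 0 to n_obs - 1;
        hour = 48 * i / (n_obs - 1);                 /* equally spaced occasions */
        craving = 3 + 1.5*sin(hour/5)*cos(hour/17);  /* made-up smooth curve     */
        output;
      end;
    end;
  run;

  proc sgplot data=sampled;
    by n_obs;                          /* one panel per sampling density */
    series x=hour y=craving / markers;
  run;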

If I have enough data points, how many subjects do I need?

[Figure: the same estimated time-varying effect curve with wide (smaller study) versus narrow (larger study) 95% confidence bands]

In the %TVEM SAS macro, a time-varying effect is reported not as a p-value but as a curve. As you might expect, the width of the confidence bands around the curve depends on the number of subjects in the study. In the first figure, the estimated effect appears to vary over time. However, the confidence bands are so wide that a straight line can be drawn between them: look at the dashed line overlaid on the graph. Because it stays within the confidence bands at every point, the true effect could be that straight line, meaning a linear effect that does not vary with time. We therefore cannot describe the relationship based on this result; tighter confidence bands are needed to accurately identify the shape of the curve. In the second figure, from a larger study, the curve has been modeled in exactly the same shape, but the confidence bands are much narrower. There is no way to draw a straight line through the curve while staying within the bands, so with this result we can make statements about the time-varying effects.

If the macro does not provide a p-value, how do I publish my results?

In many journals in the behavioral sciences, researchers report a p-value to express whether an association is significant. However, a time-varying effect is an irregularly shaped curve made up of an infinite number of points, so a single overall p-value cannot meaningfully summarize the association that was modeled. The entire curve is needed to show at what times the association is or is not statistically significant and to show how the association changes.

For a time-invariant parameter, significance can be expressed through either a p-value or a confidence interval (as seen in the figure on the right, where the red dot is the estimate and the blue dots are the confidence limits); the two convey the same information. To show variation over time, however, a p-value is not useful, but a dynamic confidence interval is. TVEM results therefore present an estimated coefficient and 95% confidence intervals for the entire curve, which can be used to determine statistical significance at any point on the curve. That is why the TVEM macro expresses results graphically.
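
For example, once the estimated curve and its confidence limits are in a dataset, a single data step can flag the time points at which the association is significant. A minimal sketch, assuming a hypothetical output dataset CURVE with variables time, estimate, lower, and upper:

  /* Flag time points where the 95% band excludes zero, i.e., where the
     time-varying association is statistically significant. */
  data sig;
    set curve;                                 /* hypothetical plotted output */
    significant = (lower > 0) or (upper < 0);  /* 1 where the band excludes 0 */
  run;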

 

Last updated: May 12, 2020

2018 Summer Institute: Analysis of Ecological Momentary Assessment Data

Topic: Analysis of Ecological Momentary Assessment Data Using Multilevel Modeling and Time-Varying Effect Modeling
Presenters: Stephanie Lanza and Michael Russell
Date: June 28 – 29, 2018
Venue: Penn State, University Park, PA

 

Workshop information

The goal of this two-day workshop was to provide attendees with the theoretical background and applied skills necessary to identify and address innovative and interesting research questions in intensive longitudinal data streams such as daily diary and ecological momentary assessment (EMA) data using multilevel modeling (MLM) and time-varying effect modeling (TVEM). By the end of the workshop, participants fit several multilevel and time-varying effect models in SAS and had the opportunity to fit and interpret preliminary models using their own data. Workshop time was spent in lecture, software demonstrations, computer exercises, and discussion. Participants were provided with a hard copy of all lecture notes, select computer exercises and output, and suggested reading lists for future reference. SAS software was used in the course, including native SAS procedures for analyzing multilevel models (PROC MIXED and PROC GLIMMIX) and the SAS TVEM macro, a downloadable supplement to SAS developed at the Penn State Methodology Center. Participants also applied the concepts learned in class to their own data, and the presenters were available for consultation during that period.

Prerequisites

Basic familiarity with linear and logistic regression and the SAS software will be helpful.

Computer requirements

Participants were strongly encouraged to bring a laptop so that they could conduct the computer exercises and analyze their own data. To conduct analyses at the workshop, SAS Version 9 for Windows needed to be installed on the laptop prior to arrival. In addition, approximately one week prior to the workshop, participants were sent an email requesting that they download the TVEM SAS macro suite. Participants needed to verify that any data use agreements permitted them to bring their own data to the workshop. Simulated data were made available to those who did not bring their own data.

Topics covered

—  Conceptual introduction to multilevel modeling (MLM) and time-varying effect modeling (TVEM)
—  Two-level MLM for daily diary and ecological momentary assessment (EMA) data
—  Extension to three-level MLM for EMA data
—  TVEM for EMA data: overview and applications (time of day, time relative to event, time since treatment)
—  Analyses using participants’ own data, with presenters available for consultation



How to attend

The workshop is complete. Please check back in 2019. Enrollment was limited to 40 participants to maintain an informal atmosphere and to encourage interaction among the presenters and participants. We gave priority to individuals who were involved in drug abuse prevention and treatment research or HIV research, who had the appropriate statistical background to get the most out of the Institute, and for whom the topic was directly and immediately relevant to their current work. We also aimed to maximize geographic and minority representation. The application window has closed. Once accepted, participants were emailed instructions about how to register. The registration fee of $395 for the two-day Institute covered all instruction, program materials, and breakfast and lunch each day. A block of rooms at the Nittany Lion Inn was available for lodging. Participants were encouraged to bring their own laptop computers for conducting exercises. Review our refund, access, and cancellation policies.



Presenters

Stephanie Lanza, Ph.D.

Professor of Biobehavioral Health; Director of the Edna Bennett Pierce Prevention Research Center; Principal Investigator at The Methodology Center, Penn State. Dr. Lanza has a background in research methods, human development, and substance use and comorbid behaviors, with more than 100 papers appearing in top methodological and applied journals. She is co-author of the book Latent Class and Latent Transition Analysis: With Applications in the Social, Behavioral, and Health Sciences and led the development of PROC LCA & PROC LTA, SAS procedures for fitting latent class and latent transition models. Her methodological research interests include advances in finite mixture modeling and time-varying effect modeling to address innovative research questions in behavioral research, particularly those best addressed using intensive longitudinal data. She is passionate about disseminating these methods to health, behavioral, and social science researchers and has organized many NIH-funded dissemination conferences, taught more than 30 intensive hands-on workshops, and written tutorial articles to enable applied researchers to use the latest methods in their own work.

Michael Russell, Ph.D.

Assistant Professor of Biobehavioral Health; Investigator at The Methodology Center, Penn State. Dr. Russell’s research focuses on understanding the connections between stress, affect, and health behaviors in day-to-day life using advanced statistical modeling (multilevel and time-varying effect modeling) and ambulatory assessment methods (daily diaries, ecological momentary assessment (EMA), and wearable biosensors). He is currently leading a data collection effort that combines EMA and wearable alcohol biosensors to understand the causes and consequences of young-adult heavy drinking episodes in daily life. Dr. Russell has a strong commitment to teaching and mentoring other health researchers in advanced analytic methods, as evidenced by numerous invited talks and workshops on advanced MLM, TVEM, and the analysis of intensive longitudinal data. His work has been published in a variety of top journals, including Annals of Behavioral Medicine, Development and Psychopathology, Journal of Adolescent Health, Prevention Science, Drug and Alcohol Dependence, and Psychology of Addictive Behaviors.



Location

The Pennsylvania State University, University Park campus



Funding

Funding for this conference was made possible by award number R13 DA020334 from the National Institute on Drug Abuse. The views expressed in written conference materials or publications and by speakers and moderators do not necessarily reflect the official views and/or policies of the Department of Health and Human Services; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.



Archive

2017 – Statistical Power Analysis for Intensive Longitudinal Studies by Jean-Philippe Laurenceau and Niall Bolger

2016 – Ecological Momentary Assessment (EMA): Investigating Biopsychosocial Processes in Context by Joshua Smyth, Kristin Heron, and Michael Russell

2015 – An Introduction to Time-Varying Effect Modeling by Stephanie T. Lanza and Sara Vasilenko

2014 – Experimental Design and Analysis Methods for Developing Adaptive Interventions: Getting SMART by Daniel Almirall and Inbal Nahum-Shani

2013 – Introduction to Latent Class Analysis by Stephanie Lanza and Bethany Bray

2012 – Causal Inference by Donna Coffman

2011 – The Multiphase Optimization Strategy (MOST) by Linda Collins

2010 – Analysis of Longitudinal Dyadic Data by Niall Bolger and Jean-Philippe Laurenceau

2009 – Latent Class and Latent Transition Analysis by Linda Collins and Stephanie Lanza

2008 – Statistical Mediation Analysis by David MacKinnon

2007 – Mixed Models and Practical Tools for Causal Inference by Donald Hedeker and Joseph Schafer

2006 – Causal Inference by Christopher Winship and Felix Elwert

2005 – Survival Analysis by Paul Allison

2004 – Analyzing Developmental Trajectories by Daniel Nagin

2003 – Modeling Change and Event Occurrence by Judith Singer and John Willett

2002 – Missing Data by Joseph Schafer

2001 – Longitudinal Modeling with Mplus by Bengt Muthén and Linda Muthén

2000 – Integrating Design and Analysis and Mixed-Effect Models by Richard Campbell, Paras Mehta, and Donald Hedeker

1999 – Structural Equation Modeling by John McArdle

1998 – Categorical Data Analysis by David Rindskopf and Linda Collins

1997 – Hierarchical Linear Models and Missing Data Analysis by Stephen Raudenbush and Joseph Schafer

1996 – Analysis of Stage Sequential Development by Linda Collins, Peter Molenaar, and Han van der Maas

Conceptual Introduction to Analyzing ILD

Why do intensive longitudinal data (ILD) matter?

As data collection technologies such as smartphones and pedometers create richer and denser datasets, methods for ILD are needed that allow researchers to answer new questions and to answer existing questions with greater nuance than was possible just a few years ago. Problem behaviors (e.g., substance use) and their predictors (e.g., craving, mood) change over time, and methods like TVEM and FHLM can help us understand these changes. This applies to smoking, obesity, substance use, HIV disease course, and any other area of behavioral science that collects ecological momentary assessment (EMA) data or other forms of ILD.

What are ILD?


There is no hard-and-fast rule for exactly what constitutes ILD. Generally, ILD are defined as data with more than 30 or 40 measurements over time. But in truth, it is not the number of observations that matters; it is the constructs and relationships you are trying to model, the speed at which they change, and whether you have sufficiently dense data to capture the change.

How are ILD collected?


Intensive longitudinal data are collected in a variety of ways. These include but are not limited to
—  daily diaries, where participants typically record data 1x/day
—  ecological momentary assessments (EMA), where people are prompted to provide small amounts of data at numerous points throughout the day
—  wearable sensors, which can provide near-continuous streaming information throughout the day on measures such as heart rate, step counts, and skin conductance.

Although the influx of mobile and wearable technologies has created a plethora of possibilities for collecting ILD, interest in and collection of ILD are not strictly new endeavors. Prior to the introduction of smartphones and wearable devices, researchers collected ILD with paper-and-pencil diaries, telephone calls, pagers, and Palm Pilots. Currently, smartphones, tablets, and wearable biosensors dominate the landscape of ILD collection. Smartphones and tablets can be used to obtain brief questionnaire responses and global positioning system (GPS) coordinates to better understand shifts in mood, behavior, social interaction, and context as individuals go through their daily lives, and a wide variety of wearable sensors now allow real-time measurement of myriad dynamic biomarkers, including heart rate, motion and body position, step counts, skin conductance, and even alcohol intoxication.

What kinds of questions can ILD answer?


Just as they span a broad range of data types, ILD can be used to address a broad range of questions, many of which pertain to how contexts, moods, and behaviors change relative to themselves and relative to each other in people’s real everyday lives. Below are a few examples of questions that ILD are uniquely suited to answer.
—  How does nicotine craving change over the course of a day among cigarette smokers?
—  How does mood change over the course of the day on days with many stressors compared to days with fewer?
—  How does a person’s mood change leading up to and throughout a drinking episode?

These questions are valuable for examining change when data are collected intensively over a short time frame (from a few hours to a few days), the hallmark of ILD collection. Of course, these are only a handful of examples of questions that have been and are being asked using ILD. Importantly, emergent technologies and innovative analytic methods are constantly expanding the set of questions that ILD can answer.

How are ILD analyzed?


There are many ways one can analyze ILD. An important first step is to articulate your working model of change, which will then allow you to identify the analytic technique that may be best suited to allow that change to emerge.

For example, suppose you hypothesize (a) a simple, linear increase in drinking behavior on days when an individual is highly stressed compared to herself on days when she is not as highly stressed and (b) that the size of this increase will be larger for some people than it will be for others. Such an analysis may be performed using multilevel modeling, where the within-person association between stress and drinking can be estimated at the daily level, and between-person differences in the size of these associations can be estimated at the person level.
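
A minimal SAS sketch of such a model follows; this is an illustration rather than a prescribed analysis, and the dataset and variable names are hypothetical.

  /* Hedged sketch: long-format daily diary dataset DIARY, one row per
     person-day, with variables id, stress (ideally person-mean-centered
     so the coefficient is strictly within-person), and drinks. */
  proc mixed data=diary covtest;
    class id;
    model drinks = stress / solution;              /* (a) average within-person effect */
    random intercept stress / subject=id type=un;  /* (b) person-to-person variation
                                                      in the size of the effect      */
  run;

The fixed effect of stress addresses hypothesis (a); the variance of the random stress slope, tested via the COVTEST option, addresses hypothesis (b).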

In another example, suppose you are interested in how mood changes in the hours following the onset of a drinking episode, and you hypothesize that the mood will change in a complex, non-linear fashion. Such an analysis could be facilitated using time-varying effect modeling (TVEM), which will allow you to model complex, non-linear trajectories of change in mood throughout the duration of a drinking episode.
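
The %TVEM SAS macro is built for exactly this kind of analysis. As a hedged illustration of the underlying idea without the macro, the non-linear trajectory can be approximated in base SAS by regressing mood on a spline basis in time since the episode began; all dataset and variable names below are hypothetical.

  /* Sketch of a TVEM-style analysis: a flexible curve in time, fit as
     a spline, with a random intercept for repeated measures. */
  proc glimmix data=ema;
    class id;
    effect t_spl = spline(hours_since_onset / knotmethod=equal(7));
    model mood = t_spl / solution;
    random intercept / subject=id;                   /* within-person clustering */
    output out=curve pred=yhat lcl=lower ucl=upper;  /* trajectory and 95% band  */
  run;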

These are but two of many possible examples for how ILD can be analyzed. The analysis of ILD is a burgeoning research area – new techniques for answering more nuanced questions about dynamic constructs and their interrelations are continually coming online, and Methodology Center investigators will remain actively engaged in this exciting research area.

Suggestions for Planning a Grant Proposal Involving MOST

Here we offer some informal suggestions intended to help researchers who want to propose implementing MOST in their work.

We assume that you have a basic familiarity with factorial designs. If not, you may wish to read Chapters 3 and 4 in Collins (2018), which provide an introduction to factorial experiments aimed at those with a background in the RCT. You may also be interested in our “FAQ about factorial experiments.” If you are planning to conduct an optimization trial using a fractional factorial design, you may be interested in Chapter 5. If you are planning to conduct a sequential multiple-assignment randomized trial (SMART), please visit the Data Science For Dynamic Intervention Decision-Making Lab website for more information. If you are planning a microrandomized trial (MRT), please visit Susan Murphy’s site.

Collins, L.M. (2018).  Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST).  New York: Springer.

 

Offer evidence that MOST is better than the traditional approach in the long run

If your aim is to develop more effective and efficient behavioral interventions in a systematic fashion, it seems logical that MOST is a better long-run strategy than the traditional treatment package approach. But to convince the skeptics out there, hard evidence is necessary.

It is difficult to imagine conducting empirical research to compare MOST and the treatment package approach. One scenario would be randomly assigning, say, 100 behavioral scientists to use either MOST or the treatment package approach over a 10-year period, and then comparing the effectiveness and efficiency of the resulting interventions. That is obviously not practical.

Given that an empirical study to test the usefulness of MOST is out of the question, the next best thing is a statistical simulation. Some collaborators working in this area did an extensive statistical simulation to explore whether, when, and how MOST is better than the treatment package approach. You can read about it in

Collins, L. M., Chakraborty, B., Murphy, S. A., & Strecher, V. (2009). Comparison of a phased experimental approach and a single randomized clinical trial for developing multicomponent behavioral interventions. Clinical Trials, 6, 5-15. PMCID: PMC2711350

 

Describe MOST succinctly

MOST is a comprehensive framework, not an off-the-shelf procedure, so it is best not to try to cover the broader picture of MOST in your proposal. You have decided which aspects of it you are going to implement in your study, so confine your description to those parts of MOST. It may help to include a figure like Figure 1.1 in Collins (2018), BUT one developed specifically for YOUR project. For example, if you have already completed the preparation phase, consider graying it out or somehow indicating which phases of MOST are going to be completed in the proposed work.

Collins, L.M. (2018).  Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST).  New York: Springer.

 

Show that a factorial experiment will be more economical than the commonly used alternatives

If you are planning to conduct a factorial or fractional factorial experiment, it may be helpful to make the case in the application that the design you have chosen is the most economical one, compared to viable alternatives. This can be done succinctly by including a table that lists the design alternatives you considered and shows how many subjects and how many experimental conditions each would require. Some experimental designs use a lot of subjects compared to others, but require fewer experimental conditions. Others use fewer subjects but may require more experimental conditions. How economical an experimental design is in your situation depends on the cost associated with each experimental subject relative to the overhead associated with an experimental condition.  Moreover, if you know or can reliably estimate the costs associated with subjects and experimental conditions, you can express this comparison in terms of money.  This is discussed in Chapter 6 in Collins (2018) and illustrated in Tables 6.2 and 6.3.
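
For illustration only, suppose five two-level components are under consideration and that 352 subjects give adequate power for a single two-level comparison (this sample size is hypothetical). The comparison table might look like this:

  Design alternative                            Conditions   Total subjects
  Five separate two-arm experiments                     10             1760
  Single-factor comparative treatment design             6             1056
  Complete factorial (2^5)                              32              352
  Fractional factorial (2^(5-1))                        16              352

The factorial designs need the fewest subjects because every subject contributes to every effect estimate, whereas the designs with fewer conditions must recruit separate subjects for each comparison.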

If you are planning to include this rationale in your grant proposal, it is a good idea to read Chapter 6 in Collins (2018) carefully.  Although the numbers are easy to calculate (you may even be able to do the computations in your head), the rationale behind them is both subtle and complex. It is important to understand this rationale to ensure that you are applying the ideas correctly in your situation. Also, note that the expressions in Chapter 6 apply only to designs with two levels per factor.

The Methodology Center developed a SAS macro and an R package that can help you to select the most economical design from among four alternatives that are potentially useful in intervention science (individual experiments, single-factor experiments such as comparative treatment designs, factorial experiments, and fractional factorial experiments).

Planning a factorial experiment when there is a subgroup structure in the data (subjects are clustered within schools, clinics, etc.) is discussed in Dziak, Nahum-Shani, and Collins (2012).

Collins, L.M. (2018).  Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST).  New York: Springer.

Dziak, J. J., Nahum-Shani, I., & Collins, L. M. (2012). Multilevel factorial experiments for developing behavioral interventions: Power, sample size, and resource considerations. Psychological Methods, 17(2), 153. doi: 10.1037/a0026972 PMCID: PMC3351535

 

Justify the use of a fractional factorial experiment

If you are planning to use any type of factorial experiment, it has to be appropriate to address the research questions at hand. The first step, then, is to justify the use of a factorial experiment. Once that is accomplished, the next step is to justify the use of a fractional factorial rather than a complete factorial.

The best justification for the use of a fractional factorial design rather than a complete factorial design is economy. Remember that complete and fractional factorial designs are powered exactly the same, so there will be no savings in terms of subjects. In other words, any fractional factorial design requires the same number of subjects as the corresponding complete factorial design to maintain the same level of power. Fractional factorial designs involve fewer experimental conditions, so you should be able to argue that having fewer experimental conditions will result in savings of money, personnel, equipment, materials, logistical or management difficulties, etc. The more specific you can be about this in a grant proposal, the better.

Even when a complete factorial involves a lot of experimental conditions, a fractional factorial may not be more economical. For example, if you are conducting an internet-based experiment, it may not be that much more difficult to program 32 conditions than to program 16 conditions. (To repeat: Although this may be counterintuitive, the 32-condition experiment will require no more subjects than the 16-condition experiment.)
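
A quick arithmetic check makes the parenthetical concrete (the numbers are illustrative): with N = 320 subjects, a complete 2^5 factorial places 10 subjects in each of its 32 conditions, while a 2^(5-1) fractional factorial places 20 in each of its 16 conditions. Either way, every main effect compares the 160 subjects at one level of a factor with the 160 at the other, so power is unchanged; the fraction saves conditions, not subjects.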

When writing a proposal it is a good idea to be explicit about the trade-offs associated with using a fractional factorial design, namely aliasing of effects (for an explanation of aliasing, see Collins et al., 2009 or Chapter 5 in Collins, 2018). Whenever you remove conditions from a complete factorial design, aliasing occurs. When you select a fractional factorial design, you are choosing aliasing in a strategic fashion, in other words taking the position that the aliasing is an acceptable trade-off for increased economy.

It might help to remind reviewers that from one perspective, designs such as the familiar comparative treatment, constructive treatment, and dismantling designs also involve aliasing. These designs have properties that are not nearly as attractive as a good fractional factorial; see Collins, Dziak, and Li (2009).

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. Psychological Methods, 14, 202-224. PMCID: PMC2796056

Collins, L.M. (2018).  Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST).  New York: Springer.

 

Explain the rationale behind your choice of a fractional factorial design

There are a lot of different fractional factorial designs with different aliasing structures. (If you do not know what the term aliasing means, read Chapter 5 in Collins, 2018.) You can use software like PROC FACTEX in SAS to select a design. We offer a mini-tutorial about doing this in Collins et al. (2009).

You might want to consider a 2^(5-1) design. This design is attractive because it enables examination of five factors but requires only 16 experimental conditions (a complete factorial design would require 32 conditions, so this is a half fraction). The design has the following important characteristics:

  • each effect is aliased with ONE other effect (because the design is a half fraction); and
  • it is Resolution V; in this case, this means that each main effect is aliased with one four-way interaction, and each two-way interaction is aliased with one three-way (or higher-order) interaction.

This aliasing seems fairly easy to justify: how many people would argue that four-way interactions are likely to be large and important? If some of the two-way interactions are scientifically important, make sure that they are aliased with three-way interactions that are likely to be negligible in size.
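
If you want reviewers to see the design and its aliasing explicitly, PROC FACTEX (mentioned above) can generate both. A minimal sketch with placeholder factor names:

  /* Request a 16-run Resolution V design in five two-level factors and
     print the runs and alias sets. Factor names are hypothetical. */
  proc factex;
    factors norm resist mood social sleep;
    size design=16;               /* half fraction of the 32-run 2^5 */
    model resolution=5;           /* Resolution V                    */
    examine design aliasing;      /* inspect runs and aliasing       */
    output out=halffrac;
  run;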

Collins, L.M. (2018).  Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST).  New York: Springer.

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. Psychological Methods, 14, 202-224. PMCID: PMC2796056

 

Convince biostatisticians who may be reviewers that a factorial or fractional factorial experiment is a good idea

Many biostatisticians are steeped in the RCT tradition, but they are usually quick to see that factorial experiments have advantages in some situations when the argument is presented to them. These articles are likely to appeal to “card-carrying” biostatisticians:

Chakraborty, B., Collins, L. M., Strecher, V., and Murphy, S. A. (2009). Developing multicomponent interventions using fractional factorial designs. Statistics in Medicine, 28, 2687-2708.

Nair, V., Strecher, V., Fagerlin, A., Ubel, P., Resnicow, K., Murphy, A., Little, R., Chakraborty, B., & Zhang, A. (2008). Screening experiments and the use of fractional factorial designs in behavioral intervention research. American Journal of Public Health, 98, 1354-1359.

 

Convince reviewers that your factorial experiment has sufficient power, keeping in mind that some reviewers may believe that factorial experiments can never be sufficiently powered

Reviewers who are simply unfamiliar with factorial experiments often come into a review with an open mind. They are likely to be convinced by a clear argument about the appropriateness and economy of a factorial experiment. However, some reviewers ardently believe “facts” about factorial experiments that are wrong. In particular, some reviewers look at a 2^5 factorial experiment and see a 32-condition RCT. Because it would be nearly impossible to power a 32-condition RCT, they may conclude that it is impossible to power a 2^5 factorial experiment. Fortunately, as behavioral scientists re-learn the logic behind factorial experiments, there are likely to be fewer and fewer reviewers who draw this incorrect conclusion.

If you can spare the room in the proposal (sometimes a big “if”), it may be a good idea to include a brief tutorial covering the following points:

  • Explain that the factors are crossed, and include a table showing the experimental conditions in the design you are proposing to use (it is helpful to number the conditions). It may seem silly to include a table of experimental conditions, but many people are unsure about how to get from a list of factors to a set of experimental conditions. Such people will not be able to visualize your design without the help that a table will provide.
  • Based on the table of experimental conditions, illustrate how to compute a main effect estimate (e.g., “The main effect of Resistance Training is the mean of Experimental Conditions 1, 3, 5, and 7 minus the mean of Experimental Conditions 2, 4, 6, and 8”).
  • Point out that the purpose is NOT to compare the means of these conditions to each other directly, as it would be in an RCT. Instead, the experimental conditions will be combined in various ways to estimate main effects and interactions in the ANOVA. Importantly, because of the way conditions are combined, each main effect and interaction is based on the entire set of subjects. This is why factorial experiments make such economical use of subjects (you can cite Collins, Dziak, & Li, 2009 or Collins, 2018).

Here is an example paragraph you can use as a starting point. It is based on a 16-condition factorial experiment, either a complete 2^4 factorial or a fractional factorial with more than four factors, and assumes you have included a table, Table X, showing all the experimental conditions in the design. Suppose two of the factors are NORM and RESIST, and it has been determined that there will be sufficient power with a total N = 300:

We ask the reader to note that the factorial design in Table X should not be considered a 16-arm trial in which each condition is compared in turn to a control condition. Our interest is in tests of standard ANOVA main effects and interaction effects. These tests involve comparison of means computed across aggregates of experimental conditions. For example, the main effect of the NORM component will be tested by comparing the mean of the outcome variable for the 150 subjects who receive NORM (i.e., those in Conditions 1-8 in Table X) versus the mean of the outcome variable for the 150 subjects who do not receive NORM (i.e., those in Conditions 9-16). Similarly, the main effect of RESIST will be tested by comparing the mean of the outcome variable for the 150 subjects who receive RESIST (i.e., Conditions 2, 3, 5, 8, 9, 12, 14, 15) versus the mean of the outcome variable for the 150 subjects who do not receive RESIST (i.e., Conditions 1, 4, 6, 7, 10, 11, 13, 16). Analysis of data from a 16-arm RCT would require comparison of individual experimental conditions, and therefore would be grossly underpowered. However, because we are conducting a multi-factorial experiment and not an RCT, each effect estimate will involve all 16 of the conditions in Table X, thereby maintaining the power associated with all 300 subjects.
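
In analysis terms, the paragraph above describes ordinary factorial ANOVA contrasts. A minimal SAS sketch of that analysis, using two of the factors for brevity (the dataset and variable names are hypothetical):

  /* Each factor has levels 0 (off) and 1 (on). */
  proc glm data=trial;
    class norm resist;
    model outcome = norm|resist;   /* main effects and their interaction */
    lsmeans norm resist;           /* each mean aggregates half the
                                      conditions, so every subject
                                      informs every main effect          */
  run;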

Read a brief, informal webpage that explains factorial experiments for those trained primarily in RCTs. For a VERY brief article on this topic, see Collins, Dziak, Kugler, and Trail (2014). A more extensive introduction can be found in Chapters 3 and 4 in Collins (2018).

Collins, L.M. (2018).  Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST).  New York: Springer.

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. Psychological Methods, 14, 202-224. PMCID: PMC2796056

Collins, L. M., Dziak, J. J., Kugler, K. C., & Trail, J. B. (2014). Factorial experiments: Efficient tools for evaluation of intervention components. American Journal of Preventive Medicine, 47, 498-504.

 

Cite the evidence that implementing a complex optimization trial in a field setting can be manageable

With careful planning and organization, thorough training of staff, good management, and creative use of technology, it is possible to implement complex optimization trials, such as factorial experiments with many experimental conditions, in field settings successfully. It is a good idea to rely primarily on professional implementation staff rather than graduate students. Listed below are a few published articles that describe factorial experiments in varying stages of completion.

Bernstein, S.L., Dziura, J., Weiss, J., Miller, T., Vickerman, K.A., Grau, L.E., Pantalon, M.V., Abroms, L., Collins, L.M., & Toll, B. (2018). Tobacco dependence treatment in the emergency department: A randomized trial using the multiphase optimization strategy. Contemporary Clinical Trials, 66, 1-8. PMCID: PMC5851600

Celano, C.M., Albanese, A., Millstein, R.A., Mastromauro, C.A., Chung, W.-J., Legler, S., Park, E.R, Healy, B.C., Collins, L.M., Januzzi, J.L., & Huffman, J.C. (2018). Optimizing a positive psychology intervention to promote health behaviors following an acute coronary syndrome: The Positive Emotions after Acute Coronary Events-III (PEACE-III) randomized factorial trial. Psychosomatic Medicine, 80, 526-534. PMCID: PMC6023730 doi: 10.1097/PSY.0000000000000584

Cook, J. W., Collins, L. M., Fiore, M. C., Smith, S. S., Fraser, D., Bolt, D. M., et al. (2016). Comparative effectiveness of motivation phase intervention components for use with smokers unwilling to quit: a factorial screening experiment. Addiction, 111(1), 117-128. doi:10.1111/add.13161

Gwadz, M.V., Collins, L.M., Cleland, C.M., Leonard, N.R., Wilton, L., Gandhi, M., Braithwaite, R.S., Perlman, D.C., Kutnick, A., & Ritchie, A.S. (2017). Using the multiphase optimization strategy (MOST) to optimize an HIV care continuum intervention for vulnerable populations: A study protocol. BMC Public Health, 17, 383. PMCID: PMC5418718

Phillips, S.M., Cottrell, A., Lloyd, G.R., Penedo, F.J., Collins, L.M., Cella, D., Courneya, K.S., Ackermann, R.T., Siddique, J., & Spring, B. (2018). Optimization of a technology-supported physical activity intervention for breast cancer survivors: Fit2Thrive study protocol. Contemporary Clinical Trials, 66, 9-19. PMCID: PMC5828903

Schlam, T.R., Fiore, M.C., Smith, S.S., Fraser, S., Bolt, D.M., Collins, L.M., Mermelstein, R., Piper, M.E., Cook, J.W., Jorenby, D.E., Loh, W.-Y., & Baker, T.B. (2016). Comparative effectiveness of intervention components for producing long-term abstinence from smoking: A factorial screening experiment. Addiction, 111, 142-155. PMCID: PMC4692280 doi: 10.1111/add.13153

Strecher, V. J., McClure, J. B., Alexander, G. W., Chakraborty, B., Nair, V. N., Konkel, J. M., Greene, S. M., Collins, L. M., Carlier, C. C., Wiese, C. J., Little, R. J., Pomerleau, C. S., Pomerleau, O. F. (2008). Web-based smoking cessation programs: Results of a randomized trial. American Journal of Preventive Medicine, 34, 373-381.

 

Last updated: May 11, 2020

Recommended Reading for Optimizing Behavioral Interventions

Please note that the following books are available for free PDF download through many universities’ libraries.

Collins, L.M. (2018). Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST). New York: Springer.

Collins, L.M., & Kugler, K.C. (2018). Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics.  New York: Springer.

 

Introduction

Collins, L.M. (2018). Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST). New York: Springer.

Collins, L. M., Kugler, K. C., & Gwadz, M. V. (2015). Optimization of multicomponent behavioral and biobehavioral interventions for the prevention and treatment of HIV/AIDS. AIDS and Behavior, Suppl 1, 197-214.

 

Some Implementations of MOST

These articles describe the thinking behind some implementations of MOST or related approaches in field settings. These examples all describe factorial optimization trials, although other experimental designs are often used for optimization trials.

Baker, T. B., Collins, L. M., Mermelstein, R., Piper, M. E., Schlam, T. R., Cook, J. W., et al. (2016). Enhancing the effectiveness of smoking treatment research: conceptual bases and progress. Addiction, 111(1), 107-116. doi:10.1111/add.13154

Bernstein, S.L., Dziura, J., Weiss, J., Miller, T., Vickerman, K.A., Grau, L.E., Pantalon, M.V., Abroms, L., Collins, L.M., & Toll, B. (2018). Tobacco dependence treatment in the emergency department: A randomized trial using the multiphase optimization strategy. Contemporary Clinical Trials, 66, 1-8. PMCID: PMC5851600

Celano, C.M., Albanese, A., Millstein, R.A., Mastromauro, C.A., Chung, W.-J., Legler, S., Park, E.R, Healy, B.C., Collins, L.M., Januzzi, J.L., & Huffman, J.C. (2018). Optimizing a positive psychology intervention to promote health behaviors following an acute coronary syndrome: The Positive Emotions after Acute Coronary Events-III (PEACE-III) randomized factorial trial. Psychosomatic Medicine, 80, 526-534. PMCID: PMC6023730

Cook, J. W., Collins, L. M., Fiore, M. C., Smith, S. S., Fraser, D., Bolt, D. M., et al. (2016). Comparative effectiveness of motivation phase intervention components for use with smokers unwilling to quit: a factorial screening experiment. Addiction, 111(1), 117-128. doi:10.1111/add.13161

Gwadz, M.V., Collins, L.M., Cleland, C.M., Leonard, N.R., Wilton, L., Gandhi, M., Braithwaite, R.S., Perlman, D.C., Kutnick, A., & Ritchie, A.S. (2017). Using the multiphase optimization strategy (MOST) to optimize an HIV care continuum intervention for vulnerable populations: A study protocol. BMC Public Health, 17, 383. PMCID: PMC5418718

Kugler, K. C., Wyrick, D. L., Tanner, A. E., Milroy, J. J., Chambers, B., Ma, A., Guastaferro, K. M., and Collins, L. M. (2018). Using the multiphase optimization strategy (MOST) to develop an optimized online STI preventive intervention aimed at college students: Description of conceptual model and iterative approach to optimization. In Collins, L. M., & Kugler, K. C., Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York: Springer, pp. 1-21.

McClure, J. B., Derry, H., Riggs, K. R., Westbrook, E. W., St. John, J., Shortreed, S. M., Bogart, A., & An, L. C. (2012). Questions about quitting (Q(2)): Design and methods of a Multiphase Optimization Strategy (MOST) randomized screening experiment for an online, motivational smoking cessation intervention. Contemporary Clinical Trials, 33(5), 1094-1102. PMCID: PMC3408878

Pellegrini, C.A., Hoffman, S.A., Collins, L.M., & Spring, B. (2014).  Optimization of remotely delivered intensive lifestyle treatment for obesity using the multiphase optimization strategy:  Opt-IN study protocol.  Contemporary Clinical Trials. NOTE: See important corrigendum {Pellegrini, C. A., Hoffman, S. A., Collins, L. M., & Spring, B. (2015). Corrigendum to “Optimization of remotely delivered intensive lifestyle treatment for obesity using the multiphase optimization strategy: Opt-IN study protocol” [Contemp. Clin. Trials 38 (2014) 251–259]. Contemporary Clinical Trials, 45, 468 – 469. doi:10.1016/j.cct.2015.09.001}

Piper, M. E., Fiore, M. C., Smith, S. S., Fraser, D., Bolt, D. M., Collins, L. M., et al. (2016). Identifying effective intervention components for smoking cessation: a factorial screening experiment. Addiction, 111(1), 129-141. doi:10.1111/add.13162

Schlam, T. R., Fiore, M. C., Smith, S. S., Fraser, D., Bolt, D. M., Collins, L. M., et al. (2016). Comparative effectiveness of intervention components for producing long-term abstinence from smoking: a factorial screening experiment. Addiction, 111(1), 142-155. doi:10.1111/add.13153

Watkins, E., Newbold, A., Tester-Jones, M., Javaid, M., Cadman, J., Collins, L.M., Graham, J., & Mostazir, M. (2016). Implementing multifactorial psychotherapy research in online virtual environments (IMPROVE-2): Study protocol for a phase III trial of the MOST randomized component selection methods for internet cognitive-behavioural therapy for depression. BMC Psychiatry, 16, 345. PMCID: PMC5054552

 

Rationale for factorial optimization trials

These articles review practical aspects of experimental design relevant to intervention science and attempt to correct some pervasively held misconceptions; they may be useful as citations.

Collins, L.M. (2018). Chapter 3 in Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST). New York: Springer.

Collins, L.M., Dziak, J.J., Kugler, K.C., & Trail, J.B. (2014). Factorial experiments: Efficient tools for evaluation of intervention components. American Journal of Preventive Medicine, 47, 498-504.

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. Psychological Methods, 14(3), 202-224. PMCID: PMC2796056

Advanced topics in factorial optimization trials

Chakraborty, B., Collins, L. M., Strecher, V., & Murphy, S. A. (2009). Developing multicomponent interventions using fractional factorial designs. Statistics in Medicine, 28, 2687-2708. PMCID: PMC2746448

Dziak, J. J., Nahum-Shani, I., & Collins, L. M. (2012). Multilevel factorial experiments for developing behavioral interventions:  Power, sample size, and resource considerations. Psychological Methods, 17, 153. PMCID: PMC3351535

Nair, V., Strecher, V., Fagerlin, A., Ubel, P., Resnicow, K., Murphy, S. A., Little, R., Chakraborty, B., & Zhang, A. (2008). Screening experiments and the use of fractional factorial designs in behavioral intervention research. American Journal of Public Health, 98, 1354-1359. PMCID: PMC2446451

 

Design of Adaptive Interventions

Collins, L.M., Murphy, S.A., & Bierman, K. L. (2004). A conceptual framework for adaptive preventive interventions. Prevention Science, 5, 185-196.

Nahum-Shani, I., Smith, S. N., Spring, B.J., Collins, L.M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2018). Just-in-time adaptive interventions (JITAIs) in mobile health: Key components and design principles for ongoing health behavior support. Annals of Behavioral Medicine, 52, 446-452. PMCID: PMC5364076

Riley, W. T., Serrano, K. J., Nilsen, W., & Atienza, A. A. (2015). Mobile and wireless technologies in health behavior and the potential for intensively adaptive interventions. Current Opinion in Psychology, 5, 67–71.

 

Experimental Designs Useful in Optimization of Adaptive Interventions

Almirall, D., Nahum-Shani, I., Wang, L., & Kasari, C. (2018). Experimental designs for research on adaptive interventions: Singly and sequentially randomized trials. In Collins, L. M., & Kugler, K. C., Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York: Springer, pp. 89-120.

Collins, L. M., Nahum-Shani, I., & Almirall, D. (2014). Optimization of behavioral dynamic treatment regimens based on the sequential, multiple assignment, randomized trial (SMART). Clinical Trials, 11, 426-434. PMCID: PMC4257903

Dong, Y., Rivera D.E., Downs, D.S., Savage, J.S., Thomas, D.M., & Collins, L.M. (2013). Hybrid model predictive control for optimizing gestational weight gain behavioral interventions. Proceedings from the 2013 American Control Conference. 1970-1975. PMCID: PMC3856197.

Rivera, D. E., Hekler, E. B., Savage, J. S., & Symons Downs, D. (2018). Intensively adaptive interventions using control systems engineering: Two illustrative examples. In Collins, L. M., & Kugler, K. C., Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York: Springer, pp. 89-120.

Timms, K.P., Rivera, D.E., Collins, L.M., & Piper, M.E. (2014). Continuous-time system identification of a smoking cessation intervention. International Journal of Control, 87, 1423-1437.

Timms, K. P., Rivera, D. E., Collins, L. M., & Piper, M. E. (2013).  A dynamical systems approach to understand self-regulation in smoking cessation behavior change. Nicotine and Tobacco Research, 16, S159-168.

 

Practical considerations

Gallis, J. A., Bennett, G. G., Steinberg, D. M., Askew, S., & Turner, E. L. (2019). Randomization procedures for multicomponent behavioral intervention factorial trials in the multiphase optimization strategy framework: Challenges and recommendations. Translational Behavioral Medicine, 9(6), 1047-1056.

Huffman, J.C., Millstein, R.A., Celano, C.M., Healy, B.C., Park, E.R., & Collins, L.M. (2019). Developing a psychological-behavioral intervention in cardiac patients using the multiphase optimization strategy: Lessons learned from the field. Annals of Behavioral Medicine. doi: 10.1093/abm/kaz035

Piper, M. E., Schlam, T. R., Fraser, D., Oguss, M., & Cook, J. W. (2018). Implementing factorial experiments in real-world settings: Lessons learned while engineering an optimized smoking cessation treatment. In Collins, L. M., & Kugler, K. C., Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. New York: Springer, pp. 23-45.

Wyrick, D. L., Rulison, K. L., Fearnow-Kenney, M., Milroy, J. J., & Collins, L. M. (2014). Moving beyond the treatment package approach to developing behavioral interventions: Addressing questions that arose during an application of the multiphase optimization strategy (MOST). Translational Behavioral Medicine, 4, 252-259.

 

Last update: 2020

Micro-Randomized Trial FAQ

This page addresses frequently asked questions about MRTs. It begins with introductory questions and then turns to more in-depth questions for researchers planning an MRT.

Introduction to MRTs

What is the purpose of an MRT?

The purpose of an MRT is to provide data that can be used to construct a multi-component intervention. The MRT helps researchers answer questions such as whether to include a time-varying component in an intervention package and in which contexts delivering a component is most effective. For examples of the kinds of questions an MRT can be used to answer, see the HeartSteps example below. Importantly, MRTs are not confirmatory studies designed to evaluate an intervention; rather, they focus on selecting and optimizing the components to be delivered as part of an intervention package.

What are the elements of an MRT?

  • Intervention components: Anything about the mobile health intervention package that can be separated out for experimentation. Examples include reminders, motivational messages, reinforcement schedules, social support linkages, cognitive messages, type of avatar, delivery mechanism and so on.
  • Intervention options: Different levels of an intervention component. These levels can include multiple active options, or an option to deliver nothing. An MRT often involves investigating multiple components.
  • Distal outcome: The distal outcome in an MRT is the long-term clinical outcome, something you would want as an outcome in a confirmatory clinical trial. A researcher may also define a distal outcome that is measurable at the end of an MRT study.
  • Proximal outcome: The effect that an intervention component is intended to have in the near term. The proximal outcome is often a short-term version of the distal outcome or a potential mediator of the distal outcome. Each intervention component may target a different proximal outcome.
  • Decision points: Pre-determined times at which it might be useful to deliver an intervention component. Decision points can differ by intervention component.
  • Observations of context: These are variables of scientific interest observed at the time of the current decision point as well as summaries of variables observed prior to the decision point. They can include self-report measures, information captured with wearable sensors (location, weather, movement), information captured using device sensors (e.g., wireless scales participants use to weigh themselves), or recordings of the amount and type of interaction with a mobile application.
  • Availability conditions: Restrictions, based on current context, on when the mobile device might deliver an intervention option to an individual. Individuals would not be considered available in contexts where it is unsafe or inappropriate to deliver an intervention. For example, we do not want to send audible notifications in contexts in which an individual is operating a motor vehicle. Availability conditions can also be driven by concerns about overburdening individuals; for example, an individual might be considered unavailable if they have recently received a reminder or message.
  • Randomization probabilities: The intervention options are randomized with a pre-specified probability at each decision point at which an individual is available. (The sketch following this list shows how these elements fit together.)
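
To make these elements concrete, the sketch below writes them down as a small study configuration in Python. It is purely illustrative: the component, options, outcome, and probabilities are hypothetical and not taken from any particular trial.

    from dataclasses import dataclass

    @dataclass
    class MRTComponent:
        """Elements of one intervention component in an MRT (illustrative only)."""
        options: list[str]                      # intervention options (may include "none"); requires Python 3.9+
        proximal_outcome: str                   # near-term outcome the component targets
        decision_points: list[str]              # pre-determined times when delivery may help
        randomization_probs: dict[str, float]   # option -> pre-specified probability

    # A hypothetical self-monitoring reminder component:
    reminder = MRTComponent(
        options=["send_reminder", "no_reminder"],
        proximal_outcome="self_monitored_today",
        decision_points=["9am"],                # one decision point per day
        randomization_probs={"send_reminder": 0.5, "no_reminder": 0.5},
    )

    # The pre-specified probabilities must sum to 1 at each decision point.
    assert abs(sum(reminder.randomization_probs.values()) - 1.0) < 1e-9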

What is the role of the distal outcome in an MRT?

The distal outcome in an MRT usually corresponds to a long-term, clinical outcome such as time to relapse or average level of symptoms. One doesn’t have to measure the clinical outcome in every MRT. However, the distal outcome guides the choice of proximal outcomes to be targeted by the intervention components. The goal is that by impacting the proximal outcomes, the intervention components impact the distal outcome.

What is the relationship between the proximal and distal outcomes in an MRT?

In an MRT, intervention components target proximal outcomes that are usually either a short-term version of a distal outcome (e.g., number of steps taken the current day when the distal outcome is the average daily steps over a longer period of time) or hypothesized mediators of the distal outcome (for someone attempting to quit smoking, a relaxation exercise to reduce the proximal outcome of stress over the next 2 hours will hopefully impact the distal outcome of time to relapse).

Why are there repeated randomizations in an MRT?

Another way to ask this question is “What can you learn from the repeated randomizations that are part of an MRT?” The primary rationale for randomization is that it enhances balance in the distribution of unobserved factors across groups assigned to different treatments. This enhances the ability to assess causal effects; that is, randomization reduces alternative explanations for why the group assigned one treatment has improved outcomes as compared to a group assigned an alternate treatment. The repeated randomizations in an MRT enhance balance in the distribution of unobserved factors between participants/decision points assigned to different intervention options. Thus MRTs can provide data to help answer questions including whether or not delivering an intervention component has the desired effect on the targeted outcome, and whether this effect varies with time, prior dose, and the current context of the individual.

What types of intervention components might be investigated via an MRT?

The repeated randomizations in an MRT are appropriate for investigating the effects of time-varying intervention components for which the proximal effect might vary by time or current context of the individual. For example, instead of sending a reminder instructing individuals to self-monitor what they eat every day, it might be more effective and less burdensome to remind a person to self-monitor only when they haven’t recently self-monitored. Another example would be the delivery of a relaxation exercise via a mobile phone. A researcher might want to know in what contexts delivering the relaxation exercise is most effective, for example, whether it is more effective when delivered at times when the individual is stressed.

What types of intervention components would not be investigated with an MRT?

Some intervention components considered for inclusion in an intervention package will not require further investigation. For example, it may not be worth trial resources to investigate a component that is already known to be effective relative to alternatives and is not burdensome. Similarly, a component that requires negligible resources and places little burden on individuals might simply be included without experimentation.

What types of intervention components would only be randomized at baseline and not repeatedly? 

Some intervention components should not be altered once provided, for either scientific or ethical reasons. In this case these components would only be randomized at baseline. An example of such a component is a health coach avatar; it might not make sense to take the avatar away once it has been provided to an individual.

What is the role of observations of an individual’s current context in an MRT?

A first use of the current context is to inform the design of an intervention option. For example, the language in an activity message might be tailored to the participant’s current location and weather; this would be done to increase the chance that the message is useful for the participant in that context (e.g. location, weather).  Thus one role of observations of an individual’s context is to tailor the content of an intervention component message. In many research settings we don’t have access to a large number of participants for our MRT. It is difficult, with small sample sizes, to detect small differences such as whether the contextually tailored activity message should be tailored to both current weather and current location versus only tailored to current weather. Thus the contextual tailoring of messages/suggestions is frequently informed by current behavioral theory, clinical experience and prior studies.

A second use of context is to learn whether some intervention components are more effective in some contexts than in others, that is, moderation. An MRT can provide empirical data on whether a contextual variable moderates the effectiveness of delivering an intervention component. For example, we may find that delivering a contextually tailored activity suggestion is more effective at encouraging activity than no suggestion when the weather is good, whereas when the weather is bad it may make no difference whether we deliver a suggestion or not. In this example the intervention component is the tailored activity message, and there are two intervention options: deliver versus do not deliver. Consider another intervention component: planning of physical activity for tomorrow. This component might have three options: unstructured planning, structured planning, and no planning. Here we might use an MRT to learn whether context, such as the participant’s mood at the time of the planning, moderates the effect of unstructured versus structured planning on the next day’s physical activity.
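
As a rough illustration of how such a moderation question might be examined, here is a minimal analysis sketch on simulated data. This is not the authors’ method; it is a simplified, constant-probability version of the weighted-and-centered regression approach of Boruvka et al. (2018; cited in the MRT references later in this document). All variable names and numbers are hypothetical, and the sketch ignores the within-person clustering a real analysis would need to handle.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000                                  # simulated participant-decision points
    p = 0.6                                   # constant randomization probability
    good_weather = rng.integers(0, 2, n)      # observed context at each decision point
    a = rng.binomial(1, p, n)                 # 1 = suggestion delivered
    # Simulated proximal outcome: the suggestion helps only in good weather.
    y = 100 + 40 * a * good_weather + rng.normal(0, 25, n)

    # Centering the treatment indicator at its known randomization probability
    # makes the coefficient on (a - p) * good_weather an estimate of moderation.
    X = np.column_stack([np.ones(n), good_weather, a - p, (a - p) * good_weather])
    fit = sm.OLS(y, X).fit(cov_type="HC0")    # robust standard errors
    print(fit.params)                         # last coefficient should be near 40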

How are MRTs related to N-of-1 trials?

There are three key differences between MRTs and N-of-1 trials. The first is their inferential goals. MRTs are designed to provide data to test marginal causal effects. Marginal causal effects are effects that are averaged over the population (e.g., all individuals who are in recovery support), over a subset of the population (e.g., all young adults in recovery support), or over a subset of the population in a particular context (e.g., young adults in the morning on school days). The associated primary analyses, like most primary analyses in clinical trials, involve minimal assumptions. N-of-1 trials, on the other hand, are most often conducted to provide data to ascertain the most effective treatment for a particular individual. Here, nuanced assumptions based on behavioral theory are used to conduct the primary analyses.

The second difference has to do with the types of interventions the trials were developed to optimize. MRTs are designed to help decide which of multiple intervention components should be included in a multi-component intervention, whereas N-of-1 trials were developed for settings in which scientists wish to compare the effect of one treatment to that of another (treatment package A versus treatment package B). In an N-of-1 trial, the repeated treatment periods within an individual are therefore usually scheduled at time points sufficiently far apart that the assumption of no carry-over effects is plausible: once treatment A is withdrawn and treatment B is provided, the individual’s previous exposure to treatment A does not affect their response to treatment B. If such a delayed effect might occur, the associated data analyses adjust for the carry-over effect. This makes eminent sense if the goal, as stated above, is to decide whether it is better to provide this individual treatment A or treatment B.

Third, the treatments considered in N-of-1 trials are usually of a type whose individual-level effect is unlikely to vary across time within the individual. That is, over the total duration of the N-of-1 trial, the individual’s treatment responsivity should be unlikely to vary over time and across different contexts. In contrast, many intervention components considered in MRTs are likely to have time-varying effects.

What is the role of carry-over effects in an MRT?

Carry-over effects of intervention components may present as moderation effects. That is, the dose of prior intervention might, due to burden or habituation, reduce the effect of an intervention component at a future decision point. A carry-over effect may also simply lead to poorer proximal outcomes at later decision points. For example, individuals may experience burden due to the intervention and thus delete the mobile application.

Can you provide an example of an MRT to illustrate how one works?

Example: HeartSteps version 1 MRT.

Overview: Physical activity is known to decrease the risk of several health complications, yet only one in five adults in the U.S. meets the guidelines for the number of minutes of physical activity recommended per week. Individuals can still experience health benefits if the required minutes are spread out across several days and broken into more frequent but smaller amounts of time. The goal of HeartSteps is to develop an intervention to increase overall levels of physical activity in sedentary adults by supporting opportunistic physical activity, in which brief periods of movement or exercise are incorporated into individuals’ daily routines. HeartSteps Version 1 (v1) was a six-week MRT in which the intervention development team aimed to investigate whether contextually tailored activity suggestions, as well as support for planning how to be active, increased participants’ overall physical activity. Below we describe one of the intervention components, the contextually tailored activity suggestion, with each element of the MRT labeled in the list that follows.

  1. Intervention component: Contextually tailored activity suggestion. Push notifications sent to participants’ smartphones providing a suggestion for how to be active in the current moment, with each notification tailored to the participant’s current location, weather conditions, time of day, and day of the week.
  2. Intervention options: The intervention options were: (A) a suggestion of a walking activity that took 2-5 minutes to complete, (B) a suggestion of an anti-sedentary activity (brief movements) that took 1-2 minutes to complete, or (C) no suggestion.
  3. Distal outcome: The distal outcome was the total step count during the 42-day study.
  4. Proximal outcome: Total number of steps taken in the 30 minutes following a decision point.
  5. Decision Points: There were 5 individual-specific decision points every day: before morning commute, at lunch time, mid-afternoon, after evening commute, and after dinner.
  6. Observations of context: Location, weather, time of day, day of the week (weekday vs. weekend), prior day’s step count, prior 30-minute step count, variation in prior 30-minute step count over the past 7 days, movement, usefulness of prompt, and self-reports of physical activity from the prior evening.
  7. Availability: Participants were unavailable when sensors on the phone indicated that they might be operating a vehicle or were currently physically active. Participants were also unavailable if they had turned off the activity notifications.
  8. Randomization probabilities: Participants who were available at a decision point were randomized with probability 0.3 to receive (A) a contextually tailored walking suggestion, probability 0.3 to receive (B) an anti-sedentary suggestion, and probability 0.4 to receive (C) no suggestion (see the sketch below).
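
The micro-randomization at a single decision point can be sketched in a few lines of Python. The function below is illustrative only (it is not the HeartSteps software), using the probabilities listed above:

    import random

    OPTIONS = ["walking_suggestion", "anti_sedentary_suggestion", "no_suggestion"]
    PROBS = [0.3, 0.3, 0.4]   # randomization probabilities from the HeartSteps v1 MRT

    def micro_randomize(available: bool) -> str:
        """Return the option for one participant at one decision point.

        Unavailable participants (e.g., likely driving, notifications off)
        are not randomized and receive nothing.
        """
        if not available:
            return "no_suggestion"   # not randomized at this decision point
        return random.choices(OPTIONS, weights=PROBS, k=1)[0]

    # Five decision points in one hypothetical participant-day:
    print([micro_randomize(available=True) for _ in range(5)])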

References

Klasnja, P., Hekler, E. B., Shiffman, S., Boruvka, A., Almirall, D., Tewari, A., & Murphy, S. A. (2015). Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychology, 34(Suppl), 1220-1228.

Klasnja, P., Smith, S., Seewald, N. J., Lee, A., Hall, K., Luers, B., Hekler, E. B., & Murphy, S. A. (in press). Efficacy of contextually tailored suggestions for physical activity: A micro-randomized optimization trial of HeartSteps. Annals of Behavioral Medicine.

 

How can MRTs answer scientific questions about the delivery of contextually tailored activity suggestions?

The HeartSteps v1 MRT focused on whether the interruption involved in delivering these suggestions was worthwhile, that is, whether the suggestions had the intended effect on the proximal outcome. Also, mHealth components that are delivered multiple times as individuals go about their daily lives can be burdensome, so it was necessary to understand whether the effectiveness of the activity suggestions dissipated over time. The MRT was designed to address questions including:

  • On average across participants, does pushing the contextually tailored activity suggestion increase physical activity in the 30 minutes after the suggestion is delivered, compared to no suggestion?
  • If so, does the effect of the contextually tailored activity suggestion deteriorate with time (day in study)?

 

Planning an MRT

The mobile application that is being used in an MRT can include intervention components that are not being randomized. Why do this, and what are the implications?

Some components are not randomized in an MRT because previous scientific evidence has already demonstrated their effectiveness, or because the cost and participant burden of including them in the intervention are negligible. If some components in an mHealth intervention are not randomized, and thus not experimented on as part of the MRT, the resulting data cannot provide evidence about whether different options of these components (e.g., on/off, high/low) impact the effectiveness of the randomized components. If there are scientific questions about whether the inclusion of non-randomized components impacts the effectiveness of the randomized components, then further study is needed to address them.

What are some guidelines for choosing the decision points?

Decision points are selected so they occur at times when it makes sense to provide treatment. When defining decision points, a researcher should consider the following questions:

  • Are there times when a particular treatment is more or less likely to affect the proximal outcome? Take, for example, the contextually tailored activity suggestion component described above in the HeartSteps MRT. Previous data indicated five time periods within a day when there was high within-person variability in step count. These five times were selected as decision points for the activity suggestions, as the suggestions are more likely to increase participants’ step counts at these times. As another example, if the treatment is a reminder to take a once-daily medication, the decision point might occur once per day at the time a participant indicates they usually take the medication.
  • What are the contextual factors that impact the effectiveness of an intervention component, and how quickly are they changing? The frequency of decision points can also be related to the timescale at which scientists think there will be meaningful changes in factors that are relevant to deciding whether and what treatment should be delivered. For example, in the smoking cessation study Sense2Stop, researchers wanted to understand the benefits of delivering a reminder to practice a relaxation exercise when a person is classified as stressed. In this case stress was the relevant factor. In this study, stress classifications based on sensor data are made each minute. Accordingly, the decision points for the relaxation exercise component are every minute, in order to ensure opportunities for delivering treatment during times of stress. Note that minute-level decision points do not mean that individuals receive an intervention every minute. In fact, at many or most decision points no intervention will be provided; that is, at every minute the probability that determines whether a relaxation exercise is delivered is set to a very low value (see the sketch after this list).
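
To see why the per-minute probability must be very low, consider the back-of-the-envelope sketch below. The numbers are hypothetical (not taken from the Sense2Stop protocol); the point is only that a daily prompt budget spread over minute-level decision points implies a tiny per-decision probability.

    # Hypothetical budget: about 2 relaxation prompts per day, with minute-level
    # decision points over 16 waking hours and 80% expected availability.
    target_prompts_per_day = 2
    decision_points_per_day = 16 * 60      # 960 minute-level decision points
    expected_availability = 0.8

    # Expected prompts = decision points * availability * probability, so:
    p_per_minute = target_prompts_per_day / (decision_points_per_day * expected_availability)
    print(round(p_per_minute, 4))          # about 0.0026, i.e., very low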

Reference

Sense2Stop: Mobile Sensor Data to Knowledge. (2014). Retrieved from http://clinicaltrials.gov/ (Identification No. NCT03184389)

 What are some guidelines for choosing randomization probabilities?

  • Participant burden: The choice of randomization probabilities is primarily driven by considerations of participant burden, so that participants will not receive a dose or number of treatments that causes them to disengage or habituate to the intervention content. A researcher would start by defining the average number of times they want participants to receive a particular intervention component. For the activity suggestion component in HeartSteps, researchers originally decided that participants should receive an average of two activity suggestions per day (i.e., a randomization probability of 2/5 across the five daily decision points).
  • Availability: If there are availability considerations for an intervention component (e.g., times when it will not be appropriate to deliver intervention content), these must also be considered when defining the randomization probabilities. Pilot studies for the HeartSteps MRT demonstrated that participants would be available at approximately 80% of the decision times. Therefore, the randomization probability was increased to 3/5 to ensure that participants would receive approximately two messages per day (see the arithmetic sketch after this list).
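
Putting these two considerations together is simple arithmetic: expected prompts per day = decision points × availability × randomization probability. A sketch using the HeartSteps numbers above:

    decision_points = 5
    availability = 0.8      # pilot estimate: available at ~80% of decision times
    target_per_day = 2.0    # desired average number of suggestions per day

    naive_prob = target_per_day / decision_points          # 2/5, ignores availability
    print(decision_points * availability * naive_prob)     # 1.6, falls short of 2

    adjusted_prob = 3 / 5                                  # probability actually used
    print(decision_points * availability * adjusted_prob)  # 2.4, roughly 2 per day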

How does one decide the length of time over which to observe the proximal outcome?

The dominant consideration is the “signal-to-noise ratio.” For each intervention component, a researcher needs to determine how long after delivering the component one must wait for a person to respond, in order to detect the “signal,” its impact on the proximal outcome. If this time interval is too short, the measure of the proximal outcome will not capture the effects of the intervention component. If the interval is too long, the measure of the proximal outcome may include too much noise from other things happening in the individual’s life. Determining “just the right duration” over which the proximal outcome should be measured can be based on prior data and domain expertise. For example, in HeartSteps the activity suggestions were tailored based on current location and weather, and the proximal outcome was measured in terms of step count. A five-minute duration for observing step count following a decision point would be too short, as the individual would not have enough time to respond. However, a 60-minute duration was thought to be too long, as the individual’s context (location, weather) may change significantly over an hour. Therefore, the research team selected 30 minutes as the duration over which the proximal outcome was to be measured.

 

Last updated: May 07, 2020

Micro-Randomized Trials (MRTs)

In micro-randomized trials (MRTs), individuals are randomized hundreds or thousands of times over the course of the study. The goal of these trials is to optimize mobile health interventions by assessing the effects of intervention components and whether those effects vary with time or with the individual’s current context. Through MRTs we can gather data to build optimized just-in-time adaptive interventions (JITAIs). Read a definition of JITAIs.

JITAIs can include engagement treatments, therapeutic treatments, or both. The HeartSteps MRT is designed to promote physical activity among sedentary people. HeartSteps includes phone notifications that encourage physical activity; these are therapeutic in focus. The SARA MRT is designed to promote engagement by young adults in substance abuse research. SARA includes rewards for participants who complete assessments; these are engagement in focus. The design of both of these projects can be seen in the “Projects Using MRTs” section below.

In the HeartSteps project, Methodology Center researchers are developing an app that encourages physical activity among people with a heart condition. The app displays messages on participants’ smartphones, and the messages encourage participants to engage in activity. The researchers identified five times throughout the day when people are most likely to be available to exercise, and one goal of the study is to determine which prompts work best at which times and under what circumstances. At each of the five time points, the application randomly decides whether to prompt each participant to become active; over the course of the intervention, each participant is randomized hundreds or thousands of times. This sequence of both within-participant and between-participant randomizations comprises the MRT.

The application also records outcomes. In this case, the app tracks whether or not the participant’s activity-tracking wristband detects physical activity in the hour following randomization, the participant’s overall level of physical activity, and the participant’s context during each randomization (using GPS to determine the person’s location and the local weather). The resulting data is used by researchers to assess the effectiveness of the prompts and to build rules for when to prompt and not to prompt participants to become active. In other MRTs, the randomization could apply to what type of intervention to provide, rather than whether or not to provide a prompt. The ultimate goal of Heartsteps is the development of a JITAI that will successfully encourage higher levels of physical activity among this at-risk population. The study design of the MRT used in Heartsteps is shown below.

MRTs are an emergent innovation in behavioral science. Below are designs of MRTs that are being used to build JITAIs that address a range of health problems from obesity to opioid use.

Want to learn more about MRTs? Read our FAQ.

References

Boruvka, A., Almirall, D., Witkiewitz, K., & Murphy, S. A. (2018). Assessing time-varying causal effect moderation in mobile health. Journal of the American Statistical Association, 113(523), 1112-1121.

Klasnja, P., Hekler, E. B., Shiffman, S., Boruvka, A., Almirall, D., Tewari, A., & Murphy, S. A. (2015). Micro-randomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychology, 34(Suppl), 1220-1228. doi: 10.1037/hea0000305. PMID: 26651463; PMCID: PMC4732571

Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2018). Just-in-time adaptive interventions (JITAIs) in mobile health: Key components and design principles for ongoing health behavior support. Annals of Behavioral Medicine, 52(6), 446-462.

Smith, S. S., Lee, A. J., Hall, K., Seewald, N. J., Boruvka, A., Murphy, S. A., & Klasnja, P. (2017). Design lessons from a micro-randomized pilot study in mobile health. In J. M. Rehg et al. (Eds.), Mobile Health: Sensors, Analytic Methods, and Applications (pp. 59-82). Springer International Publishing. doi: 10.1007/978-3-319-51394-2_4

 

 

Projects Using MRTs

Below are several examples of MRTs that illustrate the different design possibilities and questions that an MRT can answer.

HeartSteps

This project tests the feasibility and effectiveness of providing, via a smartphone, just-in-time tailored physical activity suggestions as well as evening prompts to plan the following day’s physical activity so as to help sedentary individuals increase their activity. The resulting data will be used to inform the development of a JITAI for increasing physical activity.

Sense2Stop

This project tests the feasibility of conducting an MRT aiming to investigate whether real-time sensor-based assessments of stress are useful in optimizing the provision of just-in-time prompts to support stress-management in chronic smokers attempting to quit. The resulting data will be used to inform the development of a JITAI for smoking cessation.

  • PI: Santosh Kumar, Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K, https://md2k.org)
  • Location: Northwestern University (site PI: Bonnie Spring)
  • Funding: NIBIB, through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov), award U54EB020404

 

Substance Abuse Research Assistant (SARA)

The Substance Abuse Research Assistant (SARA) is an app for gathering data about substance use in high-risk populations. App developers are using an MRT to improve engagement with completion of the self-report data collection measures. At the time this summary was written, this MRT was unique in that it had an engagement component but not a treatment component.

  • PIs: Maureen Walton, Susan Murphy, and Mashfiqui Rabbi Shuvo
  • Location: Harvard University and University of Michigan
  • Funding: Michigan Institute for Data Science (PI S. Murphy), the University of Michigan Injury Center (PI M. Walton), NIDA P50 DA039838 (PI Linda Collins), NIAAA R01 AA023187 (PI S. Murphy), CDC R49 CE002099 (PI: M. Walton)

 

BariFit MRT

Researchers are conducting this quality-improvement MRT aiming to promote weight maintenance through increased activity and improved diet among people who received bariatric surgery. At the time it was developed, this project was novel in that it implemented separate randomizations at the start of the study, on a daily basis, and five times throughout the day.

  • PI: Pedja Klasnja
  • Location & Funding: Kaiser Permanente

 

MRT to Optimize mHealth Messaging for Weight-Loss Support

This study investigates whether, what type of, and under what conditions prompts should be provided in the context of a weight-loss program that uses a mobile app as minimal support for overweight and obese adults.

  • PIs: Bonnie Spring and Inbal “Billie” Nahum-Shani
  • Location: Northwestern University
  • Funding: R01 DK108678

 

MRT to Promote Engagement with Purpose-Driven Well-Being App

JOOL is a behavioral health and well‐being app that is designed to help people monitor and improve their sleep, presence, activity, creativity, and eating, with the ultimate goal of helping people move closer to fulfilling their life’s purpose. This MRT aims to understand whether push notifications of tailored health messages are useful in promoting engagement with the JOOL app; and, if so, when and under what circumstances they are most effective.

Smartphone Addiction Recovery Coach (SARC) MRT

The Smartphone Addiction Recovery Coach (SARC) project tests the feasibility and effectiveness of providing, via smartphone, messages designed to encourage the use of ecological momentary interventions (EMIs) that support young adults enrolled in an outpatient substance-use program as they recover from disordered substance use.

  • PI: Michael Dennis
  • Location: Chestnut Health Systems
  • Funding: NIDA award DA011323

MRT to Improve EMA Engagement in Oral Chemotherapy Adherence for Adolescents and Young Adults


This study seeks to examine the time-varying, contextual factors that influence daily oral chemotherapy adherence in adolescents and young adults with leukemia.

  • PI: Alexandra M. Psihogios
  • Location: The Children’s Hospital of Philadelphia
  • Funding: National Cancer Institute award K08CA241335-01

MRT to Improve Oral Chemotherapy Adherence for Adolescents and Young Adults

This study employs an MRT to test different strategies for promoting adherence to oral chemotherapy in adolescents and young adults with leukemia. It delivers individually-tailored content, including messages targeting disease self-management and preferred app engagement strategies.

  • PI: Alexandra M. Psihogios
  • Location: The Children’s Hospital of Philadelphia
  • Funding: National Cancer Institute award K08CA241335-01

 

Last updated: May 07, 2020

2019 Summer Institute: Mixed-Effects Location Scale Modeling

Topic: Variability in Intensive Longitudinal Data: Mixed-Effects Location Scale Modeling

Presenter: Donald Hedeker

Date: June 17 – 18, 2019

Venue: Penn State, University Park, PA

 

 

Workshop information

Modern data collection procedures, such as ecological momentary assessments, experience sampling, and diary methods, have been developed to record the momentary events and experiences of subjects in daily life. These procedures yield relatively large numbers of subjects and observations per subject, and data from such designs are often referred to as intensive longitudinal data (ILD). Data from such studies are inherently multilevel with, for example, observations (level-1) nested within subjects (level-2), or observations (level-1) within days (level-2) within subjects (level-3). Thus, mixed models (also known as multilevel or hierarchical linear models) are increasingly used for data analysis. The workshop began with basic 2- and 3-level mixed models and then covered extended uses of mixed models for the analysis of ILD.

A major focus of the workshop was the modeling of variances from ILD. In the standard mixed model, the error variance and the variance of the random effects are usually considered homogeneous. These variance terms characterize the within-subjects (error variance) and between-subjects (random-effects variance) variation in the data. In ILD studies, up to thirty or forty observations are often obtained for each subject, and there may be interest in characterizing changes in the variances, both within and between subjects. Thus, an extension of the standard mixed model was described that adds a subject-level random effect to the within-subject variance specification. This permits subjects to have influence on both the mean, or location, and the variability, or scale, of their responses. These mixed-effects location scale (MELS) models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.
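
To make the MELS idea concrete, here is a minimal simulation sketch in Python (illustrative only; it is not the workshop’s material or the MixRegLS/MixWILD estimation code). Each subject receives a random location effect that shifts their mean and a random scale effect that shifts their within-subject log variance:

    import numpy as np

    rng = np.random.default_rng(42)
    n_subjects, n_obs = 200, 30           # e.g., 30 EMA mood reports per subject

    beta0 = 5.0       # overall mean response
    sigma_v = 1.0     # SD of random location effects
    tau0 = 0.0        # mean of the log within-subject variance
    sigma_w = 0.5     # SD of random scale effects

    v = rng.normal(0, sigma_v, n_subjects)   # location: subject-specific mean shift
    w = rng.normal(0, sigma_w, n_subjects)   # scale: subject-specific log-variance shift

    # y_ij = beta0 + v_i + e_ij, where Var(e_ij) = exp(tau0 + w_i)
    within_sd = np.exp((tau0 + w) / 2)
    y = beta0 + v[:, None] + rng.normal(0, within_sd[:, None], (n_subjects, n_obs))

    # Subjects now differ in both their means (location) and their SDs (scale):
    print(y.mean(axis=1)[:3])
    print(y.std(axis=1)[:3])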


Prerequisites

Participants should be thoroughly familiar with multiple linear regression, and some knowledge of mixed models (i.e., multilevel/hierarchical linear models) is helpful.


Computer requirements

Applications using SAS, Stata, and the freeware programs MixRegLS and MixWILD were described and illustrated. MixRegLS and MixWILD can be downloaded at
https://hedeker-sites.uchicago.edu/page/mixregls-program-estimating-mixed-effects-location-scale-models.


Topics covered

  • Using 2- and 3-level mixed models considering observations-within-days and days-within-subjects, or observations-within-waves and waves-within-subjects
  • Estimating descriptive statistics for time-varying variables in situations where the number of observations per subject can be quite varied across subjects
  • Understanding occasion-varying covariates and the decomposition of the within-subjects (WS) and between-subjects (BS) effects of such covariates (see the sketch after this list)
  • Modeling random subject intercept and slope heterogeneity in terms of covariates
  • Modeling WS and BS variance in terms of covariates using mixed location-scale models that allow subject heterogeneity in both a subject’s mean (location) and variance (scale)
  • Modeling ordinal ILD outcomes using an ordinal extension of the mixed location-scale model
  • Using Item Response Theory (IRT) models for the timing of event reports
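
For the WS/BS decomposition mentioned in the third topic above, a common device is person-mean centering of an occasion-varying covariate. Below is a minimal sketch with hypothetical data, using pandas (an illustration of the general idea, not the workshop’s materials):

    import pandas as pd

    # Hypothetical ILD: repeated stress ratings and mood outcomes per subject.
    d = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2],
        "stress":  [2, 4, 6, 1, 2, 3],
        "mood":    [7, 5, 3, 8, 7, 6],
    })

    # Between-subjects part: each subject's own mean level of stress.
    d["stress_bs"] = d.groupby("subject")["stress"].transform("mean")
    # Within-subjects part: occasion-level deviations from that mean.
    d["stress_ws"] = d["stress"] - d["stress_bs"]

    # In a mixed model, stress_bs and stress_ws enter as separate predictors,
    # yielding distinct BS and WS effects of stress on mood.
    print(d)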

Return to top


How to attend

Enrollment is limited to 40 participants to maintain an informal atmosphere and to encourage interaction between the presenter and participants. We will give priority to individuals who are involved in drug abuse prevention and treatment research or HIV research, who have the appropriate statistical background to get the most out of the Institute, and for whom the topic is directly and immediately relevant to their current work. We also aim to maximize geographic and minority representation.

The application window is closed. Please check for information about the 2020 Summer Institute in January or February of 2020. 

Once accepted, participants will be emailed instructions about how to register. The registration fee of $395 for the two-day Institute will cover all instruction, program materials, and breakfast and lunch each day. A block of rooms at the Nittany Lion Inn will be available for lodging.

Participants are encouraged to bring their own laptop computers for conducting exercises.

Review our refund, access, and cancellation policy.

Return to top


Presenter

Donald Hedeker, Ph.D.

Donald Hedeker, Ph.D., is Professor of Biostatistics in the Department of Public Health Sciences at the University of Chicago.

He received his Ph.D. in Quantitative Psychology from The University of Chicago. Don’s main expertise is in the development and use of advanced statistical methods for clustered and longitudinal data, with particular emphasis on mixed-effects models. He is the primary author of several freeware computer programs for mixed-effects analysis: MIXREG for normal-theory models, MIXOR for dichotomous and ordinal outcomes, MIXNO for nominal outcomes, and MIXPREG for counts. In 2008, these programs were restructured into the SuperMix software program distributed by Scientific Software International.

With Robert Gibbons, Don is the author of the text “Longitudinal Data Analysis,” published by Wiley in 2006. More recently, Don has developed methods for intensive longitudinal data, resulting in the freeware MixRegLS and MixWILD programs.

In 2000, Don was named a Fellow of the American Statistical Association, and he is an Associate Editor for Statistics in Medicine and Journal of Statistical Software.

Return to top


Location

The Pennsylvania State University, University Park campus

Return to top


Funding

Funding for this conference was made possible by award number R13 DA020334 from the National Institute on Drug Abuse. The views expressed in written conference materials or publications and by speakers and moderators do not necessarily reflect the official views and/or policies of the Department of Health and Human Services; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.

Return to top


Archive

  • 2018 – Analysis of Ecological Momentary Assessment Data by Stephanie T. Lanza and Michael Russell
  • 2017 – Statistical Power Analysis for Intensive Longitudinal Studies by Jean-Philippe Laurenceau and Niall Bolger
  • 2016 – Ecological Momentary Assessment (EMA): Investigating Biopsychosocial Processes in Context by Joshua Smyth, Kristin Heron, and Michael Russell
  • 2015 – An Introduction to Time-Varying Effect Modeling by Stephanie T. Lanza and Sara Vasilenko
  • 2014 – Experimental Design and Analysis Methods for Developing Adaptive Interventions: Getting SMART by Daniel Almirall and Inbal Nahum-Shani
  • 2013 – Introduction to Latent Class Analysis by Stephanie Lanza and Bethany Bray
  • 2012 – Causal Inference by Donna Coffman
  • 2011 – The Multiphase Optimization Strategy (MOST) by Linda Collins
  • 2010 – Analysis of Longitudinal Dyadic Data by Niall Bolger and Jean-Philippe Laurenceau
  • 2009 – Latent Class and Latent Transition Analysis by Linda Collins and Stephanie Lanza
  • 2008 – Statistical Mediation Analysis by David MacKinnon
  • 2007 – Mixed Models and Practical Tools for Causal Inference by Donald Hedeker and Joseph Schafer
  • 2006 – Causal Inference by Christopher Winship and Felix Elwert
  • 2005 – Survival Analysis by Paul Allison
  • 2004 – Analyzing Developmental Trajectories by Daniel Nagin
  • 2003 – Modeling Change and Event Occurrence by Judith Singer and John Willett
  • 2002 – Missing Data by Joseph Schafer
  • 2001 – Longitudinal Modeling with MPlus by Bengt Muthén and Linda Muthén
  • 2000 – Integrating Design and Analysis and Mixed-Effect Models by Richard Campbell, Paras Mehta, and Donald Hedeker
  • 1999 – Structural Equation Modeling by John McArdle
  • 1998 – Categorical Data Analysis by David Rindskopf and Linda Collins
  • 1997 – Hierarchical Linear Models and Missing Data Analysis by Stephen Raudenbush and Joseph Schafer
  • 1996 – Analysis of Stage Sequential Development by Linda Collins, Peter Molenaar, and Han van der Maas

Ask a Methodologist Archive

Significance Tests for Covariates in LCA and LTA

I am performing LCA and wondered how to test the significance of my covariates. I understand that I need a test statistic and its corresponding degrees of freedom (df) to perform the test, but I don’t know how to get this information from my output. — Signed, Feeling Insignificant

LCA vs Factor Analysis: About Indicators

My understanding is that when an indicator has no relation to the latent construct of interest, this is represented differently in LCA than in factor analysis. Can you explain how and why this works? — Signed, Latently Lost

Factorial Experiments

I am interested in Dr. Collins’ work on optimizing behavioral interventions, and I was surprised that she advocates the use of factorial experimental designs for some experiments. I was taught that factorial experiments could never be sufficiently powered. Can factorial designs really be implemented in practice? — Signed, Fretting Loss of Power

Analyzing EMA Data

I designed a study to assess 50 college students’ motivations to use alcohol and its correlates during their first semester. The most innovative part of this study was that I collected data with smartphones that beeped at several random times every Thursday, Friday, and Saturday throughout the semester. Now that I’ve collected the data, I’m overwhelmed by how rich the data are and don’t know where to start! My first thought is to collapse the data to weekly summary scores and model those using growth curve analysis. Is there anything more I can do with the data? — Signed, Swimming in Data

Are Adaptive Interventions Bayesian?

I love the idea of adaptive behavioral interventions. But, I keep hearing about adaptive designs and how they are Bayesian. How can an adaptive behavioral intervention be Bayesian? — Signed, Adaptively Confused, Determined to Continue

Modeling Multiple Risk Factors

I want to investigate multiple risk factors for health risk behaviors in a national study, but do not know how to handle the high levels of covariation among the different risk factors. Do you recommend that I regress the outcome on the entire set of risk factors using multiple regression analysis? Or should I create a cumulative risk index by summing risk exposure, and regress the outcome on that index? — Signed, Waiting to Regress

Handling Time-Varying Treatments

I’ve heard about new methodologies being developed that allow scientists to address novel scientific questions concerning the effects of time-varying treatments or predictors using observational longitudinal data. What are some examples of these scientific questions, and where can I read up on these newer methodologies? — Signed, Time for a Change

Experimental Designs That Use Fewer Subjects

I am trying to develop a drug abuse treatment intervention. There are six distinct components I am considering including in the intervention. I need to make the intervention as short as I can, so I don’t want to include any components that aren’t having much of an effect. I decided to conduct six experiments, each of which would examine the effect of a single intervention component. I need to be able to detect an effect size of at least d = .3; any smaller than that and the component would not be pulling its own weight, so to speak. I have determined that I need a sample size of about N = 200 for each experiment to maintain power at about .8. But then I did the math and figured out that with six experiments, I would need 6 × 200 = 1,200 subjects! Yikes! Is there any way I can learn what I need to know, but using fewer subjects? — Signed, Experimental Design Gives Yips

Fit Statistics and SEM

I am interested in examining the role of three variables (family drug use, family conflict, and family bonding) as mediators of the effect of neighborhood disorganization on adolescent drug use. I fit a structural equation model, but wonder which of the many fit statistics I should report. — Signed, Befuddled by Fit

Coding a Complex Behavioral Outcome

I am interested in modeling change over time in risky sexual behavior during adolescence, but I cannot decide how to code my outcome variable. I could create a dummy variable at each time point that indicates whether or not the individual has had intercourse, a count variable for the number of partners, or a continuous measure of the proportion of times they used a condom, but none of these approaches seems to capture the complex nature of the behavior. — Signed, Uni Dimensional

Propensity Scores

I have been hearing a lot lately about propensity scores. What are they, and how can I use them? — Signed, Lost Cause

AIC vs. BIC

I often use fit criteria like AIC and BIC to choose between models. I know that they try to balance good fit with parsimony, but beyond that I’m not sure what exactly they mean. What are they really doing? Which is better? What does it mean if they disagree? — Signed, Adrift on the IC’s

Evaluating Latent Growth Curve Models

I am using a latent growth curve approach to model change in problem behavior across four time points. Although my exploratory analyses suggested that a linear growth function would describe individual trajectories well for nearly all of the adolescents in my sample, the overall model fit (in terms of RMSEA and CFI) is poor. Is my model really that bad? — Signed, Fit to be Tied

Multiple Imputation and Survey Weights

I’m analyzing data from a survey and would like to handle the missing values by multiple imputation. Should the survey weights be used as a covariate in the imputation model? — Signed, Weighting for Your Response

Determining Cost-Effectiveness

If I know the cost associated with administering my substance abuse intervention program, how do I determine whether my program was cost-effective? — Signed, Worried about Bottom Line

Maximum Likelihood vs. Multiple Imputation

Which is better for handling missing data: maximum likelihood approaches like the one incorporated in the structural equation modeling program AMOS, or multiple imputation approaches like the one implemented in Joe Schafer’s software NORM? — Signed, Not Uniformly There

 

Methodology Minutes Podcasts

This podcast series was produced by The Methodology Center to provide information on the Center’s methods, applications, and events.

Preventing Child Maltreatment With Kate Guastaferro

February 4, 2020

In this brief and inspiring podcast, Methodology Center Research Associate Kate Guastaferro talks about her research on preventing child maltreatment and on the multiphase optimization strategy (MOST) for optimizing interventions. Kate came to Penn State as a postdoctoral fellow in the Prevention and Methodology Training (PAMT) program. She discusses how her training has led her to work towards the elimination of child sexual abuse.

Download Podcast 36
Download the transcript for podcast 36

Timeline
00:00 — Introductions and Kate’s background
01:43 — Kate’s work at The Methodology Center
02:34 — Defining the multiphase optimization strategy (MOST)
03:54 — Kate’s work on preventing child maltreatment in Pennsylvania
07:50 — Research and results so far from the child sexual abuse intervention
11:07 — How MOST influences Kate’s work in child sexual abuse prevention
12:42 — Resistance to research on sexual abuse
14:20 — What everyone should know about sexual abuse
15:52 — Future plans

Getting Started in Grant Writing With Lisa Dierker

August 28, 2019

In the current research landscape, researchers need to develop grant writing skills. In this podcast, Methodology Center Investigator and Professor of Psychology at Wesleyan University Lisa Dierker discusses topics including how to learn what works in grant writing, the best funding mechanisms, and how to approach grant writing as a methodologist or applied researcher. This podcast is intended for graduate students and junior investigators, but there are tips for more senior researchers as well.

Download Podcast 35
Download the transcript for podcast 35

Timeline
00:00 – Introductions
00:54 – Lisa’s background in research and grant writing
03:15 – The value of rejected grants
09:26 – Lisa’s favorite funding mechanisms
16:08 – How to get started in grant writing
19:45 – Whom to contact while preparing a grant
25:16 – How applied scientists can incorporate innovative methods into grant writing
28:48 – How methodologists can successfully get their work funded
31:15 – Pursuing grants in a difficult funding environment
34:15 – Top 3 pieces of advice on grant writing

Social Network Analysis With Ashton Verdery

October 10, 2018

In our latest podcast, Ashton Verdery, assistant professor of sociology and demography at Penn State, discusses social network analysis (SNA). One increasingly important use of SNA is to study marginalized populations who are otherwise hard to sample. In health, behavioral, and social sciences, SNA has been used to examine how people relate to one another; how relationships affect the flow of items such as diseases, goods, information, or behaviors; how individual positions in broader network structures affect the risks of contracting diseases, hearing of opportunities, or generating new ideas; and more. In this podcast, Ashton explains the value and challenges of SNA in a behavioral health context. He also discusses projects from his research, including his work studying the heroin crisis in Pennsylvania, kidney transplant candidates, and migrant populations.

Download podcast 34
Download the transcript for podcast 34

Timeline
00:00—Introduction
00:31—What is social network analysis (SNA) and why do it?
03:51—Why does SNA interest you?
05:46—Why is SNA valuable in behavioral health?
09:00—Do policy changes affect migrants’ social networks?
13:15—What are the methodological challenges in SNA?
19:17—How are the social network questions different and similar in your research projects on kidney transplants and your research on the heroin crisis?

New Book on Advanced Topics in MOST

August 8, 2018

In podcast 33, Methodology Center Director Linda Collins and Faculty Affiliate Kari Kugler discuss the new book from Springer that they edited, Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: Advanced Topics. This is the second book on the multiphase optimization strategy (MOST) to be published this year. MOST is an engineering-based framework for optimizing interventions that has been developed by Linda and her collaborators over the past 14 years. In this podcast, Linda and Kari explain the concepts behind and rationale for each of the chapters in the book. Both the book and the podcast explore topics ranging from the development of a conceptual model to the use of concepts from control systems engineering.

Download podcast 33
Download transcript for podcast 33

Timeline
00:00—Introduction
01:08—The differences between the two books on MOST
02:19—Developing a conceptual model for an intervention
04:54—Factorial experiments and types of experimental designs
08:35—Multi-level factorial designs
10:11—Adaptive interventions and MOST
11:38—Control systems engineering in MOST
13:29—Coding data for analysis
16:00—Cost effectiveness analysis in MOST
18:25—Mediation analysis in MOST
20:00—The future of MOST

Audiobook Excerpt: Preface to Linda Collins’ Book on MOST

May 17, 2018

In this special edition podcast, Methodology Center Director Linda Collins reads the preface to her new book from Springer, Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). MOST is an engineering-based framework for optimizing interventions, developed by Linda and her collaborators over the past 14 years. In the preface, Linda explains the problem with the current state of intervention research and describes what MOST is and how it can help us address the problem. Then, she explains the content of the book. For researchers who are interested in optimizing interventions, this podcast succinctly introduces the need for and advantages of MOST; the podcast will enable listeners to decide whether to read the entire book.

Download podcast 32
Download the transcript for podcast 32

References for the Book and the Articles Discussed in the Podcast

Collins, L. M. (2018). Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST). New York, NY: Springer.

Preface

“In the United States and worldwide, billions of dollars have been spent to develop behavioral, biobehavioral, and biomedical interventions (hereafter referred to simply as interventions) to prevent and treat health problems, promote health and well-being, prevent violence, improve learning, promote academic achievement, and generally improve the human condition. Numerous interventions are in use that are successful in the sense that they have demonstrated a statistically and clinically significant effect in a randomized controlled trial (RCT). However, many are less successful in terms of progress toward solving problems. In fact, after decades of research, as a society we continue to struggle with the very issues these interventions have been designed to ameliorate. Only very slow progress is being made in many areas; in some, the problem continues to worsen. Let us consider two examples in the public health domain, both from the Healthy People goals set every ten years by the United States Centers for Disease Control and Prevention (CDC)…”

New Book on MOST With Linda Collins

February 26, 2018

In this podcast, Methodology Center Director Linda Collins discusses her new book from Springer, Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). MOST is an engineering-based framework for optimizing interventions that has been developed by Linda and her collaborators over the past 14 years. In the podcast, she describes how MOST can help advance intervention research. She then explains the structure of MOST, using an example from an intervention to help overweight adults lose weight. Finally, she discusses why now is the right time for this book to be published.

Download podcast 31
Download the transcript for podcast 31

Podcast Timeline:

00:00—Introduction
00:50—The problem with the status quo in intervention design
03:04—Defining “optimization” and “MOST”
06:57—Describing the phases of MOST
07:39—The preparation phase
11:26—The optimization phase
15:54—The evaluation phase
19:22—How Linda’s thinking about MOST has evolved
21:23—Why is now the right time for this book?

References for the Book and the Articles Discussed in the Podcast

Collins, L. M. (2018). Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST). New York, NY: Springer.

Pellegrini, C. A., Hoffman, S. A., Collins, L. M., & Spring, B. (2014). Optimization of remotely delivered intensive lifestyle treatment for obesity using the multiphase optimization strategy: Opt-IN study protocol. Contemporary Clinical Trials, 38(2), 251-259.

Pellegrini, C. A., Hoffman, S. A., Collins, L. M., & Spring, B. (2015). Corrigendum to “Optimization of remotely delivered intensive lifestyle treatment for obesity using the multiphase optimization strategy: Opt-IN study protocol.” Contemporary Clinical Trials, 45, 468-469.

Collecting Data in Schools with Zena Mello

January 15, 2018

In a relaxed and engaging conversation, Zena Mello, associate professor of psychology at San Francisco State University, discusses the opportunities, complications, obligations, and challenges associated with collecting data in public high schools. She explains the different experiences she had developing relationships and working in two schools that are only minutes apart geographically but sharply divergent in terms of the educational resources available. Her research investigates how adolescents think about time and how that thinking relates to their substance use and other risky behavior.

Download podcast 30
Download the transcript for podcast 30

Podcast Timeline:

00:00—Introduction
00:32—Gaining access to high schools for collecting data
12:20—Introducing graduate students to a low-income high school
18:52—Maintaining a relationship with a high school administration
23:30—Gaining access to a high-income high school
30:31—Future directions of Zena’s research

The Past, Present, and Future of Prevention with Mark Greenberg

November 14, 2017

Mark Greenberg is one of the founders of prevention science as a recognized field. In 1998, he founded The Edna Bennett Pierce Prevention Research Center and served as its director until 2013. In this podcast, he talks with host Aaron Wagner about the founding of the center, its connection to The Methodology Center, the future of prevention science, and more.

Download podcast 29

Download the transcript for podcast 29

Podcast Timeline:

00:00—Introduction
00:37—The genesis of The Edna Bennett Pierce Prevention Research Center and the field of prevention science
06:07—Connections between The Edna Bennett Pierce Prevention Research Center and The Methodology Center
08:15—Mark’s research career
11:22—The impact Edna Bennett Pierce has on the field of prevention research
13:10—The future of prevention research

Getting Started with Secondary Data Analysis

June 11, 2017

Secondary data analysis is a high priority for many funding agencies as they try to maximize the information gleaned from funded studies. In this podcast, Methodology Center Research Associate Kate Guastaferro and Methodology Center Data Manager Loren Masters discuss some of the issues and requirements associated with getting access to existing data. This podcast is intended for graduate students or investigators who are new to secondary data analysis. Along with the podcast, users can download an outline of the steps required before conducting a secondary data analysis.

Download podcast 28
Download the transcript for podcast 28
PDF: Steps for getting started in secondary data analysis

Podcast Timeline:

00:00—Introductions
02:14—Working with restricted data for qualified researchers
03:53—Working with IRBs
06:50—Data protection plans
11:10—Getting added to existing data use agreements
12:20—Identifying data sets available for secondary analyses
13:22—Working on data from your prior institution
15:47—Potential problems in data procurement
17:26—Closing advice

Ambulatory Assessment with Michael Russell

January 23, 2017

In our latest podcast, Methodology Center Research Associate Michael Russell discusses ambulatory assessment and his pilot project examining self-report data during heavy drinking. In the project, Michael is combining ecological momentary assessment (EMA) of self-reported alcohol use with continuous data from ankle bracelets that measure alcohol intoxication levels through contact with the skin. He is investigating the accuracy of using EMA self-reports as a proxy for such intoxication measures during real-world drinking episodes. He discusses his thoughts on the challenges and opportunities of such data collection, and talks about some of his research using these and other intensive longitudinal data (ILD).
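
To make the data-fusion idea concrete, here is a minimal sketch, in Python with simulated data, of how sparse EMA self-reports might be aligned with a continuous transdermal alcohol stream to assess agreement. The column names, toy values, and 30-minute matching window are illustrative assumptions, not details of Michael's study.

```python
# Hypothetical sketch: match each EMA self-report to the nearest reading
# from a continuous transdermal alcohol sensor, then check agreement.
# All names, values, and the 30-minute window are illustrative assumptions.
import numpy as np
import pandas as pd

# Continuous sensor stream: one transdermal alcohol reading per minute
sensor = pd.DataFrame({
    "time": pd.date_range("2017-01-20 20:00", periods=240, freq="min"),
    "tac": np.clip(np.arange(240) / 60.0, None, 2.0),  # toy rising curve
})

# Sparse EMA prompts with self-reported drink counts
ema = pd.DataFrame({
    "time": pd.to_datetime(["2017-01-20 20:30",
                            "2017-01-20 21:45",
                            "2017-01-20 23:10"]),
    "drinks_reported": [1, 3, 4],
})

# Align each prompt with the nearest sensor reading within 30 minutes
aligned = pd.merge_asof(ema, sensor, on="time",
                        direction="nearest",
                        tolerance=pd.Timedelta("30min"))

# Simple agreement check between self-report and sensor measurement
print(aligned)
print("correlation:", aligned["drinks_reported"].corr(aligned["tac"]))
```

In a real study the sensor series would be measured rather than constructed, and agreement would be assessed within and across many drinking episodes rather than with a single correlation.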

Download podcast 27
Download the transcript for podcast 27

Podcast Timeline:

00:00—Introduction
00:33—Developing an interest in methods
03:07—Ambulatory assessments for understanding substance use
06:29—Examining the accuracy of self-report data on alcohol use
08:30—Practical issues with ambulatory assessment studies
10:09—Methodological issues with ambulatory assessment studies
13:36—Implications for working with IRBs
15:40—Future of ambulatory assessment

Practical Advice on LCA with John Dziak

December 1, 2016

Latent class analysis (LCA) is a widely used tool for identifying subgroups in a population. Many researchers have questions about how to conduct an LCA as responsibly and accurately as possible. In our latest podcast, John Dziak discusses important points to consider when conducting an LCA, like how to tell when an analysis is successful and how to make sure your model is properly identified. John is a Methodology Center research associate who studies LCA, and he is the lead developer of our LCA software, including PROC LCA. Note: this podcast is a companion piece to podcasts 15 and 16 with Stephanie Lanza and Bethany Bray. If you are new to LCA, you may want to start with Podcast 15.
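
The podcast's advice about random starts and model identification lends itself to a small demonstration. Below is a minimal sketch in Python (not PROC LCA itself) that simulates binary items from two latent classes, fits the model by EM from many random starting values, and reports what share of starts reaches the best log-likelihood; the data, number of starts, and tolerance are all illustrative assumptions.

```python
# Sketch of the "multiple random starts" check: fit a two-class LCA to
# binary items via EM from many starting values and see what fraction of
# starts reaches the best log-likelihood ("percent identified").
import numpy as np

rng = np.random.default_rng(0)

# Simulate N respondents from 2 latent classes with 5 binary items
N, J, C = 500, 5, 2
true_pi = np.array([0.6, 0.4])                      # class prevalences
true_rho = np.array([[0.9, 0.8, 0.9, 0.2, 0.1],     # item-response probs
                     [0.2, 0.1, 0.3, 0.8, 0.9]])
z = rng.choice(C, size=N, p=true_pi)
Y = (rng.random((N, J)) < true_rho[z]).astype(float)

def loglik(pi, rho):
    # P(y_i | class c) = prod_j rho^y * (1 - rho)^(1 - y)
    like = np.exp(Y @ np.log(rho).T + (1 - Y) @ np.log(1 - rho).T)  # N x C
    return np.log(like @ pi).sum(), like

def em_fit(seed, iters=200):
    r = np.random.default_rng(seed)
    pi = np.full(C, 1 / C)
    rho = r.uniform(0.2, 0.8, size=(C, J))          # random starting values
    for _ in range(iters):
        _, like = loglik(pi, rho)
        post = like * pi
        post /= post.sum(axis=1, keepdims=True)     # E-step: P(class | y_i)
        pi = post.mean(axis=0)                      # M-step updates
        rho = (post.T @ Y) / post.sum(axis=0)[:, None]
        rho = rho.clip(1e-6, 1 - 1e-6)
    return loglik(pi, rho)[0]

lls = np.array([em_fit(s) for s in range(50)])      # NSTARTS-style loop
best = lls.max()
print(f"best log-likelihood: {best:.2f}")
print(f"share of starts at the best solution: {np.mean(lls > best - 0.01):.0%}")
```

When most starts land on the same best log-likelihood, the solution is well identified; when they scatter across many local maxima, that is the warning sign the podcast discusses.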

Download podcast 26
Download the transcript for podcast 26

Podcast Timeline:

00:00—Introduction
00:30—What is LCA for?
01:15—Why would someone use LCA?
02:27—How does LCA work?
04:20—How do I select a model?
07:39—How do I know if my LCA worked?
13:45—How do I select items for my model?
18:20—What “percent identified” of random starts is high enough?
19:23—When should I use a higher value in NSTARTS?
20:13—What should I do if my model won’t converge?
23:00—When should I use the RESTRICT option?

Methodological Innovation in HIV Prevention Research with Cara Rice

September 7, 2016

In this short podcast, Methodology Center Postdoctoral Research Associate Cara Rice discusses her research examining HIV-risk behavior among sexual minorities. She describes her work collecting survey data among high-risk populations and her application of new methods to these data. As part of the Methodology Center, Cara has recently used both LCA and TVEM to understand more about the profiles of behavior that increase HIV risk among men who have sex with men (MSM).

Download podcast 25
Download the transcript for podcast 25

Podcast Timeline:

00:00—Introduction
00:30—Becoming an HIV researcher
01:56—Cara’s research
05:45—Applying methods to HIV research
09:34—Collecting extremely personal data
12:03—Applying TVEM to HIV-risk data
14:40—The future of Cara’s research and HIV research

Using MOST to Improve STI Prevention with Kari Kugler and Amanda Tanner

January 27, 2016

In this podcast, we discuss the application of the multiphase optimization strategy (MOST) to the development of an online intervention to reduce sexual risk behavior among college students. Host Aaron Wagner speaks with Kari Kugler, Methodology Center investigator, and Amanda Tanner, assistant professor of public health education at the University of North Carolina at Greensboro (UNCG), about the project, which is funded by the National Institute on Alcohol Abuse and Alcoholism.

In this study, the researchers will use MOST to strengthen intervention components aimed at reducing risky drinking, risky sex, and their co-occurrence, and will then use the strengthened components to form an optimized intervention. The principal investigator of the project is Methodology Center Director Linda Collins. David Wyrick, associate professor of public health education, leads the team at UNCG.

Download podcast 23
Download the transcript for podcast 23

Podcast Timeline:

00:00—Introductions
01:07—What public health problem does the grant address?
03:57—Definition of multiphase optimization strategy
06:45—Other applications of MOST
07:41—Why use MOST on this project?
10:37—How will this project encourage college students to make better decisions?
13:06—Why an online intervention?
14:02—What is incorrect about the term “risky sex”?
15:53—What else should people know about MOST?
16:46—The quality of the project team

Getting Started with TVEM with Stephanie Lanza and Sara Vasilenko

September 15, 2015

This podcast will introduce interested scientists to time-varying effect modeling (TVEM). Host Aaron Wagner talks with Methodology Center Investigators Stephanie Lanza and Sara Vasilenko about the new types of questions scientists can answer by applying TVEM to existing data or to new studies.

Sara and Stephanie have been at the forefront of both applying TVEM and training scientists to use it. Multiple participants from their June TVEM workshop have already submitted TVEM manuscripts to journals. In this 25-minute podcast, they provide the introduction needed to determine whether TVEM could be useful in your work.
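
As a concrete illustration of the core idea, that the coefficient linking a predictor to an outcome is itself a smooth function of time, here is a toy sketch in Python. The TVEM SAS macro mentioned in the episode uses penalized splines; this simplified version uses an ordinary polynomial basis and least squares on simulated data, so every value in it is an illustrative assumption.

```python
# Toy time-varying effect model: the effect of x on y changes smoothly
# over time. Real TVEM uses penalized splines; this simplified sketch
# uses a cubic polynomial basis and ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = rng.uniform(0, 1, n)                  # observation times, scaled to [0, 1]
x = rng.normal(size=n)                    # time-varying predictor
beta_t = np.sin(np.pi * t)                # true time-varying effect
y = 0.5 + beta_t * x + rng.normal(scale=0.5, size=n)

# Express both the intercept curve and the coefficient curve as cubics in t
B = np.column_stack([np.ones(n), t, t**2, t**3])
X = np.column_stack([B, B * x[:, None]])  # [basis, basis * x]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the estimated coefficient function beta1_hat(t) on a grid
grid = np.linspace(0, 1, 5)
Bg = np.column_stack([np.ones_like(grid), grid, grid**2, grid**3])
print("t:            ", np.round(grid, 2))
print("true beta(t): ", np.round(np.sin(np.pi * grid), 2))
print("est. beta(t): ", np.round(Bg @ coef[4:], 2))
```

The same fitted model would report a flat line for a time-invariant effect, which is the contrast Stephanie and Sara draw early in the episode.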

Download podcast 22
Download the transcript for podcast 22

Podcast Timeline:

00:00—Introduction
00:52—Defining TVEM
02:07—Time-varying versus time-invariant effects
04:00—Questions TVEM can answer
09:56—Data and TVEM
11:00—TVEM and ecological momentary assessments
13:03—TVEM and panel data
15:41—When not to use TVEM
20:12—Getting started
23:32—TVEM SAS macro

Adaptive Interventions and Personalized Medicine with Susan Murphy

November 4, 2014

This podcast features Susan Murphy, Methodology Center principal investigator, Herbert E. Robbins Distinguished University Professor of Statistics, research professor at the Institute for Social Research, and professor of psychiatry at the University of Michigan. The discussion focuses on two topics: the sequential, multiple assignment, randomized trial (SMART), which allows scientists to develop adaptive interventions, and the just-in-time adaptive intervention (JITAI), which uses real-time data to deliver interventions as needed via mobile devices. Susan’s MacArthur Fellowship is also discussed; the podcast was recorded before she was elected to the Institute of Medicine of the National Academies.
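
To make the SMART design concrete, here is a schematic sketch in Python of the assignment flow in a prototypical two-stage trial: everyone is randomized between two first-stage treatments, and only non-responders are re-randomized at the second stage. The treatment labels, response rates, and sample size are illustrative assumptions, not the trials Susan describes.

```python
# Schematic two-stage SMART: randomize at stage 1, then re-randomize only
# the non-responders at stage 2. Labels, rates, and n are illustrative.
from collections import Counter
import numpy as np

rng = np.random.default_rng(42)
n = 400

# Stage 1: randomize everyone between two initial treatments
a1 = rng.choice(["CBT", "MED"], size=n)

# Intermediate outcome: responder status after stage 1 (toy response rates)
p_respond = np.where(a1 == "CBT", 0.45, 0.35)
responder = rng.random(n) < p_respond

# Stage 2: responders continue on maintenance; non-responders are
# re-randomized (draws are made for everyone but kept only for non-responders)
a2 = np.where(responder, "MAINTAIN", rng.choice(["AUGMENT", "SWITCH"], size=n))

# Tabulate the sequences; each (a1, stage-2 tactic) path is part of one of
# the adaptive interventions embedded in the design
for path, count in sorted(Counter(zip(a1, a2)).items()):
    print(path, count)
```

Comparing the adaptive interventions embedded in this flow (e.g., “start with CBT; if no response, augment”) is what the primary analysis of a SMART is designed to do.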

Download podcast 19
Download the transcript for podcast 19

Podcast Timeline:

00:00 – Introduction
01:01 – MacArthur Foundation “Genius Grant”
03:31 – Two SMARTs in the field (one to develop an adaptive intervention for alcohol abuse and one to develop an adaptive intervention for helping clinics implement an intervention effectively)
12:44 – The just-in-time, adaptive intervention (JITAI)
18:34 – The future of SMART and JITAI

Physical Activity Research with David Conroy

June 2, 2014

This podcast features David Conroy, professor of kinesiology and human development and family studies at Penn State and investigator at The Methodology Center. The discussion focuses on David’s research on physical activity and sedentary behavior, how physical activity affects our lives, and the technological opportunities and methodological challenges of this research. David’s many projects with other Methodology Center investigators are also discussed.

Download podcast 18
Download the transcript for podcast 18

Podcast Timeline:

00:00 – Introduction
00:45 – Sedentary behavior, active behavior, and your health
06:51 – David’s background
08:25 – Staying healthy in a sedentary society
13:08 – ACT UP interventions to promote activity
19:25 – Promoting health with smartphones
24:31 – Methodological issues: Intensive longitudinal data in this research

Latent Class Analysis (LCA) Part 2: Extensions of LCA with Stephanie Lanza and Bethany Bray

January 22, 2013

Stephanie Lanza and Bethany Bray discuss extensions of LCA with host Aaron Wagner. Topics include LCA with grouping variables and covariates, latent transition analysis, causal inference in LCA, and LCA with a distal outcome. The discussion assumes that users are familiar with LCA; part 1 provides introductory information.

Download podcast 16
Download the transcript for podcast 16

Podcast Timeline:

00:00 – Introduction
01:00 – Adding grouping variables and covariates to an LCA
12:42 – Causal inference in LCA
17:40 – Predicting distal outcomes using latent class membership
26:26 – Upcoming LCA trainings

References in the podcast:

Lanza, S. T., & Collins, L. M. (2008). A new SAS procedure for latent transition analysis: Transitions in dating and sexual risk behavior. Developmental Psychology, 44(2), 446-456. PMCID: PMC2846549

Lanza, S. T., Coffman, D. L., & Xu, S. (in press). Causal inference in latent class analysis. Structural Equation Modeling: A Multidisciplinary Journal.

Lanza, S. T., Tan, X., & Bray, B. C. (2013). Latent class analysis with distal outcomes: A flexible model-based approach. Structural Equation Modeling: A Multidisciplinary Journal.

Bray, B. C., Lanza, S. T., & Tan, X. (2012). An introduction to eliminating bias in classify-analyze approaches for latent class analysis (Technical Report No. 12-118). University Park, PA: The Methodology Center, The Pennsylvania State University.

Latent Class Analysis (LCA) Part 1: Common Questions about LCA with Stephanie Lanza and Bethany Bray

November 26, 2012

Methodology Center scientists Stephanie Lanza and Bethany Bray and host Aaron Wagner discuss common, practical issues that arise in latent class analysis (LCA). Issues include selecting indicator variables, selecting a model, determining the necessary sample size, finding LCA software, and getting started in LCA. This is the first in a two-part podcast; the next podcast will address some of our recent research on LCA.

Download podcast 15
Download the transcript for podcast 15

Podcast Timeline:

00:00 – Introduction
01:05 – What is LCA?
02:50 – How is LCA different from factor analysis?
06:48 – Indicator variables
10:54 – Model selection
14:30 – Sample size
19:07 – Software
20:06 – Getting started

References in the podcast

Collins, L. M., & Lanza, S. T. (2010). Latent class and latent transition analysis: With applications in the social, behavioral, and health sciences. New York: Wiley.

Lanza, S. T., Bray, B. C., & Collins, L. M. (2013). An introduction to latent class and latent transition analysis. In J. A. Schinka, W. F. Velicer, & I. B. Weiner (Eds.), Handbook of psychology: Research methods in psychology (2nd Edition, Vol. 2, pp. 691-716). Hoboken, NJ: Wiley.

 

New Methods for Smoking Research with Megan Piper, Lisa Dierker, and Stephanie Lanza

August 28, 2012

Host Aaron Wagner interviews three researchers: Megan Piper of the University of Wisconsin’s Center for Tobacco Research and Intervention, Lisa Dierker of Wesleyan University, and Stephanie Lanza of the Methodology Center. They discuss time-varying effect models, the potential for ecological momentary assessment data to advance smoking research, and an upcoming special issue of Nicotine and Tobacco Research that will focus on new methods for smoking research.

Download podcast 14
Download the transcript for podcast 14

Podcast Timeline:

00:00 – Introduction
01:30 – Megan Piper: EMA data allows understanding of how quitting smoking works
05:40 – Lisa Dierker: TVEM lets us answer new questions about how people develop smoking behavior
09:12 – Stephanie Lanza: Advancing the methodology of smoking research
13:50 – Stephanie Lanza: Upcoming special issue of Nicotine and Tobacco Research

Reference

Shiyko, M. P., Lanza, S. T., Tan, X., Li, R., & Shiffman, S. (2012). Using the time-varying effects model (TVEM) to examine dynamic associations between negative affect and self confidence on smoking urges: Differences between successful quitters and relapsers. Prevention Science. PMCID: PMC3372905 doi: 10.1007/s11121-011-0264-z

Using Propensity Scores in Causal Inference with Donna Coffman and Max Crowley

April 13, 2012

Host Aaron Wagner interviews Methodology Center Research Associate Donna Coffman and graduate student Max Crowley. They discuss using propensity scores for causal inference. This is also the topic Donna will present at the upcoming 2012 Summer Institute on Innovative Methods. Propensity scores help researchers estimate causal effects in studies that were not randomized by balancing measured confounders across treatment groups.
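
As a sketch of the workflow Donna and Max describe, the Python snippet below simulates a confounded observational study, estimates propensity scores with logistic regression, and uses inverse-probability weighting to recover the treatment effect. The data, variable names, and effect size are invented for illustration, and the approach is valid only when all confounders are measured.

```python
# Minimal propensity-score sketch: model treatment assignment from a
# measured confounder, then use inverse-probability weights to estimate
# the average treatment effect. All values are simulated assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000

# Confounder affects both treatment uptake and the outcome
x = rng.normal(size=(n, 1))
p_treat = 1 / (1 + np.exp(-0.8 * x[:, 0]))          # true assignment model
t = (rng.random(n) < p_treat).astype(int)
y = 2.0 * t + 1.5 * x[:, 0] + rng.normal(size=n)    # true effect = 2.0

# Naive group comparison is confounded by x
print("naive difference:", y[t == 1].mean() - y[t == 0].mean())

# Step 1: estimate propensity scores e(x) = P(T = 1 | X)
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Step 2: inverse-probability-weighted means recover the causal effect
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print("IPW estimate:    ", ate)
```

Matching and subclassification, covered in the Rosenbaum and Rubin papers below, are alternative ways to use the same estimated scores.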

Download podcast 13
Download the transcript for podcast 13

Podcast Timeline:

00:00 – Introductions
01:17 – Overview of propensity scores
05:53 – Hypothetical example / including confounders in the model
11:46 – Ways to use propensity scores
17:29 – Resources for using propensity scores

References

Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688-701. View abstract

Rosenbaum, P. R., & Rubin, D. B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79(387), 516-524. View abstract

Rosenbaum, P. R., & Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39(1), 33-38. View abstract

Adaptive Health Interventions and Causal Inference with Daniel Almirall

February 24, 2012

Host Aaron Wagner interviews Daniel Almirall, Faculty Research Fellow at the University of Michigan’s Institute for Social Research and Investigator at The Methodology Center. The discussion focuses on sequential, multiple-assignment, randomized trials (SMARTs), which allow scientists to develop adaptive interventions. Danny works with Susan Murphy, the creator of SMART, to develop and promote this new methodological tool. Danny’s work on causal inference is also discussed.

Download podcast 12
Download the transcript for podcast 12

Podcast Timeline:

00:00 – Introduction and Danny’s background
02:38 – SMART designs for adaptive interventions
08:31 – SMART pilot studies
13:51 – Application of SMART
19:33 – Causal inference
22:39 – Publication update

HealthWise South Africa with Linda Caldwell, Ed Smith, and Linda Collins

August 5, 2011

Host Michael Cleveland interviews Linda Caldwell, Ed Smith, and Linda Collins. They discuss the history and future of HealthWise, a comprehensive risk-reduction, life-skills curriculum for adolescents in South Africa. The new phase of HealthWise is testing the effectiveness of the intervention’s components. Ed Smith is the associate director of the Prevention Research Center at Penn State; Linda Caldwell is a professor of Recreation, Park and Tourism Management at Penn State; and Linda Collins is the director of The Methodology Center.

Download podcast 9
Download the transcript for podcast 9

Podcast Timeline:

00:00 – Introductions
00:47 – Overview of HealthWise and project site
06:02 – Experimental conditions in the new phase of HealthWise
12:38 – Factorial experimental design
15:40 – Powering a factorial experiment
17:00 – Project time line

Configural Frequency Analysis with Mark Stemmler

May 3, 2011

Host Michael Cleveland interviews Mark Stemmler, professor of psychological methodology and quality assurance and dean of the Faculty of Psychology and Sports Science at Bielefeld University, Germany. Dr. Stemmler was a visiting scholar at The Methodology Center in fall 2010. Michael talks with Mark about configural frequency analysis, a tool for the analysis of multivariate categorical data. They also discuss Dr. Stemmler’s experiences visiting The Methodology Center and teaching at Penn State.

Download podcast 8
Download the transcript for podcast 8

Podcast Timeline:

00:00 – Introduction
00:48 – Dr. Stemmler’s background and relationship with the Methodology Center
03:52 – Configural frequency analysis (CFA) overview
06:38 – Practical applications of CFA
09:18 – How to learn CFA
11:40 – Dr. Stemmler’s experience visiting Penn State

Where Are They Now? with Bethany Bray

March 10, 2011

Host Michael Cleveland interviews Bethany Bray, Assistant Professor of Psychology at Virginia Tech. Bethany was formerly the Assistant Director and a Research Associate at The Methodology Center. Michael talks with Bethany about her research interests, her experience as a pre-doctoral fellow in the Prevention and Methodology Training (PAMT) program here at Penn State, and her recently released article, “Modeling relations among discrete developmental processes: A general approach to associative latent transition analysis,” in Structural Equation Modeling.

Download podcast 7
Download the transcript for podcast 7

Missing Data Analysis: Making it Work in the Real World with John Graham

October 20, 2010

Host Michael Cleveland interviews John Graham, Professor of Biobehavioral Health and Human Development & Family Studies at Penn State, to discuss his recent Annual Review of Psychology article “Missing Data Analysis: Making it Work in the Real World.” This extended podcast is available as a two-part download.

Download podcast 5, part 1
Download the transcript for podcast 5, part 1

Download podcast 5, part 2
Download the transcript for podcast 5, part 2

An Odd Couple in Interdisciplinary Research with Linda Collins and Daniel Rivera

Linda CollinsDaniel RiveraAugust 30, 2010

Host Michael Cleveland interviews an odd couple in interdisciplinary research focused on optimizing behavioral interventions. Social scientist Linda Collins from Penn State and chemical engineer Daniel Rivera from Arizona State talk about their NIH Roadmap Initiative project.

Download podcast 4
Download the transcript for podcast 4


New Book on Latent Class and Latent Transition Analysis with Linda Collins and Stephanie Lanza

May 5, 2010

Host Michael Cleveland interviews Linda Collins and Stephanie Lanza, authors of the new book, Latent Class and Latent Transition Analysis: With Applications in the Social, Behavioral, and Health Sciences.

Download podcast 2
Download the transcript for podcast 2

 


The Methodology Center’s New Podcast Series – Methodology Minutes! with Linda Collins

April 5, 2010

An introduction to The Methodology Center: who we are and what our mission is. Hosted by Michael Cleveland and Linda Collins.

Download podcast 1
Download the transcript for podcast 1

Upcoming and Recent Workshops

Please subscribe to our eNews for regular announcements on trainings. We also offer one-credit courses for Penn State graduate students.

Building Effective Just-in-Time Adaptive Interventions Using Micro-Randomized Trial Designs (Susan Murphy and Daniel Almirall)
July 23-24, 2020
2020 Methodology Center Summer Institute
Hyatt Regency Bethesda in Bethesda, MD

Latent Class Analysis (Stephanie Lanza and Bethany Bray)
December 6-7, 2019
Statistical Horizons Workshop
Philadelphia, Pennsylvania

Experimental Designs for Personalized Digital Behavioral Interventions (Inbal “Billie” Nahum-Shani)
July 28-August 2, 2019
Presentation at mHealth2019 Summer Training Institute
University of California, Los Angeles

Optimizing Just-In-Time Adaptive Interventions for Mobile Health (Susan Murphy)
June 21, 2019
Research Society on Alcoholism satellite workshop
Minneapolis, MN

Training on Optimization of Behavioral and Biobehavioral Interventions (Linda Collins, Kate Guastaferro, Angela Pfammatter, Heather Wasser)
May 13 – 17, 2019
Hyatt Regency Bethesda Hotel, Bethesda, MD
 
Getting SMART About Adaptive Interventions in Education (Daniel Almirall)
March 11 – 14, 2019
Institute for Social Research 426 Thompson St Ann Arbor, Michigan
 
Latent Class Analysis (Stephanie Lanza)
December 7 – 8, 2018
Statistical Horizons
Philadelphia, PA
 
Australian Mathematical Sciences Institute (AMSI) Presentations (Susan Murphy)
August 14 – 24, 2018
Australia
 
2018 Summer Institute: Analysis of Ecological Momentary Assessment Data Using Multilevel Modeling and Time-Varying Effect Modeling (Stephanie Lanza, Michael Russell)
June 28 – 29, 2018
Penn State, University Park, PA
 
Just-in-Time Adaptive Interventions (Susan Murphy, Daniel Almirall)
May 21, 2018
Modern Modeling Methods Conference Pre-Conference Workshop
Storrs, CT
 
Training on Optimization of Behavioral and Biobehavioral Interventions (Linda Collins, Daniel Almirall, Kari Kugler, Kate Guastaferro)
May 14 – 18, 2018
Bethesda North Marriott Hotel & Conference Center, North Bethesda, MD
 
Novel Experimental Approaches to Designing Effective Multi-Component Interventions (Daniel Almirall, Linda Collins, Susan Murphy, Inbal Nahum-Shani)
April 11, 2018
2018 Society for Behavioral Medicine Annual Meeting
Hilton New Orleans Riverside, New Orleans, LA
 
Analysis of Ambulatory Assessment Data in Behavioral Medicine (Stephanie Lanza, Michael Russell)
April 11, 2018
2018 Society for Behavioral Medicine Annual Meeting
Hilton New Orleans Riverside, New Orleans, LA

Our Mission

The mission of The Methodology Center is to advance public health by improving experimental design and data analysis in the social, behavioral, and health sciences. As a designated National Institute on Drug Abuse Center of Excellence, we serve as a national resource in the development and dissemination of innovative research methods. Although our work is broadly applicable in the behavioral sciences, we specialize in methods for research on behavioral approaches to the prevention and treatment of health problems, with emphasis on alcohol abuse, tobacco use, other drug abuse, and HIV.

We draw upon and integrate methodological perspectives from a variety of disciplines, including statistics, engineering, psychology, and human development, to enable new categories of scientific research questions to be addressed. We enhance the quality of prevention and treatment research worldwide by providing behavioral scientists with innovative methods. By training and mentoring new methodologists, we create a foundation for continued innovation and excellence in prevention and treatment science.

Our work is funded by the National Institutes of Health and by the National Science Foundation.

Solving Public Health Problems Using Complex Data

New technologies and approaches have enabled the collection of important data that have great potential for increasing scientific understanding of drug use, risky sex, and other dangerous behavior. For example, smartphones and sensors are being used to obtain frequent (e.g., several times/day) real-time measures of an individual’s behavior, cognitions, mood, and environment. Longitudinal studies have collected behavioral data on their participants across much of the life course, in some cases including multiple generations of participants. Very large databases on drug use, HIV risk behavior, mental health, and a wealth of related variables have been amassed and continue to be updated. Behavioral studies increasingly are collecting genetic data, which produce many thousands of variables that potentially can be linked with behavioral phenotypes.

These new technologies and approaches are producing complex data sets that are rich with information that could be used to create a new generation of interventions for drug abuse and HIV prevention and services. However, statistical analysis methods, the keys investigators use to open the door to the scientific knowledge contained in behavioral data, have not kept up with the complexity of modern data sets and the sophistication of the questions posed by today’s behavioral researchers. Moving forward, the Methodology Center is uniquely positioned to develop and disseminate innovative statistical methods that are essential to unlock the knowledge contained in complex behavioral data and apply it in the fight against drug abuse and HIV.