Here we offer some informal suggestions intended to help people who want to propose implementing MOST in their work.

We assume that you have a basic familiarity with factorial designs. If not, you may wish to read Chapters 3 and 4 in Collins (2018), which provide an introduction to factorial experiments aimed at those with a background in the RCT. You may also be interested in our “FAQ about factorial experiments.” If you are planning to conduct an optimization trial using a fractional factorial design, you may be interested in Chapter 5. If you are planning to conduct a sequential multiple-assignment randomized trial (SMART), please visit the Data Science For Dynamic Intervention Decision-Making Lab website for more information. If you are planning a microrandomized trial (MRT), please visit Susan Murphy’s site.

Collins, L.M. (2018). *Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST)*. New York: Springer.

## Offer evidence that MOST is better than the traditional approach in the long run

If your aim is to develop more effective and efficient behavioral interventions in a systematic fashion, it seems logical that MOST is a better long-run strategy than the traditional treatment package approach. But to convince the skeptics out there, hard evidence is necessary.

It is difficult to imagine conducting empirical research to compare MOST and the treatment package approach. One scenario would be randomly assigning, say, 100 behavioral scientists to use either MOST or the treatment package approach over a 10-year period, and then comparing the effectiveness and efficiency of the resulting interventions. That is obviously not practical.

Given that an empirical study to test the usefulness of MOST is out of the question, the next best thing is a statistical simulation. Some collaborators working in this area conducted an extensive statistical simulation to explore whether, when, and how MOST is better than the treatment package approach. You can read about it in

Collins, L. M., Chakraborty, B., Murphy, S. A., & Strecher, V. (2009). Comparison of a phased experimental approach and a single randomized clinical trial for developing multicomponent behavioral interventions. *Clinical Trials, 6*, 5-15. PMCID: PMC2711350

## Describe MOST succinctly

MOST is a comprehensive framework, not an off-the-shelf procedure, so it is best not to try to cover the entire framework in your proposal. You have decided which aspects of it you are going to implement in your study, so confine your description to those parts of MOST. It may help to include a figure like Figure 1.1 in Collins (2018), BUT one developed specifically for YOUR project. For example, if you have already completed the preparation phase, consider graying it out or otherwise indicating which phases of MOST will be completed in the proposed work.

Collins, L.M. (2018). *Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST)*. New York: Springer.

## Show that a factorial experiment will be more economical than the commonly used alternatives

If you are planning to conduct a factorial or fractional factorial experiment, it may be helpful to make the case in the application that the design you have chosen is the most economical one, compared to viable alternatives. This can be done succinctly by including a table that lists the design alternatives you considered and shows how many subjects and how many experimental conditions each would require. Some experimental designs use a lot of subjects compared to others, but require fewer experimental conditions. Others use fewer subjects but may require more experimental conditions. How economical an experimental design is in your situation depends on the cost associated with each experimental subject relative to the overhead associated with an experimental condition. Moreover, if you know or can reliably estimate the costs associated with subjects and experimental conditions, you can express this comparison in terms of money. This is discussed in Chapter 6 in Collins (2018) and illustrated in Tables 6.2 and 6.3.
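Such a table can be backed by a simple cost calculation. The sketch below is our illustration of the kind of comparison Tables 6.2 and 6.3 in Collins (2018) present; the subject counts and unit costs are entirely hypothetical, not figures from the book.

```python
# Hypothetical cost comparison for designs examining five two-level components.
# The subject counts and unit costs below are invented for illustration only.

def total_cost(n_subjects, n_conditions, cost_per_subject, cost_per_condition):
    """Total cost = per-subject costs plus per-condition overhead."""
    return n_subjects * cost_per_subject + n_conditions * cost_per_condition

# (design, subjects required, experimental conditions) -- illustrative numbers
designs = [
    ("Five individual experiments",    1500, 10),  # five separate two-arm studies
    ("Single-factor (comparative)",     900,  6),  # one control arm + five treatment arms
    ("Complete factorial (2^5)",        300, 32),  # every effect uses all subjects
    ("Fractional factorial (2^(5-1))",  300, 16),  # same N, half the conditions
]

for name, n, k in designs:
    cost = total_cost(n, k, cost_per_subject=100, cost_per_condition=2000)
    print(f"{name}: {n} subjects, {k} conditions, ${cost:,}")
```

Which design wins depends entirely on the ratio of the two unit costs; rerunning a comparison like this with your own cost estimates is one way to express the table in terms of money, as Chapter 6 suggests.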

If you are planning to include this rationale in your grant proposal, it is a good idea to read Chapter 6 in Collins (2018) carefully. Although the numbers are easy to calculate (you may even be able to do the computations in your head), the rationale behind them is both subtle and complex. It is important to understand this rationale to ensure that you are applying the ideas correctly in your situation. Also, note that the expressions in Chapter 6 apply only to designs with two levels per factor.

The Methodology Center developed a SAS macro and an R package that can help you select the most economical design from among four alternatives that are potentially useful in intervention science: individual experiments, single-factor experiments (such as comparative treatment designs), factorial experiments, and fractional factorial experiments.

Planning a factorial experiment when there is a subgroup structure in the data (subjects are clustered within schools, clinics, etc.) is discussed in Dziak, Nahum-Shani, and Collins (2012).

Collins, L.M. (2018). *Optimization of behavioral, biobehavioral, and biomedical interventions: The multiphase optimization strategy (MOST)*. New York: Springer.

Dziak, J. J., Nahum-Shani, I., & Collins, L. M. (2012). Multilevel factorial experiments for developing behavioral interventions: Power, sample size, and resource considerations. *Psychological Methods, 17*(2), 153. doi: 10.1037/a0026972 PMCID: PMC3351535

## Justify the use of a fractional factorial experiment

If you are planning to use any type of factorial experiment, it must be appropriate for addressing the research questions at hand. The first step, then, is to justify the use of a factorial experiment. Once that is accomplished, the next step is to justify the use of a fractional factorial rather than a complete factorial.

The best justification for the use of a fractional factorial design rather than a complete factorial design is economy. **Remember that complete and fractional factorial designs are powered exactly the same, so there will be no savings in terms of subjects.** In other words, any fractional factorial design requires the same number of subjects as the corresponding complete factorial design to maintain the same level of power. Fractional factorial designs do, however, involve fewer experimental conditions, so you should be able to argue that having fewer experimental conditions will result in savings of money, personnel, equipment, and materials, and in fewer logistical or management difficulties. The more specific you can be about this in a grant proposal, the better.

Even when a complete factorial involves a lot of experimental conditions, a fractional factorial may not be more economical. For example, if you are conducting an internet-based experiment, it may not be that much more difficult to program 32 conditions than to program 16 conditions. (To repeat: Although this may be counterintuitive, the 32-condition experiment will require no more subjects than the 16-condition experiment.)

When writing a proposal, it is a good idea to be explicit about the trade-off associated with using a fractional factorial design, namely aliasing of effects (for an explanation of aliasing, see Collins et al., 2009, or Chapter 5 in Collins, 2018). Whenever you remove conditions from a complete factorial design, aliasing occurs. When you select a fractional factorial design, you are choosing the aliasing in a strategic fashion; in other words, you are taking the position that the aliasing is an acceptable trade-off for increased economy.

It might help to remind reviewers that, from one perspective, designs such as the familiar comparative treatment, constructive treatment, and dismantling designs also involve aliasing. These designs have properties that are not nearly as attractive as those of a good fractional factorial; see Collins, Dziak, and Li (2009).

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. *Psychological Methods, 14*, 202-224. PMCID: PMC2796056

## Explain the rationale behind your choice of a fractional factorial design

There are a lot of different fractional factorial designs with different aliasing structures. (If you do not know what the term aliasing means, read Chapter 5 in Collins, 2018.) You can use software like PROC FACTEX in SAS to select a design. We offer a mini-tutorial about doing this in Collins et al. (2009).

You might want to consider a 2^{5-1} design. This design is attractive because it enables examination of 5 factors, but requires only 16 experimental conditions (a complete factorial design would require 32 conditions, so this is a half fraction). The design has the following important characteristics:

- each effect is aliased with ONE other effect (because the design is a half fraction); and
- it is Resolution V, which in this case means that each main effect is aliased with one four-way interaction, and each two-way interaction is aliased with one three-way interaction.

This aliasing seems pretty easy to justify. How many people would argue that four-way interactions are likely to be large and important? If some of the two-way interactions are scientifically important, make sure that they are aliased with three-ways that are likely to be negligible in size.
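To make the aliasing concrete, here is a minimal Python sketch (our illustration, not code from Collins et al., 2009) that builds the 16 runs of a 2^{5-1} design from the defining relation I = ABCDE and checks the Resolution V aliasing pattern directly:

```python
# Build the 16-run half fraction of a 2^5 design and verify its aliasing.
from itertools import product

# Start from the full 2^4 design in factors A-D, levels coded -1/+1,
# and generate the fifth factor from E = ABCD (so I = ABCDE).
runs = []
for a, b, c, d in product([-1, 1], repeat=4):
    e = a * b * c * d
    runs.append((a, b, c, d, e))

def col(effect):
    """Contrast column for an effect, evaluated on every run."""
    return [effect(r) for r in runs]

# Main effect A is aliased with the four-way interaction BCDE:
assert col(lambda r: r[0]) == col(lambda r: r[1] * r[2] * r[3] * r[4])
# The two-way interaction AB is aliased with the three-way CDE:
assert col(lambda r: r[0] * r[1]) == col(lambda r: r[2] * r[3] * r[4])
print("Resolution V aliasing confirmed across all 16 runs")
```

The same construction with a different generator yields a different aliasing structure, which is why the choice of design (e.g., via PROC FACTEX) should be driven by which effects you can afford to alias.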

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. *Psychological Methods, 14*, 202-224. PMCID: PMC2796056

## Convince biostatisticians who may be reviewers that a factorial or fractional factorial experiment is a good idea

Many biostatisticians are steeped in the RCT tradition, but they are usually quick to see that factorial experiments have advantages in some situations when the argument is presented to them. These articles are likely to appeal to “card-carrying” biostatisticians:

Chakraborty, B., Collins, L. M., Strecher, V., & Murphy, S. A. (2009). Developing multicomponent interventions using fractional factorial designs. *Statistics in Medicine, 28*, 2687-2708.

Nair, V., Strecher, V., Fagerlin, A., Ubel, P., Resnicow, K., Murphy, A., Little, R., Chakraborty, B., & Zhang, A. (2008). Screening experiments and the use of fractional factorial designs in behavioral intervention research. *American Journal of Public Health, 98,* 1354-1359.

## Convince reviewers that your factorial experiment has sufficient power, keeping in mind that some reviewers may believe that factorial experiments can never be sufficiently powered

Reviewers who are simply unfamiliar with factorial experiments often come into a review with an open mind. They are likely to be convinced by a clear argument about the appropriateness and economy of a factorial experiment. However, some reviewers ardently believe “facts” about factorial experiments that are wrong. In particular, some reviewers look at a 2^{5} factorial experiment and see a 32-condition RCT. Because it would be nearly impossible to power a 32-condition RCT, they may conclude that it is impossible to power a 2^{5} factorial experiment. Fortunately, as behavioral scientists re-learn the logic behind factorial experiments, there are likely to be fewer and fewer reviewers who draw this incorrect conclusion.
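The difference can be made numerical. The sketch below uses a standard two-group normal-approximation sample-size formula with a hypothetical standardized effect size of 0.30; the numbers are our illustration, not a calculation from any of the cited sources.

```python
# Contrast the N needed for a 2^5 factorial (where every main-effect test
# splits ALL subjects in half) with a 32-arm RCT (where each pairwise
# comparison relies only on the two arms involved). Effect size is hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-group comparison."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

d = 0.30                 # hypothetical standardized effect size
n = n_per_group(d)       # subjects needed on each side of one comparison

print(f"2^5 factorial: about {2 * n} subjects total "
      f"(each main effect compares two halves of the whole sample)")
print(f"32-arm RCT: about {32 * n} subjects "
      f"(each arm needs {n} subjects for a single pairwise comparison)")
```

Under these assumptions the factorial needs on the order of a few hundred subjects in total, while the 32-arm RCT needs that many *per arm*, which is the crux of the misunderstanding described above.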

If you can spare the room in the proposal (sometimes a big “if”), it may be a good idea to include a brief tutorial covering the following points:

- Explain that the factors are crossed, and include a table showing the experimental conditions in the design you are proposing to use (it is helpful to number the conditions). It may seem silly to include a table of experimental conditions, but many people are unsure about how to get from a list of factors to a set of experimental conditions. Such people will not be able to visualize your design without the help that a table will provide.
- Based on the table of experimental conditions, illustrate how to compute a main effect estimate (e.g., “The main effect of Resistance Training is the mean of Experimental Conditions 1, 3, 5, and 7 minus the mean of Experimental Conditions 2, 4, 6, and 8”).
- Point out that the purpose is NOT to compare the means of these conditions to each other directly, as it would be in an RCT. Instead, the experimental conditions will be combined in various ways to estimate main effects and interactions in the ANOVA. Importantly, because of the way conditions are combined, each main effect and interaction is based on the entire set of subjects. This is why factorial experiments make such economical use of subjects (you can cite Collins, Dziak, & Li, 2009 or Collins, 2018).
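The pooling described in the last point can be shown in a few lines of Python. This is a hypothetical 2^3 example with simulated data; the factor names echo the text, but every number is invented.

```python
# Illustrate that a main effect estimate pools ALL subjects, not two arms.
# Hypothetical 2^3 factorial (NORM, RESIST, FEEDBACK), simulated outcomes.
import random
from itertools import product

random.seed(1)

# The eight experimental conditions: levels of (norm, resist, feedback), coded 0/1.
conditions = list(product([1, 0], repeat=3))

# Simulated outcome for 40 subjects per condition (320 subjects total),
# with a built-in RESIST effect of about 5 points.
data = {c: [random.gauss(70 + 5 * c[1], 10) for _ in range(40)]
        for c in conditions}

# Main effect of RESIST: mean of everyone who got it minus everyone who did not.
on = [y for c, ys in data.items() if c[1] == 1 for y in ys]
off = [y for c, ys in data.items() if c[1] == 0 for y in ys]

effect = sum(on) / len(on) - sum(off) / len(off)
print(f"RESIST main effect uses {len(on) + len(off)} of 320 subjects: {effect:.1f}")
```

Every factor's main effect reuses the same 320 subjects, just split differently, which is exactly why the factorial makes such economical use of the sample.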

Here is an example paragraph you can use as a starting point. This is based on a 16-condition factorial experiment, either a complete 2^{4} factorial or a fractional factorial with more than four factors, and assumes you have included a table, Table X, showing all the experimental conditions in the design. Suppose two of the factors are NORM and RESIST, and it has been determined that there will be sufficient power with a total *N*=300:

**We ask the reader to note that the factorial design in Table X should not be considered a 16-arm trial in which each condition is compared in turn to a control condition.** Our interest is in tests of standard ANOVA main effects and interaction effects. These tests involve comparison of means computed across aggregates of experimental conditions. For example, the main effect of the NORM component will be tested by comparing the mean of the outcome variable for the 150 subjects who receive NORM (i.e., those in Conditions 1-8 in Table X) versus the mean of the outcome variable for the 150 subjects who do not receive NORM (i.e., those in Conditions 9-16). Similarly, the main effect of RESIST will be tested by comparing the mean of the outcome variable for the 150 subjects who receive RESIST (i.e., Conditions 2,3,5,8,9,12,14,15) versus the mean of the outcome variable for the 150 subjects who do not receive RESIST (i.e., Conditions 1,4,6,7,10,11,13,16). Analysis of data from a 16-arm RCT would require comparison of individual experimental conditions, and therefore would be grossly underpowered. **However, because we are conducting a multi-factorial experiment and not an RCT, each effect estimate will involve all 16 of the conditions in Table X, thereby maintaining the power associated with all 300 subjects.**

Our brief, informal webpage explains factorial experiments for those trained primarily in RCTs. For a VERY brief article on this topic, see Collins, Dziak, Kugler, and Trail (2014). A more extensive introduction can be found in Chapters 3 and 4 in Collins (2018).

Collins, L. M., Dziak, J. J., & Li, R. (2009). Design of experiments with multiple independent variables: A resource management perspective on complete and reduced factorial designs. *Psychological Methods, 14*, 202-224. PMCID: PMC2796056

Collins, L. M., Dziak, J. J., Kugler, K. C., & Trail, J. B. (2014). Factorial experiments: Efficient tools for evaluation of intervention components. *American Journal of Preventive Medicine, 47*, 498-504.

## Cite the evidence that implementing a complex optimization trial in a field setting can be manageable

With careful planning and organization, thorough training of staff, good management, and creative use of technology, it is possible to implement complex optimization trials, such as factorial experiments with many experimental conditions, in field settings successfully. It is a good idea to rely primarily on professional implementation staff rather than graduate students. Listed below are a few published articles that describe factorial experiments in varying stages of completion.

Bernstein, S.L., Dziura, J., Weiss, J., Miller, T., Vickerman, K.A., Grau, L.E., Pantalon, M.V., Abroms, L., Collins, L.M., & Toll, B. (2018). Tobacco dependence treatment in the emergency department: A randomized trial using the multiphase optimization strategy. *Contemporary Clinical Trials, 66*, 1-8. PMCID: PMC5851600

Celano, C.M., Albanese, A., Millstein, R.A., Mastromauro, C.A., Chung, W.-J., Legler, S., Park, E.R, Healy, B.C., Collins, L.M., Januzzi, J.L., & Huffman, J.C. (2018). Optimizing a positive psychology intervention to promote health behaviors following an acute coronary syndrome: The Positive Emotions after Acute Coronary Events-III (PEACE-III) randomized factorial trial. *Psychosomatic Medicine, 80*, 526-534. PMCID: PMC6023730 doi: 10.1097/PSY.0000000000000584

Cook, J. W., Collins, L. M., Fiore, M. C., Smith, S. S., Fraser, D., Bolt, D. M., et al. (2016). Comparative effectiveness of motivation phase intervention components for use with smokers unwilling to quit: A factorial screening experiment. *Addiction, 111*(1), 117-128. doi: 10.1111/add.13161

Gwadz, M.V., Collins, L.M., Cleland, C.M., Leonard, N.R., Wilton, L., Gandhi, M., Braithwaite, R.S., Perlman, D.C., Kutnick, A., & Ritchie, A.S. (2017). Using the multiphase optimization strategy (MOST) to optimize an HIV care continuum intervention for vulnerable populations: A study protocol. *BMC Public Health, 17*, 383. PMCID: PMC5418718

Phillips, S.M., Cottrell, A., Lloyd, G.R., Penedo, F.J., Collins, L.M., Cella, D., Courneya, K.S., Ackermann, R.T., Siddique, J., & Spring, B. (2018). Optimization of a technology-supported physical activity intervention for breast cancer survivors: *Fit2Thrive* study protocol. *Contemporary Clinical Trials, 66*, 9-19. PMCID: PMC5828903

Schlam, T.R., Fiore, M.C., Smith, S.S., Fraser, S., Bolt, D.M., Collins, L.M., Mermelstein, R., Piper, M.E., Cook, J.W., Jorenby, D.E., Loh, W.-Y., & Baker, T.B. (2016). Comparative effectiveness of intervention components for producing long-term abstinence from smoking: A factorial screening experiment. *Addiction, 111*, 142-155. PMCID: PMC4692280 doi: 10.1111/add.13153

Strecher, V. J., McClure, J. B., Alexander, G. W., Chakraborty, B., Nair, V. N., Konkel, J. M., Greene, S. M., Collins, L. M., Carlier, C. C., Wiese, C. J., Little, R. J., Pomerleau, C. S., & Pomerleau, O. F. (2008). Web-based smoking cessation programs: Results of a randomized trial. *American Journal of Preventive Medicine, 34*, 373-381.

Last updated: May 11, 2020