Veltri, Giuseppe Alessandro. "Time-sensitive RCTs in behavioral public policy: a pragmatic framework using sequence methods, personalization, and reinforcement learning." Frontiers in Behavioral Economics, ISSN 2813-5296, vol. 5 (2026), pp. 1-13. doi: 10.3389/frbhe.2026.1684887
Time-sensitive RCTs in behavioral public policy: a pragmatic framework using sequence methods, personalization, and reinforcement learning
Veltri, Giuseppe Alessandro (first author)
Published: 2026-01-01
Abstract
This article presents a pragmatic framework for time-sensitive analysis of behavioral RCTs using sequence methods and Markov modeling. The focus is not methodological novelty but translation: we map common policy questions to appropriate temporal tools, provide a reporting checklist for transparency, and show how estimates become implementable rules for booster timing, triage, and exit. We position sequence analysis alongside multi-state hazards, HMMs, SMART/MRT designs, and g-methods, and we introduce an openly documented R package, sequenceRCT, that operationalises the end-to-end workflow with uncertainty quantification and reproducible outputs. A simulated illustration demonstrates interpretation and decision use, with ablations and a counterfactual booster vignette. We extend the framework to personalized interventions, where state-specific individual treatment effects are difficult to detect, and to reinforcement learning, where sequence-derived state spaces, empirical kernels, and off-policy evaluation support safe policy learning. We conclude with a staged validation agenda on existing datasets.
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| frbhe-5-1684887.pdf | Open access | Publisher's version (publisher's layout) | Creative Commons | 880.65 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.