Robust Intervention Learning from Emergency Stop Interventions
Abstract
Human interventions are a common source of data in autonomous systems during testing. These interventions provide an important signal about where the current policy needs improvement, but are often noisy and incomplete. We define Robust Intervention Learning (RIL) as the problem of learning from intervention data while remaining robust to the quality and informativeness of the intervention signal. In the best case, interventions are precise and avoiding them is sufficient to solve the task, but in many realistic settings avoiding interventions is necessary but not sufficient for achieving good performance. We study robust intervention learning in the context of emergency stop interventions and propose Residual Intervention Fine-Tuning (RIFT), a residual fine-tuning algorithm that treats intervention feedback as an incomplete learning signal and explicitly combines it with a prior policy. By framing intervention learning as a fine-tuning problem, our approach leverages structure encoded in the prior policy to resolve ambiguity when intervention signals under-specify the task. We provide theoretical analysis characterizing conditions under which this formulation yields principled policy improvement, and identify regimes where intervention learning is expected to fail. Our experiments reveal that residual fine-tuning enables robust and consistent policy improvement across a range of intervention strategies and prior policy qualities, and highlight robust intervention learning as a promising direction for future work.
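To make the residual formulation concrete, below is a minimal sketch of what residual intervention fine-tuning could look like in PyTorch. It assumes a frozen pretrained prior policy plus a small learned residual whose output is added to the prior's action; e-stopped transitions repel the policy (up to a margin) while non-intervened transitions are imitated, and a regularizer keeps the residual small so the prior resolves what the intervention signal under-specifies. The network shapes, the `rift_loss` form, and the hinge-style repulsion are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of residual fine-tuning from emergency-stop interventions.
# All names and loss terms here are illustrative assumptions, not the
# paper's implementation of RIFT.
import torch
import torch.nn as nn


class MLPPolicy(nn.Module):
    """Small deterministic policy network over continuous actions."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def residual_action(prior, residual, obs):
    """Combined action: frozen prior plus a learned residual correction."""
    with torch.no_grad():
        base = prior(obs)  # prior stays fixed; only the residual trains
    return base + residual(obs)


def rift_loss(prior, residual, obs, actions, estop, beta=1e-2, margin=1.0):
    """Imitate non-intervened actions, repel e-stopped ones, stay near prior."""
    pred = residual_action(prior, residual, obs)
    sq_err = ((pred - actions) ** 2).sum(dim=-1)
    # Behavioral cloning only on transitions that drew no intervention.
    imitate = torch.where(estop, torch.zeros_like(sq_err), sq_err)
    # On e-stopped transitions, push at least `margin` away from the
    # stopped action (hinge keeps this term bounded).
    repel = torch.where(estop, torch.relu(margin - sq_err),
                        torch.zeros_like(sq_err))
    # Penalize residual magnitude so the prior's structure is preserved.
    reg = (residual(obs) ** 2).sum(dim=-1)
    return (imitate + repel + beta * reg).mean()


if __name__ == "__main__":
    obs_dim, act_dim = 8, 2
    prior = MLPPolicy(obs_dim, act_dim)     # stands in for a pretrained prior
    residual = MLPPolicy(obs_dim, act_dim)  # small residual to fine-tune
    opt = torch.optim.Adam(residual.parameters(), lr=1e-3)

    # Toy batch: observations, executed actions, and per-step e-stop flags.
    obs = torch.randn(32, obs_dim)
    actions = torch.randn(32, act_dim)
    estop = torch.rand(32) < 0.2

    loss = rift_loss(prior, residual, obs, actions, estop)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"loss: {loss.item():.4f}")
```

The key design choice this sketch illustrates is that the intervention signal only shapes the residual term: when interventions under-specify the task, the regularizer pulls the combined policy back toward the prior rather than toward an arbitrary intervention-avoiding behavior.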