This paper was accepted to the 5th Robot Learning Workshop: Trustworthy Robotics at NeurIPS 2022. The workshop was held virtually, so I presented the work in an online poster session.


Download:


Abstract:

Reinforcement learning (RL) is increasingly being used to control robotic systems that interact closely with humans. This interaction raises the problem of safe RL: how to ensure that an RL-controlled robotic system never, for instance, injures a human. The problem is especially challenging in rich, realistic settings where it is not even possible to clearly write down a reward function that captures these catastrophic outcomes. In these circumstances, perhaps the only viable approach is one based on inverse reinforcement learning (IRL), which infers rewards from human demonstrations. However, IRL is massively underdetermined, as many different rewards can lead to the same optimal policies; we show that this makes it difficult to distinguish catastrophic outcomes (such as injuring a human) from merely undesirable ones. Our key insight is that humans do display different behaviour when catastrophic outcomes are possible: they become much more careful. We incorporate carefulness signals into IRL and find that they do indeed allow IRL to disambiguate undesirable from catastrophic outcomes, which is critical to ensuring safety in future real-world human-robot interactions.
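
To make the underdetermination point concrete, here is a small, self-contained sketch (not taken from the paper; the toy MDP, reward values, and all names are hypothetical choices for illustration). It shows that a mildly undesirable and a truly catastrophic reward for the same state can induce exactly the same optimal policy, so demonstrations alone cannot tell the two apart:

    import numpy as np

    # Hypothetical 3-state chain: start (0), bad state (1), goal (2).
    # Action 0 takes a detour straight to the goal; action 1 cuts through the bad state.
    n_states, n_actions, gamma = 3, 2, 0.9
    P = np.zeros((n_states, n_actions, n_states))
    P[0, 0, 2] = 1.0   # detour: start -> goal
    P[0, 1, 1] = 1.0   # shortcut: start -> bad state
    P[1, :, 2] = 1.0   # bad state -> goal
    P[2, :, 2] = 1.0   # goal is absorbing

    def greedy_policy(r):
        """Plain value iteration on state rewards r; returns the greedy action per state."""
        V = np.zeros(n_states)
        for _ in range(500):
            Q = r[:, None] + gamma * (P @ V)   # shape (states, actions)
            V = Q.max(axis=1)
        return Q.argmax(axis=1)

    r_undesirable  = np.array([0.0, -1.0, 1.0])     # bad state is merely unpleasant
    r_catastrophic = np.array([0.0, -1000.0, 1.0])  # bad state is catastrophic

    # Both reward functions make the optimal agent avoid state 1, so the
    # demonstrated behaviour (all that IRL gets to see) is identical.
    print(greedy_policy(r_undesirable))    # [0 0 0]
    print(greedy_policy(r_catastrophic))   # [0 0 0]

Both calls print the same greedy policy, which is exactly the ambiguity that the carefulness signal in the paper is meant to resolve.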


Citation:

Hanslope, Jack R. P., and Aitchison, Laurence. 2022. “Imitating Careful Experts to Avoid Catastrophic Events.” NeurIPS 2022 Robot Learning Workshop: Trustworthy Robotics. http://www.robot-learning.ml/2022/