Power On and Go Robots

Full-Day (Virtual) Workshop | Robotics: Science and Systems | July 13, 2020


The organizers are delighted to also invite interested workshop participants and attendees to submit contributions to a special issue of Autonomous Robots.

Call for Papers | Special Issue on Power-On-and-Go Autonomy: Right, Out of the Box

Submission Deadline: February 15th, 2021


This special issue aims to present state-of-the-art research related to power-on-and-go robots: robotic systems that are able to deal quickly with new situations and to adapt immediately to new environments, or to changes in their own operating parameters, with limited input data. The ability to operate correctly from the moment the switch is flipped may mean the difference between successful task completion and catastrophic failure.

How far away are modern robotic systems from Disney/Pixar's WALL-E?

RSS'20 | July 13 | VIRTUAL Workshop

See our PheedLoop page for information on how to view the event.

'Out-of-the-Box' Systems for Real-World Applications

The new service robot you ordered has just arrived! You take it out of the box, flip the “on” switch and... and... it sits immobile, the animated face on its screen looking confused. Frustrated, after an hour of trying to coax it to do something, you call the company and ask for a refund.

Workshop Playlist

A single YouTube playlist with all of our invited talks and the panel discussion. Please see below for individual talks.


Substantial advances have been made over the past two decades in the area of mobile robot autonomy, in part due to the development of sophisticated methods to fuse data from multiple information sources. However, these gains come with the caveat that proper system initialization and calibration are essential. Starting with, or quickly discovering, the “right” initial conditions for the selected estimation, planning, and control algorithms is a crucial but largely overlooked problem that has not yet been fully tackled by the community; instead, it is often regarded as a post-hoc ‘engineering’ issue rather than a key safety concern. In a future where robots actively operate alongside people in human environments, businesses and consumers will demand that the machines work correctly the first time, every time, anywhere, with minimal external (human) intervention.

The workshop will bring together researchers from diverse backgrounds to address topics related to power-on-and-go robots: robotic systems that are able to deal fluidly with new situations and to adapt immediately to new environments or to changes in their own operating parameters.

Topics of Interest

The workshop will focus on a wide range of topics, including:

  • self-initialization, self-calibration, and self-healing systems,

  • time-constrained reasoning and learning, rapid environment assessment,

  • reliable and consistent state estimation from limited data,

  • few-shot and single-shot online learning,

  • integrity verification and assurance,

  • formal and probabilistic methods with safety guarantees,

  • cloud robotics solutions, and

  • data-sharing in multi-robot systems.

Schedule at a Glance

Monday, July 13th, 2020

RSS POGO Workshop Schedule


Stefan is a Senior Lecturer (US equivalent: Associate Professor) in Robotics in the Department of Computing at Imperial College London, where he leads the Smart Robotics Lab and co-directs the Dyson Robotics Lab. He also co-founded SLAMcore, a spin-out company aiming at the commercialisation of localisation and mapping solutions for robots and drones. Stefan received a BSc and an MSc in Mechanical Engineering with a focus on Robotics and Aerospace Engineering from ETH Zurich, as well as a PhD on “Unmanned solar airplanes: design and algorithms for efficient and robust autonomous operation”, completed in 2014.

Talk #1: Stefan Leutenegger

08:10 - 08:35 (AM) PDT

Robustness in Mobile Robot Perception and Action

Despite huge advances in Spatial AI (localisation, dense mapping, and scene understanding), fuelled by the advent of Deep Learning and powerful processors, robots still have a robustness problem: real-world applicability is limited to restricted tasks and restricted environments. Different paradigms have emerged as to how much the perception-action cycle of a mobile robot should remain hand-engineered and modular or, at the other extreme, be learned end-to-end with rather black-box models, e.g. using Deep Reinforcement Learning from pixels to torques. In my talk, I will go through a couple of examples that sit in the middle: they leverage Deep Learning for sub-tasks in an otherwise modular and more classic approach. We explicitly estimate robot states, e.g. position and orientation, as well as the environment, reconstructed to geometric accuracy and decomposed into semantically meaningful entities, such as 3D objects that may even move. Importantly, the spatial representations need to be chosen for task-specific, robust robotic interaction with the environment. In this context, I will present some works on drone navigation and control, with an emphasis on accuracy, robustness, and failure identification and mitigation, mostly in the application area of aerial inspection and manipulation.

Nathan Michael is an Associate Research Professor in the Robotics Institute, Carnegie Mellon University. Prof. Michael is: director of the Resilient Intelligent Systems Lab; advisor to 28 past and present PhD and MS students; author of over 160 publications on control, perception, and cognition for resilient intelligent single and multi-robot systems; nominee or recipient of 10 best paper/student paper awards (AIAA, ICRA, RSS, DARS, CASE, SSRR); PI of ongoing and past research programs supported by ARL, AFRL, DARPA, DOE, DTRA, NASA, NSF, ONR, and industry; and Chief Technology Officer of Shield AI.


Talk #2: Nathan Michael

08:40 - 09:05 (AM) PDT

Mitigating Unknown Unknowns: Challenges in developing and deploying fully autonomous aerial robots operating in extreme conditions at a global scale

This talk will highlight observations and insights acquired during the development, productization, and deployment of fully autonomous robots fielded as products at scale in extreme, diverse, and challenging operating conditions across the planet. The talk will span research conducted within the Resilient Intelligent Systems Lab (Robotics Institute, CMU) through to the productization and deployment of fully autonomous aerial robots that are actively employed in diverse and challenging settings, where systems regularly and necessarily operate in conditions that are not readily anticipated.

Arne Sieverling is a Senior Robotics Scientist at Realtime Robotics, a Boston-based startup bringing hardware-accelerated motion planning to industrial robotic applications. His work at Realtime focuses on sampling-based planning, real-time trajectory generation, and the system architecture needed to operate a wide array of industrial manipulators in a unified way. Arne obtained his PhD from the Robotics and Biology Lab at the Technical University of Berlin, supervised by Oliver Brock. During his PhD, he worked on planning and control algorithms for mobile manipulators using visual and tactile sensing. In 2015, he was part of the team that won the first Amazon Picking Challenge with a mobile manipulator and a vacuum cleaner.

Talk #3: Arne Sieverling

09:20 - 09:45 (AM) PDT

Real-time Motion Planning for the Masses

Traditional industrial automation is far from Power Up and Go. Setting up and maintaining robot work cells is a labour-intensive process that requires specialists to carefully orchestrate every motion. Robots should be smarter than that: If robots could perceive their surroundings and adapt their plans instantaneously, hours of integration time would be saved, and safe collaboration with humans would be possible. A fundamental requirement to enable these skills is the ability to plan motions almost latency-free. Realtime Robotics’ solution is a combination of precomputation and dedicated computing hardware that enables finding collision-free motions in milliseconds.
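The precomputation idea sketched above can be illustrated with a toy example (a hypothetical sketch in Python, not Realtime Robotics' actual system or API): the swept volume of every edge in a fixed motion graph is voxelised offline, so the runtime collision check per edge reduces to a set intersection, and planning becomes a graph search over the surviving edges.

```python
# Toy sketch of precomputed collision checking (hypothetical, illustrative only):
# voxelise each motion-graph edge's swept volume offline, then at runtime drop
# edges that intersect currently occupied voxels and search what remains.

from collections import defaultdict, deque

# Offline: map each motion-graph edge to the voxels its motion sweeps through.
# (In a real system this would come from geometric simulation of the robot.)
swept_voxels = {
    ("A", "B"): {(0, 0), (1, 0)},
    ("B", "C"): {(1, 1)},
    ("A", "C"): {(2, 2), (2, 3)},
}

def plan(start, goal, occupied):
    """BFS over edges whose precomputed swept volume is collision-free."""
    graph = defaultdict(list)
    for (u, v), voxels in swept_voxels.items():
        if not voxels & occupied:          # cheap set-intersection test per edge
            graph[u].append(v)
            graph[v].append(u)
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:                   # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None                            # goal unreachable after pruning

# An obstacle at voxel (1, 1) blocks edge B-C; the direct A-C edge survives.
print(plan("A", "C", occupied={(1, 1)}))   # ['A', 'C']
```

The design point this toy makes is that all expensive geometry happens offline; the online cost per perceived obstacle update is only set intersections and a discrete search.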

In this talk, I will share my experience (as a researcher) in getting hardware-accelerated real-time motion planning onto factory floors, and the unexpected challenges and opportunities of industrial automation. I will discuss real-world challenges for modelling, calibration, certification, and the integration of perception and planning into robot workcells.

Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. He is a recipient of the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR’16, the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and the Best Paper Award in Robotic Vision at ICRA’20, and he was a Best Paper finalist at RSS’15.


Talk #4: Luca Carlone

09:50 - 10:15 (AM) PDT

Towards Certifiably Robust Spatial Perception

Spatial perception is concerned with the estimation of a world model (describing the state of the robot and the environment) using sensor data and prior knowledge. As such, it includes a broad set of robotics and computer vision problems, ranging from object detection and pose estimation to robot localization and mapping. Most perception algorithms require extensive, application-dependent parameter tuning and often fail in off-nominal conditions (e.g., in the presence of large noise, outliers, and incorrect data association). In this talk, I present recent advances in the design of certifiably robust spatial perception algorithms that tolerate extreme amounts of outliers and afford performance guarantees. I show that these algorithms achieve unprecedented robustness in a variety of applications, ranging from mesh registration and image-based object localization to SLAM.
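The outlier sensitivity described above can be seen in a toy scalar example (a hedged sketch only, not the certifiable solvers discussed in the talk): a plain least-squares estimate is ruined by a single gross outlier, while a truncated-least-squares cost, minimised here by brute force, rejects it.

```python
# Toy illustration of outlier-robust estimation (illustrative only): estimate a
# scalar from measurements containing one gross outlier. Least squares (the
# mean) is corrupted; a truncated-least-squares estimate is not, because
# residuals beyond a threshold c contribute only a constant to the cost.

def least_squares(measurements):
    """Minimise sum(r_i^2): the sample mean. Sensitive to outliers."""
    return sum(measurements) / len(measurements)

def truncated_least_squares(measurements, c):
    """Minimise sum(min(r_i^2, c^2)) by brute force: try the mean of the
    inlier set consistent with each datum (fine for tiny toy problems)."""
    best_x, best_cost = None, float("inf")
    for m in measurements:
        inliers = [v for v in measurements if abs(v - m) <= c]
        x = sum(inliers) / len(inliers)      # candidate estimate
        cost = sum(min((v - x) ** 2, c ** 2) for v in measurements)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

data = [9.9, 10.1, 10.0, 10.2, 500.0]        # one gross outlier
print(least_squares(data))                   # ~108: ruined by the outlier
print(truncated_least_squares(data, c=1.0))  # ~10.05: outlier rejected
```

The brute-force search stands in for the tractable certifiable relaxations mentioned in the abstract; the point is only how the truncated loss decouples the estimate from arbitrarily bad measurements.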


Each workshop contribution below will be presented as a 3-minute pre-recorded presentation during the workshop.

The Transformation from Manual to Out-of-Box Industrialised Autonomous Material Transport (paper, video)

M. Bakr, T. Krüger-Basjmeleh, A. Wendt, D. Schüthe, B. Abel, P. Erbts and B. Hei

Observability-Aware Trajectories for Geometric and Inertial Self-Calibration (paper, video)

Christoph Böhm, Guanrui Li, Giuseppe Loianno, and Stephan Weiss

High Precision Real Time Collision Detection (paper, video)

Alexandre Coulombe and Hsiu-Chin Lin

Power-On-and-Go Capabilities for a Low-Cost Modular Autonomous Underwater Vehicle (paper, video)

Chelsey Edge, Sadman Sakib Enan, Michael Fulton, Jungseok Hong, and Junaed Sattar

Multimodal Data Fusion for Power-On-and-Go Robotic Systems in Retail (paper, video)

Shubham Sonawani, Kailas Maneparambil, and Heni Ben Amor

Lessons learned from catastrophic scenarios uncover open research questions in robotics (slides, video)

Werner Kraus

Ali-akbar Agha-mohammadi (Ali Agha) is a Robotics Research Technologist at NASA’s Jet Propulsion Laboratory (JPL), Caltech, where he leads several projects on robotic autonomy, with a dual focus on Mars exploration and terrestrial applications. Dr. Agha leads Team CoSTAR and the development of the NeBula autonomy architecture, which won the Urban Circuit of the DARPA Subterranean Challenge in February 2020. Previously, he was with Qualcomm Research, leading the perception efforts for self-flying drones and cars. Prior to that, Dr. Agha was a Postdoctoral Researcher at MIT. His research interests include AI, autonomous decision making, and perception for robotic systems, with applications to drones, rovers, and legged robots. Dr. Agha was selected as a NASA NIAC fellow in 2018.

Talk #5: Ali Agha

10:50 - 11:15 (AM) PDT

Resilient and consistent robotic autonomy in unknown environments with extreme conditions

Consistency and robustness under extreme conditions are prerequisites for autonomous robotic operations in many application domains, ranging from space exploration to search and rescue to natural disaster response missions. Extreme conditions include mobility-stressing terrains, perceptually degraded settings, and communication-denied environments, to name a few. In this presentation, we will discuss some of the challenges and opportunities in addressing the problem of robotic autonomy under such extreme conditions.

We discuss the DARPA Subterranean Challenge as a representative mission that pushes the boundaries of robotic autonomy under extreme conditions. We go over Team CoSTAR’s solution, called NeBula, which won the second phase of this challenge. We discuss NeBula’s algorithmic perspective on enabling robustness in robotic operations, which aims at formulating and solving the problem in the “joint space” of traditional autonomy modules, including 1) traversability, 2) state estimation, 3) SLAM and semantic understanding, 4) task allocation, and 5) communication.

Dorsa Sadigh is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing efficient algorithms for safe, reliable, and adaptive human-robot interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017 and her bachelor’s degree in EECS from UC Berkeley in 2012. She has been awarded the NSF CAREER Award, the Google Faculty Award, and the Amazon Faculty Research Award.

Talk #6: Dorsa Sadigh

11:20 - 11:55 (AM) PDT

To Ignore Humans or to Accept them with Open Arms: Challenges and Opportunities for Efficient, Robust, and Adaptive POGO Robots

The field of robotics and autonomous driving has made many advances over the past few decades, but the question of how we should treat humans (human designers, human operators, human users, or human observers) still remains: should we assume humans don’t exist in the future of autonomy? Should we assume humans exist but are rational enough to stay away from our robots? Or should we accept them with open arms?

In the past, the general consensus has been to “ignore” humans and work on the “hardcore” robotics problems, i.e., full autonomy. However, avoiding humans in our formalism, software, or hardware design can lead to a number of inevitable roadblocks such as lack of robustness or safety guarantees when robots need to interact in non-stationary environments with humans present.

In this talk, we will discuss how we can start conceding that humans exist and that they are not necessarily fully rational. Specifically, we will go over some of the challenges and opportunities arising from the presence of humans: challenges such as planning for robots that interact with humans in high-risk situations, and opportunities such as access to the diverse data that can be collected from interacting with humans. We end by discussing fast adaptation of robots in the presence of adapting human agents.

Gaurav S. Sukhatme is a Professor of Computer Science (joint appointment in Electrical Engineering) at the University of Southern California (USC). He received his undergraduate education at IIT Bombay in Computer Science and Engineering, and M.S. and Ph.D. degrees in Computer Science from USC. He is the co-director of the USC Robotics Research Laboratory and the director of the USC Robotic Embedded Systems Laboratory which he founded in 2000. His research interests are in multi-robot systems and robot networks with a particular focus on aquatic robots.


Talk #7: Gaurav Sukhatme

12:05 - 12:30 PDT

POGO robots in the wild: A historical perspective and future outlook

There are very few types of POGO robots in the wild. Why is this? As a civilization, we're pretty good at building POGO systems - and getting better - so what will it take to have more (assuming we want more) POGO robots in the wild? This talk will sketch a history of engineered POGO systems. We'll trace the evolution of how such systems came to be, and what they do (and don't do). We'll give some reasons why we believe building POGO robots is different from building other POGO systems. And finally, we'll make some predictions about the future of POGO robots.

PANEL DISCUSSION (12:35 - 13:15 PDT)


NASA JPL / Caltech

University of Southern California

Imperial College London

Realtime Robotics


Stephan Weiss


Paolo Robuffo Giordano


Valentin Peretroukhin

University of Toronto / MIT