Navigation by autonomous vehicles or other forms of unmanned autonomous systems is a rapidly developing area within robotics. Advances in technology and manufacturing make it possible to deploy robots spanning a couple of orders of magnitude in size and available on-board computation. Our lab is interested in identifying a common vision-based navigation framework that can scale across the diversity of autonomous platforms envisioned by roboticists. Central to this vision is a means to minimally process visual information while maximally extracting task-relevant information. We propose to employ a perception space representation, which aligns with Marr's 2.5D sketch, and to integrate it with best-practice solutions in the perceive-plan-act robotics pipeline. Furthermore, we explore how learning-based strategies can provide constant-time outputs compatible with this pipeline. Achieving both objectives brings us closer to the goal of realizing computationally scalable visual navigation.
Patricio A. Vela is an associate professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Vela's research focuses on geometric perspectives on control theory and computer vision, particularly how concepts from control and dynamical systems theory can improve computer vision algorithms used in the decision loop. More recent efforts expanding his research program involve studying the role of machine learning in adaptive control and autonomous robotics, and investigating how modern advances in adaptive and optimal control theory may improve locomotion effectiveness for biologically inspired robots. These efforts support a broad program to understand the research challenges associated with autonomous robotic operation in uncertain environments. Dr. Vela received a B.S. and a Ph.D. from the California Institute of Technology.