Although recent decades have witnessed significant success in deploying robots and autonomous
systems in laboratory settings, manufacturing plants, transportation, and home applications,
these systems still lack the intelligence and robustness to operate reliably in unstructured environments
and under adverse conditions. When humans perceive and navigate complex environments, we
identify various places and objects, i.e., semantics, and, drawing on prior knowledge and
experience, infer their properties, physics, and relationships. However, it is challenging
to enable robots and autonomous systems to have such a high-level understanding of their
surroundings, given only noisy sensor measurements and limited onboard computing resources. In
this talk, I will focus on a semantic perception system I developed for field robots that uses
semantics as a pivot to achieve high-level scene understanding and reliable state estimation for
planning and control. In addition to bringing human-level semantics to robot perception in a 3D
continuous semantic map representation for task planning, the system also reasons about
robot-specific properties of the environment to support more sophisticated behavior planning.
Effective learning methods for acquiring semantics under adverse sensing conditions will also be
discussed. Future research aims to extend current semantic perception to dynamics-aware robot
perception and to deepen the understanding of environmental dynamics for autonomous navigation.
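
To make the idea of a probabilistic semantic map concrete, below is a minimal Python sketch of one common way such a map can be realized: a voxel grid whose cells carry a categorical distribution over semantic classes, fused from noisy per-observation class scores by recursive Bayesian updates. This is an illustration only, not the system described in the talk; the talk's representation is continuous, whereas this sketch discretizes space, and every name and parameter here is an assumption.

import numpy as np

# Illustrative sketch only (assumed names and parameters, not the speaker's code).
# Each voxel stores log class probabilities; noisy semantic observations
# (e.g., softmax scores from a segmentation network projected onto lidar
# points) are fused with a recursive Bayesian update in log space.
class SemanticVoxelMap:
    def __init__(self, num_classes, resolution=0.1):
        self.num_classes = num_classes
        self.resolution = resolution  # voxel edge length in meters
        self.log_probs = {}           # voxel index -> log class probabilities

    def _key(self, point):
        # Discretize a 3D point (x, y, z) into an integer voxel index.
        return tuple(np.floor(np.asarray(point) / self.resolution).astype(int))

    def update(self, point, class_scores):
        # Fuse one noisy class-likelihood vector into the voxel's posterior.
        key = self._key(point)
        prior = self.log_probs.get(key, np.zeros(self.num_classes))  # uniform prior
        posterior = prior + np.log(np.asarray(class_scores) + 1e-9)
        posterior -= np.logaddexp.reduce(posterior)  # renormalize in log space
        self.log_probs[key] = posterior

    def query(self, point):
        # Most likely semantic class at a 3D location, or None if unobserved.
        key = self._key(point)
        if key not in self.log_probs:
            return None
        return int(np.argmax(self.log_probs[key]))

# Fusing two noisy observations that fall in the same voxel:
m = SemanticVoxelMap(num_classes=3)
m.update((1.03, 2.02, 0.55), [0.6, 0.3, 0.1])
m.update((1.05, 2.01, 0.57), [0.7, 0.2, 0.1])
print(m.query((1.03, 2.02, 0.55)))  # -> 0

Repeated updates concentrate each voxel's class distribution, which is one reason probabilistic semantic maps of this general kind degrade gracefully under the noisy sensing conditions the abstract emphasizes.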