While machine learning has driven remarkable advances in computer vision, perception systems for autonomous driving remain constrained by the high cost and limited availability of 3D labeled data. This limitation creates barriers to widespread deployment across diverse environments. My work focuses on leveraging auxiliary information sources to improve perception systems in a label-efficient manner. In this talk, I will present novel approaches that address these challenges by exploiting two such additional channels of information. First, I demonstrate how repeated traversals of the same routes enable object discovery without manual labels, achieving performance that surpasses out-of-domain models and is competitive with in-domain supervised approaches. Second, I show how readily available navigation maps can enhance lane topology prediction, providing a cost-effective alternative to expensive manual labeling. Together, these approaches illustrate how alternative information channels can significantly advance autonomous driving perception while reducing dependence on traditional labeled datasets. I will conclude with future steps toward perception systems that, inspired by human cognition, seamlessly integrate multiple complementary channels of information for a more complete and robust understanding of the world.