How does reflection change what we learn from image data? How should we think about symmetry in distributions of visual data? These and related questions are given renewed attention by Cornell researchers Abe Davis, Noah Snavely, Jin Sun, and Zhiqiu Lin.
In a paper on the subject of “Visual Chirality,” they write:
How can we tell whether an image has been mirrored? While we understand the geometry of mirror reflections very well, less has been said about how it affects distributions of imagery at scale, despite widespread use for data augmentation in computer vision. In this paper, we investigate how the statistics of visual data are changed by reflection. We refer to these changes as “visual chirality,” after the concept of geometric chirality—the notion of objects that are distinct from their mirror image. Our analysis of visual chirality reveals surprising results, including low-level chiral signals pervading imagery stemming from image processing in cameras, to the ability to discover visual chirality in images of people and faces. Our work has implications for data augmentation, self-supervised learning, and image forensics.
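The paper's central question, whether a learned model can tell mirrored images from originals, reduces to a simple self-supervised task: flip each training image with probability 0.5 and train a classifier to predict whether the flip occurred. Below is a minimal sketch of that idea, assuming PyTorch and torchvision; the `photos/` directory, the `MirrorPairs` wrapper, the ResNet-18 backbone, and the hyperparameters are illustrative placeholders, not the authors' exact configuration.

```python
# Sketch: self-supervised "was this image mirrored?" classification.
# Assumes PyTorch + torchvision; paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader, Dataset

class MirrorPairs(Dataset):
    """Wraps an image dataset; each item is mirrored with probability 0.5,
    and the label records whether the mirror was applied."""
    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, _ = self.base[idx]                 # discard the original class label
        flipped = torch.rand(1).item() < 0.5
        if flipped:
            img = TF.hflip(img)                 # left-right reflection
        return img, torch.tensor(float(flipped))

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
base = datasets.ImageFolder("photos/", transform=transform)  # placeholder path
loader = DataLoader(MirrorPairs(base), batch_size=32, shuffle=True)

# A small backbone for brevity; a single output logit: mirrored or not.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for imgs, labels in loader:                     # one epoch of training
    opt.zero_grad()
    logits = model(imgs).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()
```

If such a classifier performs above chance, the data distribution is visually chiral; the paper's analysis then digs into where that signal comes from, including low-level cues introduced by in-camera processing as well as higher-level content such as faces.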
As Melanie Lefkowitz writes in her Cornell Chronicle article on this research, Davis et al. have “used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards—findings with implications for training machine learning models and detecting faked images.”
For a quick overview of the project, along with some immediate implications of the research, watch this video, then continue with the Chronicle report; the news also appeared in SciTechDaily.