Physics-informed machine learning for computational imaging (virtual talk via Zoom).
Abstract: By co-designing optics and algorithms, computational cameras can do more than regular cameras: they can see in the extreme dark, measure 3D structure, be extremely compact, record different wavelengths of light, or capture the phase of light. These computational imagers are powered by algorithms that recover the signal from encoded or noisy measurements. Classically, reconstruction has been posed as minimizing an optimization problem consisting of a data-fidelity term and a hand-picked prior. More recently, deep learning has been applied to these problems, but it often has no way to incorporate known optical characteristics, requires large training datasets, and results in black-box models that cannot easily be interpreted. In this talk, I will introduce physics-informed machine learning for computational imaging, a middle-ground approach that combines elements of classic methods with deep learning. I will demonstrate this approach on two real computational cameras: a tiny, cheap lensless camera and a high-end low-light camera for nighttime videography. In each case, incorporating knowledge of the imaging system's physics into neural networks improves image quality and performance beyond what is feasible with either classic or deep methods alone. For lensless imaging, physics-informed machine learning speeds up reconstruction by an order of magnitude and improves perceptual image quality. For nighttime videography, we learn a physics-informed noise generator that realistically synthesizes noise at extremely high-gain, low-light settings. Using this learned noise model, we can take videos of moving objects on a clear, moonless night with no external illumination (sub-millilux) for the first time, pushing the limit of what cameras can see in the extreme dark by an order of magnitude.
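To make the "middle ground" concrete, here is a minimal, hypothetical sketch (not code from the talk) of the classic formulation the abstract contrasts with deep learning: reconstruction by minimizing a data-fidelity term plus a hand-picked sparsity prior, solved with ISTA. The forward model `A`, the weight `lam`, and the synthetic data are illustrative assumptions; a comment marks where a physics-informed network would replace the hand-picked prior.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the hand-picked sparsity prior)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_reconstruct(A, y, lam=0.1, n_iters=200):
    """Classic reconstruction: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1.

    A : (m, n) forward model encoding the camera's physics (e.g. a lensless
        camera's point spread function, flattened to a matrix).
    y : (m,) encoded / noisy measurement.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
        # A physics-informed network keeps this loop structure but "unrolls" a
        # fixed number of iterations and replaces the hand-picked prox above
        # with a small learned denoiser, training step sizes and denoiser
        # end to end while the physics in A stays explicit.
    return x

# Tiny synthetic usage example (illustrative data, not from the talk)
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista_reconstruct(A, y)
print("recovered support:", sorted(np.argsort(-np.abs(x_hat))[:8]))
```

The nighttime-videography result rests on a learned noise generator. As a rough physics-only baseline (an assumption, not the talk's model), a Poisson-Gaussian model captures the shot and read noise that such a generator starts from; the learned model additionally synthesizes high-gain artifacts (banding, quantization, etc.) that this simple model misses.

```python
import numpy as np

def synthesize_low_light(clean, gain=16.0, photons_per_unit=5.0,
                         read_sigma=2.0, rng=None):
    """Poisson-Gaussian low-light noise synthesis (physics-only baseline).

    `clean` is an image in [0, 1]; the returned frame mimics a high-gain
    sensor: photon shot noise followed by additive Gaussian read noise.
    """
    rng = rng or np.random.default_rng()
    photons = rng.poisson(clean * photons_per_unit)                        # shot noise
    electrons = gain * photons + rng.normal(0.0, read_sigma, clean.shape)  # read noise
    return electrons / (gain * photons_per_unit)                          # rescale to ~[0, 1]

noisy = synthesize_low_light(np.full((4, 4), 0.5), rng=np.random.default_rng(1))
```

The design point in both sketches is the one the abstract makes: the forward physics (the model `A`, or the shot/read-noise structure) stays explicit, so any learned component needs far less training data and remains easier to interpret than an end-to-end black box.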
Bio: Kristina Monakhova is a postdoctoral fellow at MIT, supported by the MIT Postdoctoral Fellowship for Engineering Excellence. She received her PhD in Electrical Engineering and Computer Sciences from UC Berkeley, where she was a member of Laura Waller's Computational Imaging research group. Her research, at the intersection of signal processing, machine learning, and optics, focuses on making more capable cameras and microscopes through the co-design of imaging systems and algorithms. She is a recipient of the NSF Graduate Research Fellowship and the NDSEG Fellowship.