Understanding and Improving Deep Neural Networks
Abstract: Deep neural networks have produced state-of-the-art results in a number of different areas of machine learning, including computer vision, natural language processing, robotics, and reinforcement learning. I will summarize three projects on better understanding deep neural networks and improving their performance. First, I will describe our sustained effort to study how much deep neural networks know about the images they classify. Our team initially showed that deep neural networks are “easily fooled,” meaning they will declare with near certainty that completely unrecognizable images are everyday objects. These results suggested that deep neural networks do not truly understand the objects they classify. However, our subsequent results reveal that, when augmented with powerful priors, deep neural networks actually have a surprisingly deep understanding of objects, which also enables them to be incredibly effective generative models that can produce a wide diversity of photo-realistic images. Second, I will summarize our Nature paper on learning algorithms that enable robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission. This work combines a novel stochastic optimization algorithm with Bayesian optimization to produce state-of-the-art robot damage recovery. Third, I will describe our recent Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a grand challenge of AI research. I will also very briefly summarize a few other machine learning projects from my career, including our PNAS paper on automatically identifying, counting, and describing wild animals in images taken remotely by motion-sensor cameras.
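To give a rough feel for the "intelligent exploration" idea the abstract mentions, below is a minimal Python sketch of an archive-based explore loop in the spirit of Go-Explore: remember every interesting state ("cell") reached, return to a remembered cell by replaying its trajectory, then explore from there. This is an illustrative toy, not the speaker's implementation: the grid environment, the `step` and `replay` helpers, the treatment of each exact state as a cell, and the uniform cell selection are all simplifying assumptions (the actual algorithm works on Atari observations, downsamples them into cells, and weights cell selection toward promising, rarely visited cells).

```python
import random

# Toy deterministic grid world, invented for illustration (not from the talk):
# the agent starts at (0, 0); reward 1.0 is given each time it enters the far corner.
SIZE = 8
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(state, action):
    """Apply one move, clipping to the grid; reward only when the goal is entered."""
    x, y = state
    dx, dy = action
    nx = min(max(x + dx, 0), SIZE - 1)
    ny = min(max(y + dy, 0), SIZE - 1)
    return (nx, ny), (1.0 if (nx, ny) == GOAL else 0.0)

def replay(trajectory):
    """'Go': deterministically return to a remembered cell by replaying its trajectory."""
    state, total = (0, 0), 0.0
    for a in trajectory:
        state, r = step(state, a)
        total += r
    return state, total

# Simplified phase-1 loop: the archive maps each visited cell (here, the exact
# state) to the best-scoring trajectory found so far that reaches it.
random.seed(0)
archive = {(0, 0): ([], 0.0)}  # cell -> (trajectory, score)
for _ in range(2000):
    cell = random.choice(list(archive))   # real Go-Explore weights this choice
    trajectory, _ = archive[cell]
    state, score = replay(trajectory)     # "go": return to the chosen cell
    for _ in range(10):                   # "explore": a short random rollout
        a = random.choice(ACTIONS)
        state, r = step(state, a)
        trajectory = trajectory + [a]
        score += r
        if state not in archive or score > archive[state][1]:
            archive[state] = (trajectory, score)

best_traj, best_score = max(archive.values(), key=lambda v: v[1])
print(f"best score {best_score} reached in {len(best_traj)} steps")
```

Because the archive never forgets a state once reached, exploration keeps pushing outward from the frontier even though almost every step yields zero reward, which is the intuition behind applying this approach to sparse-reward games such as Montezuma’s Revenge.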
Bio: Jeff Clune is the Loy and Edith Harris Associate Professor in Computer Science at the University of Wyoming and a Senior Research Manager and founding member of Uber AI Labs, which was formed after Uber acquired a startup he helped lead. Jeff focuses on robotics and training deep neural networks via deep learning, including deep reinforcement learning. Since 2015, he has won the Presidential Early Career Award for Scientists and Engineers from the White House, had papers on the cover of Nature and PNAS, won an NSF CAREER award, received an Outstanding Paper of the Decade award, and had best paper awards, oral presentations, and invited talks at the top machine learning conferences (NeurIPS, CVPR, ICLR, and ICML). His research is regularly covered in the press, including the New York Times, NPR, NBC, Wired, the BBC, the Economist, Science, Nature, National Geographic, the Atlantic, and the New Scientist. Prior to becoming a professor, he was a Research Scientist at Cornell University and received degrees from Michigan State University (PhD, master’s) and the University of Michigan (bachelor’s). More on Jeff’s research can be found at JeffClune.com.