Complexity Analysis Framework of Adaptive Optimization Methods via Martingales (via Zoom)
Abstract: We will present a very general framework for unconstrained adaptive optimization which encompasses standard methods, such as line search and trust region, that use stochastic function measurements and derivatives. In particular, methods that fall in this framework retain desirable practical features, such as a step acceptance criterion, trust-region adjustment, and the ability to utilize second-order models, and enjoy the same convergence rates as their deterministic counterparts. The assumptions on the stochastic derivatives are weaker than those standard in the literature, in that they are robust with respect to the presence of outliers. The framework is based on bounding the expected stopping time of a stochastic process that satisfies certain assumptions. It thus provides a strong convergence analysis under weaker conditions than alternative approaches in the literature. We will conclude with a discussion of some interesting open questions.
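To make the setting concrete, below is a minimal Python sketch of the kind of method the framework covers: a trust-region loop in which the step-acceptance test and radius adjustment are driven by noisy function and gradient estimates. This is an illustrative toy under stated assumptions, not the algorithm or analysis presented in the talk; the names (`stochastic_trust_region`, `f_est`, `g_est`) and the simple Cauchy step are hypothetical choices.

```python
import numpy as np

def stochastic_trust_region(f_est, g_est, x0, delta0=1.0, eta=0.1,
                            gamma=2.0, max_iters=100):
    """Toy stochastic trust-region loop (illustrative sketch).

    f_est(x): stochastic estimate of the objective value at x.
    g_est(x): stochastic estimate of the gradient at x.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iters):
        g = g_est(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            break
        # Cauchy step: minimize the linear model within the trust region.
        s = -(delta / gnorm) * g
        pred = delta * gnorm  # predicted decrease of the linear model
        # Step acceptance uses noisy function estimates at x and x + s.
        rho = (f_est(x) - f_est(x + s)) / pred
        if rho >= eta:
            x = x + s          # accept the step
            delta *= gamma     # successful step: expand the region
        else:
            delta /= gamma     # unsuccessful step: shrink the region
    return x

# Usage on a noisy quadratic (noise plays the role of stochastic estimates).
rng = np.random.default_rng(0)
f = lambda x: x @ x + 0.01 * rng.standard_normal()
g = lambda x: 2 * x + 0.01 * rng.standard_normal(x.shape)
x_final = stochastic_trust_region(f, g, x0=np.ones(5))
```

The point of the sketch is the acceptance/adjustment mechanism: because `rho` is computed from stochastic estimates, whether an iteration succeeds is itself random, which is why the convergence analysis in the framework proceeds by bounding the expected stopping time of the resulting stochastic process rather than by a deterministic descent argument.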
Bio: Katya Scheinberg is a Professor and Director of Graduate Studies at the School of Operations Research and Information Engineering at Cornell University. Prior to joining Cornell, she was the Harvey E. Wagner Endowed Chair Professor in the Industrial and Systems Engineering Department at Lehigh University. She attended Moscow University for her undergraduate studies and received her PhD from Columbia University. She worked at the IBM T.J. Watson Research Center as a research staff member for over a decade before joining Lehigh in 2010. Katya’s main research areas center on developing practical algorithms, and their theoretical analysis, for various problems in continuous optimization, such as convex optimization, derivative-free optimization, machine learning, and quadratic programming. In 2015, jointly with Andy Conn and Luis Vicente, she received the Lagrange Prize, awarded jointly by SIAM and MOS. In 2019 she was awarded the Farkas Prize by the INFORMS Optimization Society. Katya is currently the editor-in-chief of Mathematics of Operations Research and a co-editor of Mathematical Programming.