Frameworks for High Dimensional Parallel and Distributed Optimization
Abstract: In this talk I present frameworks for solving high-dimensional optimization problems, in which both the number of variables and the amount of data are huge. In these settings, practical applications require solvers to work at extreme scales, yet despite decades of work, existing solvers often do not scale as desired. I present black-box acceleration frameworks for speeding up convex solvers that can be deployed in either parallel or distributed settings. Given a huge problem, my collaborators and I develop dimension-reduction techniques that allow it to be solved in a fraction of the original time and make the computation more amenable to parallel or distributed execution.
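To make the sample-then-solve flavor of such dimension reduction concrete, here is a minimal sketch in the setting of the packing linear programs discussed below. It is an illustration of the general idea only, not the framework presented in the talk: the function name, the simple value-rescaling heuristic, and the use of SciPy's stock solver are all illustrative assumptions, and this toy version carries none of the talk's worst-case guarantees.

```python
import numpy as np
from scipy.optimize import linprog

def sampled_packing_lp(c, A, b, frac=0.1, seed=0):
    """Estimate a packing LP (max c^T x  s.t.  A x <= b, 0 <= x <= 1,
    all data nonnegative) by solving it over a random sample of the
    variables with an off-the-shelf solver, then rescaling the optimum.

    NOTE: illustrative sketch only; not the speaker's algorithm.
    """
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    n = len(c)
    k = max(1, int(frac * n))
    idx = np.random.default_rng(seed).choice(n, size=k, replace=False)

    # Reduced LP: keep only the sampled columns and shrink the budget
    # proportionally.  linprog minimizes, so negate c to maximize.
    res = linprog(-c[idx], A_ub=A[:, idx], b_ub=(k / n) * b,
                  bounds=[(0.0, 1.0)] * k, method="highs")
    return -(n / k) * res.fun  # scale the sampled optimum back up

rng = np.random.default_rng(1)
A = rng.random((50, 5000))
print(sampled_packing_lp(rng.random(5000), A, np.full(50, 100.0), frac=0.05))
```

The point of the sketch is the cost profile: the solver only ever sees a small fraction of the columns, so its work shrinks accordingly, at the price of solving an approximation of the original problem.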
First, I present a novel framework that can speed up linear programming (LP) solvers, such as CPLEX and Gurobi, by orders of magnitude while maintaining nearly optimal solutions. The framework applies to both linear programs and integer linear programs, and I give worst-case guarantees on both the quality of the solution and the speedup obtained, focusing on the special case of packing linear programs. Second, I consider convex problems with linear constraints defined with respect to a graph, where the problem data is distributed across the nodes. I present a framework that is robust to link failures in the graph and, both theoretically and empirically, uses orders of magnitude less communication than existing distributed techniques while again maintaining nearly optimal solutions.
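For background on the communication pattern this second setting refers to, the sketch below runs textbook decentralized gradient descent, in which each node holds private data and exchanges iterates only with its graph neighbors each round. This is the standard baseline whose communication cost the talk's framework improves on, not the framework itself; the Metropolis mixing matrix and the toy least-squares objective are illustrative assumptions.

```python
import numpy as np

def decentralized_gd(grads, W, x0, step=0.1, iters=300):
    """Textbook decentralized gradient descent: each node holds a
    private gradient oracle and, per round, (1) averages its iterate
    with its graph neighbors via the mixing matrix W and (2) takes a
    local gradient step.  W[i, j] != 0 only along edges of the graph."""
    X = np.array(x0, dtype=float)  # one row of iterates per node
    for _ in range(iters):
        X = W @ X                  # communication: neighbor averaging only
        X = X - step * np.array([g(x) for g, x in zip(grads, X)])
    return X.mean(axis=0)

# Toy instance: three nodes on a path graph jointly minimize
# sum_i ||x - a_i||^2, whose minimizer is the mean of the a_i.
targets = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 0.0])]
grads = [lambda x, a=a: 2.0 * (x - a) for a in targets]
W = np.array([[2/3, 1/3, 0.0],     # Metropolis weights for the path 0-1-2
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
print(decentralized_gd(grads, W, x0=np.zeros((3, 2))))  # ~ [1.0, 0.33]
```

Note that every iteration requires a message over every edge; reducing exactly this per-round, per-edge communication is what the framework in the talk targets.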
Bio: Palma London is a Ph.D. student in the Department of Computing and Mathematical Sciences at the California Institute of Technology. Prior to joining Caltech, she received B.S. degrees in Electrical Engineering and Mathematics from the University of Washington. Her research interests are broadly in convex optimization, distributed algorithms, and machine learning.