Approximate Diagonalization in Nearly Matrix Multiplication Time (via Zoom)
Abstract: In this talk I will show that any matrix can be approximately diagonalized in nearly matrix multiplication time. In particular, given a square complex n x n matrix A with operator norm at most one and an error parameter d > 0, one can compute, with high probability, an invertible matrix V and a diagonal matrix D such that ||A - VDV^{-1}|| < d, using O(MM(n) polylog(n/d)) arithmetic operations and O(polylog(n/d) log(n)) bits of precision. Here MM(n) is the number of arithmetic operations required for numerically stable multiplication of two n x n matrices.
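To make the guarantee concrete, here is a minimal numerical sketch of what is being promised. It uses NumPy's eig as a stand-in diagonalizer (it is not the algorithm from the talk) on a random matrix of operator norm at most one, and checks the backward error against d:

```python
import numpy as np

n, d = 200, 1e-6
rng = np.random.default_rng(0)

# Random complex test matrix, rescaled so its operator norm is at most one.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A /= np.linalg.norm(A, ord=2)

# Stand-in diagonalizer: LAPACK via NumPy, not the algorithm analyzed here.
eigvals, V = np.linalg.eig(A)
D = np.diag(eigvals)

# The guarantee being claimed: ||A - V D V^{-1}|| < d in operator norm.
err = np.linalg.norm(A - V @ D @ np.linalg.inv(V), ord=2)
print(err, err < d)
```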
The algorithm we analyze is a variant of the well-studied spectral bisection algorithm from numerical linear algebra, with a crucial Gaussian perturbation preprocessing step. It is the first algorithm shown to achieve nearly matrix multiplication time for diagonalization in finite precision arithmetic, improving on previous results by Armentano et al. for general matrices and by Parlett for Hermitian ones.
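As a rough sketch of the preprocessing step (the gamma and entry-variance scalings below are illustrative choices, not the paper's exact 1/poly(n) parameters):

```python
import numpy as np

def perturb(A: np.ndarray, gamma: float, rng: np.random.Generator) -> np.ndarray:
    """Add an entrywise i.i.d. complex Gaussian (Ginibre) perturbation to A.

    With entry variance 1/n, the noise matrix has operator norm about 2,
    so the perturbation changes A by only O(gamma) in operator norm while
    regularizing its eigenvalue problem.
    """
    n = A.shape[0]
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
    return A + gamma * G
```

Spectral bisection then recurses on the perturbed matrix: split the spectrum into two halves, project onto the corresponding invariant subspaces, and diagonalize each half independently.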
Our proof requires two novel ingredients. First, we show that adding entrywise i.i.d. complex Gaussian noise with variance 1/poly(n) splits the pseudospectrum of *any* n x n matrix into n small, well-separated components. Second, we rigorously analyze, in finite precision arithmetic, Roberts' Newton iteration for computing the matrix sign function; such an analysis had been an open problem for at least the last thirty years. This is achieved by controlling the pseudospectra of the iterates using a carefully chosen sequence of shrinking contour integrals in the complex plane.
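For intuition, here is a minimal NumPy sketch of Roberts' Newton iteration and the spectral projectors that drive one bisection step; it uses a fixed iteration count in floating point rather than the careful finite precision control described above:

```python
import numpy as np

def matrix_sign(A: np.ndarray, iters: int = 40) -> np.ndarray:
    """Roberts' Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2.

    For a matrix with no eigenvalues on the imaginary axis, the iterates
    converge quadratically to sign(A): the matrix acting as +I on the
    invariant subspace for Re(lambda) > 0 and as -I for Re(lambda) < 0.
    """
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

S = matrix_sign(A)

# Spectral projectors for one bisection step: P_plus + P_minus = I, and
# their ranges span the invariant subspaces for the two half-planes.
P_plus = 0.5 * (np.eye(n) + S)
P_minus = 0.5 * (np.eye(n) - S)
print(np.linalg.matrix_rank(P_plus), np.linalg.matrix_rank(P_minus))
```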
I aim to make the talk self-contained and broadly accessible. In this spirit I’ll start by introducing the key linear algebraic techniques — pseudospectrum and stability, spectral projectors, matrix sign function, and holomorphic functional calculus — and only then move on to the probabilistic perturbation result and analysis of the algorithm.
Joint work with Jorge Garza-Vargas, Archit Kulkarni, and Nikhil Srivastava, available at https://arxiv.org/abs/1912.08805.
Bio: I am a fourth-year PhD candidate in the Mathematics department at the University of California, Berkeley, working with Nikhil Srivastava. My research spans theoretical computer science, probability, and statistical physics, with particular focus on non-backtracking walks, semidefinite programming, and the use of randomness in numerical linear algebra. I am currently funded by an NSF Graduate Research Fellowship.
My pronouns are she/her.