Artificial intelligence (AI), a broad and booming subfield of computer science that includes machine learning, computer vision, and natural language processing, is transforming the world — from sustainable agriculture and urban design to cancer detection and precision behavioral health. Building on Cornell’s strengths in AI, the university recently launched the Cornell AI Initiative under the “Radical Collaboration Drives Discovery” umbrella, bringing together AI scholars from across campuses to develop new applications in diverse fields and advance AI education and ethics.
This semester, the Cornell Ann S. Bowers College of Computing and Information Science is hosting a weekly Friday conversation about the future of AI with its Fall 2022 AI seminar series. Guest presenters include top industry leaders and groundbreaking researchers who are building the technology and examining its societal, legal, and ethical impacts.
The AI seminars run each Friday from Sept. 2 to Dec. 2, 12:15 p.m. to 1:15 p.m. ET, in person at Gates 122 and streaming via Zoom. Everyone in the Cornell community is invited to attend, and students can take the series as a one-credit class.
“We hope that the seminar series will further bring together the entire campus around the new AI initiative,” said Thorsten Joachims, associate dean for research at Cornell Bowers CIS and a professor of computer science and information science.
Oliver Richardson, a graduate student in the field of computer science, will be the first speaker on Sept. 2 with his talk, “Loss as the Inconsistency of a Probabilistic Dependency Graph: Choose Your Model, Not Your Loss Function.”
Abstract: In a world blessed with a great diversity of loss functions, I argue that the choice between them is not a matter of taste or pragmatics, but of model. Probabilistic dependency graphs (PDGs) are a very expressive class of probabilistic graphical models that comes equipped with a natural measure of inconsistency. The central finding of this work is that many standard loss functions can be viewed as measuring the inconsistency of the PDG that describes the scenario at hand. As a byproduct of this approach, we obtain an intuitive visual calculus for deriving inequalities between loss functions.
In addition to variants of cross entropy, a large class of statistical divergences can be expressed as inconsistencies, from which we can derive visual proofs of properties such as the information processing inequality. We can also use the approach to justify a well-known connection between regularizers and priors. In variational inference, we find that the ELBO, a somewhat opaque objective for latent variable models, and variants of it arise for free out of uncontroversial modeling assumptions — as do simple graphical proofs of their corresponding bounds. The talk is based on my AISTATS 22 paper.
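For readers less familiar with the objectives named in the abstract, the standard definitions below may help frame the talk; they are textbook background rather than results from the paper itself. Cross entropy measures how well a model distribution q fits data drawn from p, and the ELBO lower-bounds the log-likelihood of a latent variable model:

H(p, q) = -\mathbb{E}_{x \sim p}[\log q(x)]

\log p_\theta(x) = \mathbb{E}_{q(z \mid x)}\big[\log p_\theta(x, z) - \log q(z \mid x)\big] + \mathrm{KL}\big(q(z \mid x)\,\|\,p_\theta(z \mid x)\big) \;\ge\; \mathrm{ELBO}

Because the KL divergence is nonnegative, the expectation term lower-bounds the log-likelihood; Richardson's work recasts objectives of this kind as the inconsistency of a suitably constructed PDG, which is where the graphical proofs of the corresponding bounds come from.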
As part of the recently established five-year strategic partnership between LinkedIn and Cornell Bowers CIS, the series will feature invited guest Souvik Ghosh, director of AI at LinkedIn, on Sept. 9. Ghosh’s presentation, “Some challenges with Recommender Systems at LinkedIn,” will be the capstone to several days of innovative programming and events to elevate AI research at the college.
Abstract: Machine learning-based recommender systems power almost every aspect of the LinkedIn experience. I will discuss three challenging, practical problems in which we have made a dent. With the LinkedIn feed as the main example, I'll describe how we increased engineer productivity through optimization techniques, improved experimentation methods in networks, and are measuring and mitigating bias in our algorithms.
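The abstract mentions measuring bias in ranking algorithms without detailing the method. As a purely illustrative sketch (not LinkedIn's actual pipeline; all names and the metric choice here are hypothetical), one common approach in the ranking-fairness literature compares the position-discounted exposure that two groups of items receive in a feed:

import math

def exposure(ranks):
    # Position-discounted exposure: items higher in the feed get more attention.
    return sum(1.0 / math.log2(rank + 1) for rank in ranks)

def exposure_disparity(ranking, group_of):
    # ranking: list of item ids in feed order (best first)
    # group_of: maps an item id to "A" or "B" (e.g., two creator groups)
    positions = {"A": [], "B": []}
    for position, item in enumerate(ranking, start=1):
        positions[group_of(item)].append(position)
    exp_a = exposure(positions["A"]) / max(len(positions["A"]), 1)
    exp_b = exposure(positions["B"]) / max(len(positions["B"]), 1)
    return exp_a - exp_b  # values near zero mean comparable average exposure

# Toy example: a feed of six items from two groups
feed = ["a1", "b1", "a2", "a3", "b2", "b3"]
print(exposure_disparity(feed, lambda item: "A" if item.startswith("a") else "B"))

A measurement like this only flags a disparity; mitigation (for example, re-ranking under an exposure constraint) is a separate step, and the talk's specifics may differ.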
Future speakers are listed on the Computer Science AI seminar website.