INFO 1260 / CS 1340: Choices and Consequences in Computing
Jon Kleinberg and Karen Levy
Spring 2025, Mon-Wed-Fri 11:15am-12:05pm, Bailey Hall
Course description
Computing requires difficult choices that can have serious implications
for real people. This course covers a range of ethical, societal, and
policy implications of computing and information. It draws on recent
developments in digital technology and their impact on society,
situating these in the context of fundamental principles from
computing, policy, ethics, and the social sciences. A particular
emphasis will be placed on large areas in which advances in computing
have consistently raised societal challenges: privacy of individual
data; fairness in algorithmic decision-making; dissemination of online
content; and accountability in the design of computing systems. As this
is an area in which the pace of technological development raises new
challenges on a regular basis, the broader goal of the course is to
enable students to develop their own analyses of new situations as they
emerge at the interface of computing and societal interests.
A more extensive summary of the material can be found in the
overview of course topics at the end of this page.
Course staff
- Instructors:
- Jon Kleinberg jmk6
- Karen Levy kl838
- TA staff:
- Alexander Chen akc58
- Alexandra Gardi ajg333
- Arya Ramkumar adr62
- Baihe Peng bp352
- Caleb Chin ctc92
- Cameron Pien cyp22
- Cazamere Comrie clc348
- Celina Jang cjj48
- Divya Akkiraju dma232
- Eirian Huang ehh56
- Elisabeth Pan ep438
- Ella Kim ejk229
- Emily Fu ef442
- Enock Danso ed548
- Ethan Cohen esc82
- Farhan Mashrur fm454
- George Lee jl3697
- Haley Qin hq35
- Hannah Kim hek46
- Hayley Lim al2347
- Isabel Louie il289
- Isabella Pazmino-Schell ivs5
- Jae June Lee jl4487
- Jamie Tang jft75
- Jeffrey Wang yw2645
- Jennifer Otiono jco66
- Jenny Chen jc2676
- Jenny Fu xf89
- Johanna Jung jj425
- Joyce Chen jsc342
- Julia Graziano jgg87
- Julia Senzon jfs287
- Katherine Chang kjc249
- Kevin Zhang kz362
- Khai Xin Kuan kk996
- Lindsay Bank lsb239
- Michaela Eichel mae97
- Neha Arora na458
- Olivia Alonso oma35
- Rachel Wang jw879
- Ruth Martinez Yepes rdm268
- Sadie Roecker sdr83
- Sharon Heung ssh247
- Sophie Mallen smm486
- Sunoo Kim sk2698
- Suresh Kamath Bola sk2864
- Tenny George sog5
- Tianyi Wang tw324
- Waki Kamino wk265
- Yanbang Wang yw786
Requirements
There are no formal prerequisites for this course. It is open to students of all majors.
For Information Science majors, the course may substitute for INFO 1200
to fulfill major requirements. Students may receive credit for
both INFO 1200 and INFO 1260, as the scopes of the two courses
are distinct.
Coursework
- Homework:
There will be
6 homework assignments, each worth 13 1⁄3% of the course grade.
Homework assignments must be submitted via the class
Canvas page.
Each assignment will consist of a
variety of question types, including questions that
draw on mathematical models and quantitative arguments using basic
probability concepts, and questions
that draw on social science, ethics, and policy perspectives.
The planned due dates for the homework assignments are at
noon on the following Thursdays during the semester:
HW 1 (due 2/13), HW 2 (due 2/27), HW 3 (due 3/13), HW 4 (due 3/27), HW 5 (due 4/17), HW 6 (due 5/1).
The late policy for homework works as follows.
First, illnesses, family emergencies, religious observance,
and Cornell-sponsored travel are reasons for requesting homework
extensions without any grade penalty; please contact us by
e-mail to arrange these.
Second, we will also accept homework that comes in late without one of
these reasons, subject to a grade penalty. Homework submitted after
noon on the Thursday it is due but before noon on Friday
will be accepted with a
grade deduction of 10% of the maximum score (e.g., if the homework is
out of 70 points, then 7 points will be deducted). An additional 10%
of the maximum score is deducted for each further 24 hours (i.e., 20%,
30%, and 40% for submissions before noon on Saturday, Sunday,
and Monday, respectively); after Monday at noon, the homework will not
receive credit. A short sketch illustrating this penalty schedule
appears after this list. Because the homework submission site
will remain open
during this time, late homework may be uploaded there directly; you do
not need prior arrangement to do this.
- Final Exam:
There will be an in-person final exam given during the final exam period
at the end of the semester, worth 20% of the course grade.
The date of the final exam is determined by the university;
it has not been set yet, but we will post it once it is known.
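To make the homework late-penalty schedule above concrete, here is a minimal sketch of how the deduction could be computed. It is purely illustrative and not part of the official grading infrastructure; the function name late_deduction, the choice of Python, and the handling of exact-noon boundary cases are our own assumptions, while the schedule itself (10% of the maximum score per 24-hour period past the Thursday-noon deadline, with no credit after Monday at noon) is as stated in the homework policy.

```python
import math

def late_deduction(hours_late, max_score):
    """Illustrative sketch of the homework late-penalty schedule.

    hours_late: hours past the Thursday-noon deadline (0 or less = on time).
    max_score:  the number of points the homework is out of.
    Returns the points deducted, or None if the submission comes in after
    Monday at noon (more than 96 hours late) and receives no credit.
    """
    if hours_late <= 0:
        return 0.0                      # on time: no deduction
    if hours_late > 96:
        return None                     # past Monday at noon: no credit
    # Each 24-hour period past the deadline costs another 10% of the maximum
    # score: 10% up to Friday noon, 20% up to Saturday noon, and so on.
    # (Treating an exactly-on-the-boundary submission as falling in the
    # earlier period is an assumption made for this sketch.)
    periods = math.ceil(hours_late / 24)
    return 0.10 * periods * max_score

# Example from the policy: a homework out of 70 points turned in Friday
# morning (less than 24 hours late) loses 10% of 70 = 7 points.
print(late_deduction(20, 70))   # 7.0
# Turned in Sunday evening (between 72 and 96 hours late): 40% of 70 = 28 points.
print(late_deduction(80, 70))   # 28.0
```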
Academic Integrity
You are expected to observe Cornell’s Code of Academic Integrity in all aspects of this course.
You are allowed to collaborate on the homework
to the extent of formulating ideas as a group. However, you
must write up the solutions to each assignment completely on your own,
and understand what you are writing. You must also list the names of
everyone with whom you discussed the assignment.
You are welcome to use generative AI tools like ChatGPT for research, in much the same way you might use a search engine to learn more about a topic. But you may not submit output from one of these tools either verbatim or in closely paraphrased form as an answer to any homework or exam question;
doing so is a violation of the academic integrity policy for the course.
All homework and exam responses must be your own work, in your own words, reflecting your own understanding of the topic.
Among other duties, academic integrity requires that you properly cite
any idea or work product that is not your own, including the work of
your classmates or of any written source. If in any doubt at all,
cite! If you have any questions about this policy, please ask a member
of the course staff.
Overview of Topics
(Note on the readings: The readings listed in the outline are also available on the class
Canvas
page, and for students enrolled in the class, this is the most direct way to get them. The links below are to lists of publicly available versions, generally through Google Scholar.)
- Course introduction.
We begin by discussing some of the
broad forces that laid the foundations for this course,
particularly the ways in which applications of computing developed
in the online domain have come to impact societal
institutions more generally, and the ways in which principles
from the social sciences, law, and policy can be used to understand
and potentially to shape this impact.
- Course mechanics
- Overview of course themes (1/22-27)
- The relationship of computational models to the world
- The online world changes the frictions that determine what’s easy and what’s hard to do
- The contrast between policy challenges and implementation challenges
- The contrast between “Big-P Policy” and “Little-P policy”
- The non-neutrality of technical choices
- The challenge of anticipating the consequences of technical developments
- The layered design of computing systems
- Digital platforms can create a diffuse sense of responsibility and culpability
- Computing as synecdoche: the problem in computing serves as a mirror for the broader societal problem
- Issues with significant implications for people’s everyday lives
- Content creation and platform policies. One of the most visible developments in computing over the past two decades has been the growth of enormous social platforms on the Internet through which people connect with each other and share information. We look at some of the profound challenges these platforms face as they set policies to regulate these behaviors, and how those decisions relate to longstanding debates about the values of speech.
- Principles of free speech
- Underpinnings of the First Amendment
- Restrictions on speech by non-governmental entities
- Readings (1/29):
- Basics of how Internet platforms organize information
- Modeling the user
- Attention as a scarce resource
- Rankings: disparities in attention, unpredictable outcomes
- Readings (1/31):
- Understanding speech in the online domain
- CDA 230
- Network effects in the competition between platforms.
- Readings (2/3):
- Kosseff, Jeff. Testimony before the Subcommittee on Communications, Technology, Innovation, and the Internet, United States Senate, July 28, 2020.
- Klonick, Kate. "The new governors: The people, rules, and processes governing online speech." Harv. L. Rev. 131 (2017). Read pp. 1598-1613 only.
- Personalization and its relationship to polarization
- Models and algorithms for personalized filtering
- Polarization: Evidence for and against the Filter Bubble
- Searching for online radicalization pathways
- Readings (2/5):
- Readings (2/7):
- Content moderation and bad behavior
- Hate speech against groups
- Abuse against individuals
- Platform responses, including the human cost of manual content moderation, and the difficulty of algorithmic content moderation
- Counter-measures to platform responses
- Readings (2/10):
- Misinformation/disinformation:
- Taxonomies of misinformation
- Coordinated dissemination of false information, data voids
- The psychology of sharing false information
- Readings (2/12):
- Platform economics in markets for content
- Hosting, and responsibility for providing infrastructure
- Markets for rules
- Readings (2/14):
- Gillespie, Tarleton, Patricia Aufderheide, Elinor Carmi, Ysabel Gerrard, Robert Gorwa, Ariadna Matamoros-Fernández, Sarah T. Roberts, Aram Sinnreich, and Sarah Myers West. "Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates." Internet Policy Review 9, no. 4 (2020).
- Mathematical models of biased information
- Inducing a spectrum from information sources
- Modeling consumers of information as Bayesian agents
- Readings (2/19-2/21):
- Data collection, data aggregation, and the problem of privacy. Computing platforms are capable of collecting vast amounts of data about their users, and can analyze those data to make inferences about users' characteristics and behaviors. Data collection and analysis have become central to platforms' business models, but also present fundamental challenges to users' privacy expectations. Here, we describe the difficult choices that platforms must make about how they gather, store, combine, and analyze users' information, and what social and political impacts those practices can have.
- Privacy as a fundamental concept:
- Values served by privacy
- Locating privacy in the law
- The Panopticon
- Contextual integrity
- Psychological dimensions of privacy
- Evaluating common fallacies about privacy
- Readings (2/24-28):
- Digital data and the limits of anonymization
- Aggregate things we can learn from collective data
- Sensitive things we can learn from data about individuals
- Networked dependencies between people's data
- Readings (3/3-5):
- Sweeney, L., 1997. Weaving technology and policy together to maintain confidentiality. The Journal of Law, Medicine & Ethics, 25(2-3). To read: pages 98-102, 108-110.
- Mayer, J., Mutchler, P. and Mitchell, J.C., 2016. Evaluating the privacy properties of telephone metadata. Proceedings of the National Academy of Sciences, 113(20), pp.5536-5541.
- Narayanan, A. and Shmatikov, V., 2008, May. Robust de-anonymization of large sparse datasets. In 2008 IEEE Symposium on Security and Privacy. To read: Sections 1, 2, 5, and 6. And by the same authors: Myths and fallacies of personally identifiable information. Communications of the ACM, 53(6), 2010, pp.24-26.
- Constitutional right to privacy
- How changes in technology change expectations of privacy
- Interactions between government and firms on privacy matters
- Readings (3/7-10):
- Privacy in non-constitutional law
- Notice and consent model
- Data ownership model
- Readings (3/12):
- Collection and use of data
- The challenge of precommitment
- Case study of IDNYC
- The problem of culpability
- Readings (3/14):
- Differential privacy
- Basic principles
- Mathematical model
- Applications to the U.S. Census and to research datasets
- Readings (3/17):
- Readings (3/19):
- Surveillance of work and workers
- Scientific management and the history of workplace observation
- Legal protections
- New frontiers of workplace data collection
- Readings (3/21):
- The role of cryptography and security
- Public-key cryptography
- Secure multi-party computation
- Policy questions for encrypted data
- Readings (3/24):
- Bogetoft, P., Christensen, D.L., Damgård, I., Geisler, M., Jakobsen, T., Krøigaard, M., Nielsen, J.D., Nielsen, J.B., Nielsen, K., Pagter, J. and Schwartzbach, M., 2009, February. Secure multiparty computation goes live. In International Conference on Financial Cryptography and Data Security (pp. 325-343).
- Singh, Simon, 2000. The code book: the science of secrecy from ancient Egypt to quantum cryptography. Anchor. To read: Chapter 5.
- Readings (3/26):
- Privacy from whom?
- Stalking and abuse
- Open-source information
- Doxing
- Readings (3/28):
- Donovan, Joan. "Refuse and Resist!" Limn, issue 8 (2017).
- Freed, Diana, Sam Havron, Emily Tseng, Andrea Gallardo, Rahul Chatterjee, Thomas Ristenpart, and Nicola Dell. "'Is my phone hacked?' Analyzing Clinical Computer Security Interventions with Survivors of Intimate Partner Violence." Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (2019): 1-24.
- Data-Driven Decision-Making. Algorithms trained using machine learning are increasingly being deployed as part of decision-making processes in a wide range of applications. We discuss how this development is the most recent in a long history of data-driven decision methodologies that companies, governments, and organizations have deployed. When these methods are used to evaluate people, in settings that include employment, education, credit, healthcare, and the legal system, there is the danger that the resulting algorithms may incorporate biases that are present in the human decisions they're trained on. And when the methods are evaluated using experimental interventions, it is important to understand how to apply principles for the ethical conduct of experiments with human participants.
- Principles of quantification in decision-making by organizations.
- History of quantification and rationalization
- The rise of bureaucracies and administrative decision-making
- Decision-making via optimization
- The choice of an objective function
- Readings (4/7-9):
- Inequality and power
- Social stratification, structural embeddedness, intersectionality
- Historical perspectives on structural discrimination
- Principles from discrimination law: disparate treatment and disparate impact
- Readings (4/11):
- The basic methodology of machine learning
- Features and labels
- Training procedures and evaluation
- The problem of interpretability
- Readings (4/14):
- Sources of bias in algorithmic decision-making
- Bias in features and labels
- Bias in training procedures
- Implications for discrimination law
- Readings (4/16-18):
- Experiments as a research methodology
- Establishing causality
- Contrasts with observational data
- Principles and practice of A/B testing
- Spillover between individuals
- Explore/exploit trade-offs
- Readings (4/21):
- Research ethics frameworks for conducting experiments
- The Belmont and Menlo reports
- IRBs and human subjects research
- Aversion to experiments
- Readings (4/23):
- Meyer, Michelle N., Patrick R. Heck, Geoffrey S. Holtzman, Stephen M. Anderson, William Cai, Duncan J. Watts, and Christopher F. Chabris. Objecting to experiments that compare two unobjectionable policies or treatments. Proceedings of the National Academy of Sciences 116, no. 22 (2019): 10723-10728.
- Salganik, Matthew J. Bit by bit: Social research in the digital age. Princeton University Press, 2019. To read: pages 281-288, 294-301.
- Inter-personal discrimination
- Principles from the behavioral sciences
- Audit studies as experimental investigations of bias
- Implications for user choice in online services
- Readings (4/25):
- Formalizing notions of fairness for algorithms
- Equalization of error rates
- Enforcing calibrated predictions
- Inherent trade-offs between different guarantees
- Readings (4/28):
- Mayson, Sandra G. Bias in, bias out. Yale Law Journal 128 (2018): 2218. To read: pages 2221-2251.
- Automated decisions in the physical world
- A taxonomy of cyber-physical systems, including robots, drones, and sensors
- Autonomous vehicles
- Autonomous weapons and their relation to theories of warfare
- Liability and culpability
- Anthropomorphism, care, and deception.
- Readings (4/30):
- Feedback loops in data-driven decision-making and generative AI
- Self-fulfilling predictions
- Emergent representations from user activity
- Long-term impacts of interventions
- Readings (5/3):