INFO 1260 / CS 1340: Choices and Consequences in Computing
Jon Kleinberg and Karen Levy
Spring 2025, Mon-Wed-Fri 11:15am-12:05pm, Bailey Hall
Course description
Computing requires difficult choices that can have serious implications for real people. This course covers a range of ethical, societal, and policy implications of computing and information. It draws on recent developments in digital technology and their impact on society, situating these in the context of fundamental principles from computing, policy, ethics, and the social sciences. A particular emphasis will be placed on large areas in which advances in computing have consistently raised societal challenges: privacy of individual data; fairness in algorithmic decision-making; dissemination of online content; and accountability in the design of computing systems. As this is an area in which the pace of technological development raises new challenges on a regular basis, the broader goal of the course is to enable students to develop their own analyses of new situations as they emerge at the interface of computing and societal interests.
A more extensive summary of the material can be found in the overview of course topics at the end of this page.
Course staff
- Instructors:
- Jon Kleinberg jmk6
- Karen Levy kl838
- TA staff:
- Alexander Chen akc58
- Arya Ramkumar adr62
- Baihe Peng bp352
- Caleb Chin ctc92
- Cameron Pien cyp22
- Cazamere Comrie clc348
- Celina Jang cjj48
- Divya Akkiraju dma232
- Eirian Huang ehh56
- Elisabeth Pan ep438
- Ella Kim ejk229
- Emily Fu ef442
- Enock Danso ed548
- Ethan Cohen esc82
- Farhan Mashrur fm454
- George Lee jl3697
- Haley Qin hq35
- Hannah Kim hek46
- Hayley Lim al2347
- Isabel Louie il289
- Isabella Pazmino-Schell ivs5
- Jae June Lee jl4487
- Jamie Tang jft75
- Jeffrey Wang yw2645
- Jennifer Otiono jco66
- Jenny Chen jc2676
- Jenny Fu xf89
- Johanna Jung jj425
- Joyce Chen jsc342
- Julia Graziano jgg87
- Julia Senzon jfs287
- Katherine Chang kjc249
- Kevin Zhang kz362
- Khai Xin Kuan kk996
- Lindsay Bank lsb239
- Michaela Eichel mae97
- Neha Arora na458
- Olivia Alonso oma35
- Rachel Wang jw879
- Ruth Martinez Yepes rdm268
- Sadie Roecker sdr83
- Sharon Heung ssh247
- Sophie Mallen smm486
- Sunoo Kim sk2698
- Suresh Kamath Bola sk2864
- Tenny George sog5
- Tianyi Wang tw324
- Waki Kamino wk265
- Yanbang Wang yw786
Requirements
There are no formal prerequisites for this course. It is open to students of all majors.
For Information Science majors, the course may substitute for INFO 1200 to fulfill major requirements. Students may receive credit for both INFO 1200 and INFO 1260, as the scopes of the two courses are distinct.
Coursework
- Homework: There will be 6 homework assignments, each worth 13⅓% of the course grade. Homework assignments must be submitted via the class Canvas page. Each assignment will consist of a variety of question types, including questions that draw on mathematical models and quantitative arguments using basic probability concepts, and questions that draw on social science, ethics, and policy perspectives.
The planned due dates for the homework assignments are at noon on the following Thursdays during the semester: HW 1 (due 2/13), HW 2 (due 2/27), HW 3 (due 3/13), HW 4 (due 3/27), HW 5 (due 4/17), HW 6 (due 5/1).
The late policy for homework works as follows: First, illnesses, family emergencies, religious observance, and Cornell-sponsored travel are reasons for requesting homework extensions without any grade penalty, and you should contact us by e-mail to arrange these. Second, we will also accept homework that comes in late without one of these reasons subject to a grade penalty. Homework that comes in after noon on the Thursday it is due but before noon on Friday will be accepted with a grade deduction of 10% of the maximum score (e.g. if the homework is out of 70 points, then 7 points will be deducted). There will be an additional deduction of 10% more of the maximum score for each 24 hours after (i.e. 20%, 30%, and 40% before noon on Saturday, Sunday, and Monday respectively), until Monday at noon, after which the homework will not receive credit. Because the homework submission site will be open during this time, the homework may be uploaded there directly; you do not need prior arrangement to do this.
- Final Exam: There will be an in-person final exam given during the final exam period at the end of the semester, worth 20% of the course grade. The exam date is set by the university; it has not yet been announced, but we will post it here once it is known.
Academic Integrity
You are expected to observe Cornell’s Code of Academic Integrity in all aspects of this course.
You are allowed to collaborate on the homework to the extent of formulating ideas as a group. However, you must write up the solutions to each assignment completely on your own, and understand what you are writing. You must also list the names of everyone with whom you discussed the assignment.
You are welcome to use generative AI tools like ChatGPT for research, used in a way similar to how you might use a search engine to learn more about a topic. But you may not submit output from one of these tools either verbatim or in closely paraphrased form as an answer to any homework or exam question; doing so is a violation of the academic integrity policy for the course. All homework and exam responses must be your own work, in your own words, reflecting your own understanding of the topic.
Among other duties, academic integrity requires that you properly cite any idea or work product that is not your own, including the work of your classmates or of any written source. If in any doubt at all, cite! If you have any questions about this policy, please ask a member of the course staff.
Overview of Topics
(Note on the readings: The readings listed in the outline are also available on the class Canvas page, and for students enrolled in the class, this is the most direct way to get them. The links below are to lists of publicly available versions, generally through Google Scholar.)
- Course introduction. We begin by discussing some of the broad forces that laid the foundations for this course, particularly the ways in which applications of computing developed in the online domain have come to impact societal institutions more generally, and the ways in which principles from the social sciences, law, and policy can be used to understand and potentially to shape this impact.
- Course mechanics
- Overview of course themes (1/22-27)
- The relationship of computational models to the world
- The online world changes the frictions that determine what’s easy and what’s hard to do
- The contrast between policy challenges and implementation challenges
- The contrast between “Big-P Policy” and “Little-P policy”
- The non-neutrality of technical choices
- The challenge of anticipating the consequences of technical developments
- The layered design of computing systems
- Digital platforms can create diffuse senses of responsibility and culpability
- Computing as synecdoche: the problem in computing acts as a mirror for the broader societal problem
- Issues with significant implications for people’s everyday lives
- Content creation and platform policies. One of the most visible developments in computing over the past two decades has been the growth of enormous social platforms on the Internet through which people connect with each other and share information. We look at some of the profound challenges these platforms face as they set policies to regulate these behaviors, and how those decisions relate to longstanding debates about the values of speech.
- Principles of free speech
- Underpinnings of the First Amendment
- Restrictions on speech by non-governmental entities
- Readings (1/29):
- Schauer, Frederick. "The boundaries of the First Amendment: A preliminary exploration of constitutional salience." Harv. L. Rev. 117 (2003). Read pp. 1784-1796 only.
- Basics of how Internet platforms organize information
- Modeling the user
- Attention as a scarce resource
- Rankings: disparities in attention, unpredictable outcomes
- Readings (1/31):
- Salganik, Matthew J., Peter Sheridan Dodds, and Duncan J. Watts. "Experimental study of inequality and unpredictability in an artificial cultural market." Science 311.5762 (2006): 854-856.
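The dynamics in the Salganik–Dodds–Watts experiment can be illustrated with a toy rich-get-richer simulation (all parameters and function names here are illustrative, not taken from the paper): identical songs accumulate downloads under social influence, and small early differences snowball into large, unpredictable disparities in attention.

```python
import random

def simulate_market(n_songs=10, n_listeners=2000, social_weight=0.9, seed=0):
    """Toy cultural market: each listener downloads one song, choosing in
    proportion to prior downloads (social influence) with probability
    social_weight, and uniformly at random (independent taste) otherwise."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # seed each song with one download
    for _ in range(n_listeners):
        if rng.random() < social_weight:
            # choose proportionally to current popularity
            r = rng.uniform(0, sum(downloads))
            acc = 0
            for i, d in enumerate(downloads):
                acc += d
                if r <= acc:
                    downloads[i] += 1
                    break
        else:
            downloads[rng.randrange(n_songs)] += 1
    return downloads

# Identical songs, different random histories -> different "hits" emerge.
for seed in range(3):
    print(sorted(simulate_market(seed=seed), reverse=True)[:3])
```

Running this for several seeds shows the paper's qualitative point: which song "wins" varies from world to world even though the songs are identical.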
- Understanding speech in the online domain
- CDA 230
- Network effects in the competition between platforms
- Readings (2/3):
- Kosseff, Jeff. Testimony before the Subcommittee on Communications, Technology, Innovation, and the Internet, United States Senate, July 28, 2020.
- Klonick, Kate. "The new governors: The people, rules, and processes governing online speech." Harv. L. Rev. 131 (2017). Read pp. 1598-1613 only.
- Personalization and its relationship to polarization
- Models and algorithms for personalized filtering
- Polarization: Evidence for and against the Filter Bubble
- Searching for online radicalization pathways
- Readings (2/5):
- Bell, R.M., Bennett, J., Koren, Y. and Volinsky, C., 2009. "The million dollar programming prize." IEEE Spectrum, 46(5), pp.28-33.
- Readings (2/7):
- Benkler, Yochai, Robert Faris, Hal Roberts, and Ethan Zuckerman. "Study: Breitbart-led right-wing media ecosystem altered broader media agenda." Columbia Journalism Review 3 (2017): 2017.
- Steck, Harald. "Calibrated recommendations." Proceedings of the 12th ACM conference on recommender systems. 2018. To read: Sections 1 and 2
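As a concrete illustration of personalized filtering, here is a minimal user-based collaborative filter (the rating matrix and function names are hypothetical; real systems such as the Netflix Prize entrants use far richer models like matrix factorization):

```python
import math

# Toy user-item rating matrix (0 = unrated); rows are users, columns items.
ratings = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
]

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(user, item):
    """Predict a missing rating as a similarity-weighted average of
    other users' ratings for that item."""
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue
        sim = cosine(ratings[user], row)
        num += sim * row[item]
        den += abs(sim)
    return num / den if den else 0.0

# User 0 never rated item 2; the most similar user (user 1) rated it low,
# so the prediction is pulled toward a low rating.
print(round(predict(0, 2), 2))
```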
- Content moderation and bad behavior
- Hate speech against groups
- Abuse against individuals
- Platform responses, including the human cost of manual content moderation, and the difficulty of algorithmic content moderation
- Counter-measures to platform responses
- Readings (2/10):
- Citron, Danielle K. "Addressing Cyber Harassment: An Overview of Hate Crimes in Cyberspace." Case Western Reserve Journal of Law, Technology & the Internet 6 (2015).
- Misinformation/disinformation:
- Taxonomies of misinformation
- Coordinated dissemination of false information, data voids
- The psychology of sharing false information
- Readings (2/12):
- Wardle, Claire, and Hossein Derakhshan. "Information disorder: Toward an interdisciplinary framework for research and policy making." Council of Europe report 27 (2017). Read Part 1 (pp. 20-48) only.
- Platform economics in markets for content
- Hosting, and responsibility for providing infrastructure
- Markets for rules
- Readings (2/14):
- Gillespie, Tarleton, Patricia Aufderheide, Elinor Carmi, Ysabel Gerrard, Robert Gorwa, Ariadna Matamoros-Fernández, Sarah T. Roberts, Aram Sinnreich, and Sarah Myers West. "Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates." Internet Policy Review 9, no. 4 (2020).
- Mathematical models of biased information
- Inducing a spectrum from information sources
- Modeling consumers of information as Bayesian agents
- Readings (2/19-2/21):
- Glaeser, Edward, and Cass R. Sunstein. "Does more speech correct falsehoods?" The Journal of Legal Studies 43.1 (2014): 65-93. To read: 65-76
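The Bayesian-agent perspective can be made concrete with a small sketch: an agent holding a prior belief about a binary claim updates it after each report from a source of known accuracy. (The numbers are illustrative, not drawn from the Glaeser–Sunstein model.)

```python
def bayes_update(prior, signal, accuracy):
    """Posterior P(claim is true) after observing a binary signal from a
    source that reports the true state with probability `accuracy`."""
    p_signal_given_true = accuracy if signal else 1 - accuracy
    p_signal_given_false = 1 - accuracy if signal else accuracy
    num = p_signal_given_true * prior
    return num / (num + p_signal_given_false * (1 - prior))

# Start agnostic; three confirming reports from a 70%-accurate source
# push belief well above 0.9 -- repeated one-sided signals add up fast.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, True, 0.7)
print(round(belief, 3))
```

The same update rule shows how an agent fed signals from a systematically biased source can converge confidently on a false belief.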
- Data collection, data aggregation, and the problem of privacy. Computing platforms are capable of collecting vast amounts of data about their users, and can analyze those data to make inferences about users' characteristics and behaviors. Data collection and analysis have become central to platforms' business models, but also present fundamental challenges to users' privacy expectations. Here, we describe the difficult choices that platforms must make about how they gather, store, combine, and analyze users' information, and what social and political impacts those practices can have.
- Privacy as a fundamental concept:
- Values served by privacy
- Locating privacy in the law
- The Panopticon
- Contextual integrity
- Psychological dimensions of privacy
- Evaluating common fallacies about privacy
- Readings (2/24-28):
- Solove, Daniel J. 2011. Nothing to Hide: The False Tradeoff Between Privacy and Security. To read: Chapters 2 and 5.
- Browne, Simone. Dark Matters: On the Surveillance of Blackness. 2015. To read: pages 76-83.
- Digital data and the limits of anonymization
- Aggregate things we can learn from collective data
- Sensitive things we can learn from data about individuals
- Networked dependencies between people's data
- Readings (3/3-5):
- Sweeney, L., 1997. Weaving technology and policy together to maintain confidentiality. The Journal of Law, Medicine & Ethics, 25(2-3). To read: pages 98-102, 108-110.
- Mayer, J., Mutchler, P. and Mitchell, J.C., 2016. Evaluating the privacy properties of telephone metadata. Proceedings of the National Academy of Sciences, 113(20), pp.5536-5541.
- Narayanan, A. and Shmatikov, V., 2008, May. Robust de-anonymization of large sparse datasets. In 2008 IEEE Symposium on Security and Privacy. To read: Sections 1, 2, 5, and 6. And by the same authors: Myths and fallacies of personally identifiable information. Communications of the ACM, 53(6), 2010, pp.24-26.
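Sweeney-style re-identification can be sketched in a few lines: join an "anonymized" dataset to a public roll on shared quasi-identifiers. (All records below are fabricated for illustration.)

```python
# "Anonymized" medical records: names removed, quasi-identifiers kept.
medical = [
    {"zip": "14850", "birth": "1975-07-22", "sex": "F", "diagnosis": "flu"},
    {"zip": "14850", "birth": "1980-01-03", "sex": "M", "diagnosis": "asthma"},
]

# Public voter roll: names plus the same quasi-identifiers.
voters = [
    {"name": "Alice", "zip": "14850", "birth": "1975-07-22", "sex": "F"},
    {"name": "Bob", "zip": "14853", "birth": "1980-01-03", "sex": "M"},
]

def link(records, roll, keys=("zip", "birth", "sex")):
    """Re-identify records by joining on shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in roll:
            if all(rec[k] == person[k] for k in keys):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

print(link(medical, voters))  # → [('Alice', 'flu')]
```

Removing names was not enough: the combination of ZIP code, birth date, and sex uniquely pins down Alice, which is exactly the mechanism in Sweeney's study.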
- Constitutional right to privacy
- Changes in technology change your expectations about privacy
- Interactions between government and firms on privacy matters
- Readings (3/7-10):
- Bankston, Kevin S. and Ashkan Soltani. Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones, Yale Law Journal Online 123 (2014): 335-357.
- Koepke, Logan, Emma Weil, Urmila Janardan, Tinuola Dada, and Harlan Yu. Mass Extraction: The Widespread Power of U.S. Law Enforcement to Search Mobile Phones. Upturn. 2020. To read: pages 4-39.
- Privacy in non-constitutional law
- Notice and consent model
- Data ownership model
- Readings (3/12):
- Nissenbaum, Helen. A contextual approach to privacy online. Daedalus 140, no. 4 (2011): 32-48.
- Collection and use of data
- The challenge of precommitment
- Case study of IDNYC
- The problem of culpability
- Readings (3/14):
- Seltzer, William, and Margo Anderson. The Dark Side of Numbers: The Role of Population Data Systems in Human Rights Abuses. Social Research 68.2 (2001): 481-513.
- Differential privacy
- Basic principles
- Mathematical model
- Applications to the U.S. Census and to research datasets
- Readings (3/17):
- Dwork, C. and Roth, A., 2014. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4). To read: pages 5-18.
- Readings (3/19):
- Ruggles, S., Fitch, C., Magnuson, D. and Schroeder, J., 2019, May. Differential privacy and census data: Implications for social and economic research. In AEA papers and proceedings (Vol. 109, pp. 403-08).
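The basic Laplace mechanism behind these applications can be sketched as follows (a toy illustration, not the Census Bureau's actual implementation): a count query has sensitivity 1, so adding Laplace noise with scale 1/ε yields ε-differential privacy.

```python
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sampled as the difference of two exponential draws."""
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def private_count(true_count, epsilon, rng):
    """A count query changes by at most 1 if one person's record changes
    (sensitivity 1), so Laplace(1/epsilon) noise gives
    epsilon-differential privacy for the released count."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
# The same query answered five times: each release is noisy, but the
# noise is calibrated so aggregate statistics remain useful.
print([round(private_count(1000, epsilon=0.5, rng=rng), 1) for _ in range(5)])
```

Smaller ε means larger noise and stronger privacy; averaging many noisy releases recovers the true count, which is why the privacy "budget" must be tracked across queries.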
- Surveillance of work and workers
- Scientific management and the history of workplace observation
- Legal protections
- New frontiers of workplace data collection
- Readings (3/21):
- Ajunwa, Ifeoma, Kate Crawford, and Jason Schultz. Limitless worker surveillance. Calif. L. Rev. 105 (2017). To read: pages 735-48.
- Privacy from whom?
- Stalking and abuse
- Open-source information
- Doxing
- Readings (3/24):
- Donovan, Joan. Refuse and Resist! 2017. Limn, issue 8.
- Freed, Diana, Sam Havron, Emily Tseng, Andrea Gallardo, Rahul Chatterjee, Thomas Ristenpart, and Nicola Dell. "Is my phone hacked?" Analyzing Clinical Computer Security Interventions with Survivors of Intimate Partner Violence. Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (2019): 1-24.
- The role of cryptography and security
- Public-key cryptography
- Secure multi-party computation
- Policy questions for encrypted data
- Readings (3/26):
- Bogetoft, P., Christensen, D.L., Damgård, I., Geisler, M., Jakobsen, T., Krøigaard, M., Nielsen, J.D., Nielsen, J.B., Nielsen, K., Pagter, J. and Schwartzbach, M., 2009, February. Secure multiparty computation goes live. In International Conference on Financial Cryptography and Data Security (pp. 325-343).
- Singh, Simon, 2000. The code book: the science of secrecy from ancient Egypt to quantum cryptography. Anchor. To read: Chapter 5.
- Readings (3/28):
- Abelson, Harold et al. Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications. MIT CSAIL Technical Report (2015).
- Gasser, Urs et al. Don’t Panic: Making Progress on the ‘Going Dark’ Debate. Report, Berkman Center for Internet & Society (Feb 1, 2016).
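A toy Diffie-Hellman exchange illustrates the public-key idea: two parties derive a shared secret over a public channel without ever transmitting it. (The tiny prime here is for illustration only; real deployments use moduli of 2048+ bits.)

```python
# Public parameters: a prime modulus and a generator, known to everyone.
p, g = 23, 5

a = 6                  # Alice's secret exponent (never sent)
b = 15                 # Bob's secret exponent (never sent)

A = pow(g, a, p)       # Alice publishes g^a mod p
B = pow(g, b, p)       # Bob publishes g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p

# Both sides compute g^(ab) mod p; an eavesdropper sees only p, g, A, B.
assert shared_alice == shared_bob
print(A, B, shared_alice)
```

The security rests on the difficulty of recovering the exponents from A and B (the discrete logarithm problem), which is also what makes the key-escrow debates in the readings above technically fraught.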
- Data-Driven Decision-Making. Algorithms trained using machine learning are increasingly being deployed as part of decision-making processes in a wide range of applications. We discuss how this development is the most recent in a long history of data-driven decision methodologies that companies, governments, and organizations have deployed. When these methods are used to evaluate people, in settings that include employment, education, credit, healthcare, and the legal system, there is the danger that the resulting algorithms may incorporate biases that are present in the human decisions they're trained on. And when the methods are evaluated using experimental interventions, it is important to understand how to apply principles for the ethical conduct of experiments with human participants.
- Principles of quantification in decision-making by organizations
- History of quantification and rationalization
- The rise of bureaucracies and administrative decision-making
- Decision-making via optimization
- The choice of an objective function
- Readings (4/7-9):
- Scott, James C. Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press, 2020. To read: pages 11-33.
- Mulvey, John M., and Sally Blount White. Computers in the government: Modeling and policy design. Public Productivity Review (1987): 35-43.
- Inequality and power
- Social stratification, structural embeddedness, intersectionality
- Historical perspectives on structural discrimination
- Principles from discrimination law: disparate treatment and disparate impact
- Readings (4/11):
- Fischer, Claude S., Michael Hout, Martín Sánchez Jankowski, Samuel R. Lucas, Ann Swidler, and Kim Voss. Inequality by design. The inequality reader (2018): 20-24.
- Small, Mario L., and Devah Pager. Sociological perspectives on racial discrimination. Journal of Economic Perspectives 34, no. 2 (2020): 49-67.
- The basic methodology of machine learning
- Features and labels
- Training procedures and evaluation
- The problem of interpretability
- Readings (4/14):
- Mullainathan, Sendhil, and Jann Spiess. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31, no. 2 (2017): 87-106. To read: pages 87-93 and 99-104.
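The features/labels/train/evaluate pipeline can be sketched with a deliberately tiny example (the loan data and the 1-nearest-neighbor rule are illustrative stand-ins for real datasets and models):

```python
# Each example is a feature vector plus a label. Hypothetical loan data:
# features are [income in $10k, debt ratio]; label is 1 if the loan was repaid.
train = [([3.0, 0.8], 0), ([4.0, 0.7], 0), ([7.0, 0.2], 1), ([9.0, 0.1], 1)]
test = [([8.0, 0.15], 1), ([3.5, 0.9], 0)]

def nearest_neighbor(x, data):
    """Predict the label of the closest training example (1-NN)."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(data, key=lambda ex: dist2(x, ex[0]))[1]

# Evaluate on held-out data the model never saw during training;
# accuracy on the training set itself would be misleadingly optimistic.
correct = sum(nearest_neighbor(x, train) == y for x, y in test)
print(f"accuracy: {correct / len(test):.2f}")
```

Note that the model never sees *why* loans were repaid; it only generalizes patterns in the features, which is precisely where the interpretability problem and the bias concerns of the next unit enter.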
- Sources of bias in algorithmic decision-making
- Bias in features and labels
- Bias in training procedures
- Implications for discrimination law
- Readings (4/16-18):
- Barocas, Solon, and Andrew D. Selbst. Big data's disparate impact. Calif. L. Rev. 104 (2016): 671. To read: 677-694.
- Buolamwini, Joy, and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on fairness, accountability and transparency. 2018.
- Chouldechova, Alexandra, and Aaron Roth. A snapshot of the frontiers of fairness in machine learning. Communications of the ACM 63.5 (2020): 82-89.
- Experiments as a research methodology
- Establishing causality
- Contrasts with observational data
- Principles and practice of A/B testing
- Spillover between individuals
- Explore/exploit trade-offs
- Readings (4/21):
- Kohavi, Ron, Roger Longbotham, Dan Sommerfield, and Randal M. Henne. Controlled experiments on the web: survey and practical guide. Data mining and knowledge discovery 18, no. 1 (2009): 140-181. To read: pages 140-152 (through Section 3.1).
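The statistics behind a basic A/B test can be sketched as a two-proportion z-test (the conversion numbers below are made up):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled standard error under the null of no difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 200 conversions out of 10,000; treatment: 240 out of 10,000.
z = two_proportion_z(200, 10_000, 240, 10_000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

In this made-up case z falls just short of 1.96, illustrating why large samples and pre-specified significance thresholds matter in practice.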
- Research ethics frameworks for conducting experiments
- The Belmont and Menlo reports
- IRBs and human subjects research
- Aversion to experiments
- Readings (4/23):
- Meyer, Michelle N., Patrick R. Heck, Geoffrey S. Holtzman, Stephen M. Anderson, William Cai, Duncan J. Watts, and Christopher F. Chabris. Objecting to experiments that compare two unobjectionable policies or treatments. Proceedings of the National Academy of Sciences 116, no. 22 (2019): 10723-10728.
- Salganik, Matthew J. Bit by bit: Social research in the digital age. Princeton University Press, 2019. To read: pages 281-288, 294-301.
- Inter-personal discrimination
- Principles from the behavioral sciences
- Audit studies as experimental investigations of bias
- Implications for user choice in online services
- Readings (4/25):
- Edelman, Benjamin, Michael Luca, and Dan Svirsky. Racial discrimination in the sharing economy: Evidence from a field experiment. American Economic Journal: Applied Economics 9, no. 2 (2017): 1-22.
- Formalizing notions of fairness for algorithms
- Equalization of error rates
- Enforcing calibrated predictions
- Inherent trade-offs between different guarantees
- Readings (4/28):
- Mayson, Sandra G. Bias in, bias out. Yale Law Journal 128 (2018): 2218. To read: pages 2221-2251.
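Fairness criteria like error-rate equalization can be checked directly from outcome data; this sketch computes per-group false positive rates on fabricated risk predictions (all data hypothetical):

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended).
outcomes = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 0), ("B", 0, 0),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the fraction
    wrongly flagged as high risk."""
    negatives = [pred for g, pred, y in outcomes if g == group and y == 0]
    return sum(negatives) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 3))
```

Here group B's false positive rate exceeds group A's, violating error-rate equalization; the impossibility results covered in lecture show that repairing this while also keeping predictions calibrated is in general not achievable when base rates differ.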
- Automated decisions in the physical world
- A taxonomy of cyber-physical systems, including robots, drones, and sensors
- Autonomous vehicles
- Autonomous weapons and their relation to theories of warfare
- Liability and culpability
- Anthropomorphism, care, and deception
- Readings (4/30):
- Bainbridge, Lisanne. Ironies of automation. Automatica 19, no. 6 (1983): 775-779.
- Darling, Kate. "'Who's Johnny?' Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy." Robot Ethics 2.0 (2015).
- Feedback loops in data-driven decision-making and generative AI
- Self-fulfilling predictions
- Emergent representations from user activity
- Long-term impacts of interventions
- Readings (5/3):
- Sweeney, Latanya. Discrimination in online ad delivery. Communications of the ACM 56, no. 5 (2013): 44-54.
- Lum, Kristian, and William Isaac. To predict and serve? Significance 13.5 (2016): 14-19.
- Noble, Safiya Umoja. Algorithms of oppression: How search engines reinforce racism. NYU Press, 2018. To read: pages 64-90.