CCAI9014 Artificial Intelligence
Course Description
The twenty-first century stands to bring about a fundamental shift in how we think about crime, punishment, and moral responsibility. In this course, we will critically study the law and ethics of artificial intelligence (AI) – namely, the possibilities and the myths of how data-driven models will shape and be shaped by the legal and political institutions within which they are embedded. We will study the extent to which AI models can predict future crimes and what that entails for traditional notions of moral responsibility, punishment, and rehabilitation. We will investigate how AI models reflect and exacerbate existing social biases, as well as how they create biases and unfair outcomes of their own. We will examine the way models are implemented in finance and capital markets, underwriting, and marketing, and the extent to which they can improve operational inefficiencies. We will study what digital privacy means, and how we need to reshape our conceptions of privacy in an algorithmic world. Finally, we will evaluate what makes models transparent or explainable, and whether and how so-called black box models are any different from human decision makers. The course will be built around several pivotal case studies, in which students will work in groups to solve legal and ethical problems brought about by algorithmic decision making.
Course Learning Outcomes
On completing the course, students will be able to:
- Explain how central philosophical concepts such as bias, fairness and justice can be applied to algorithmic decision making using AI models.
- Demonstrate an understanding of basic probabilistic literacy for everyday life.
- Apply normative concepts to think critically about how to design better social institutions.
- Improve their ability to work in groups to develop novel solutions to AI governance problems.
- Develop the ability to think critically about essential concepts such as privacy in the context of AI institutions.
Offer Semester and Day of Teaching
Second semester (Wed)
Study Load
Activities | Number of hours |
Lectures | 24 |
Tutorials | 8 |
Reading / Self-study | 26 |
Assessment: Essay / Report writing | 20 |
Assessment: Group project | 20 |
Assessment: In-class test (incl. preparation) | 24 |
Total: | 122 |
Assessment: 100% coursework
Assessment Tasks | Weighting |
Essay | 30% |
Group Project | 30% |
Final test | 30% |
In-class participation and discussions | 10% |
Required Reading
Reading:
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.
- Babic, B., & Cohen, I. G. (2023). The Algorithmic Explainability “Bait and Switch”. Minnesota Law Review.
- Babic, B., et al. (2021). Beware Explanations from AI in Health Care. Science, 373(6552), 284-286.
- BBC. (2014). The Birthday Paradox at the World Cup.
- Chalmers, D. J. (1995). Facing Up to the Hard Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200-219.
- James, G., et al. (2013). An Introduction to Statistical Learning. [Chap. 1]
- Gelman, A., et al. (2007). An Analysis of the NYPD Stop and Frisk Policy in the Context of Racial Bias. Journal of the American Statistical Association, 102(479), 813-823.
- Jackson, F. (1982). Epiphenomenal Qualia. The Philosophical Quarterly, 32(127), 127-136.
- Kearns, M., & Roth, A. (2019). The Ethical Algorithm. [Chap. 1]
- Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica.
- Meyer, J. (1969). Reflections on Some Theories of Punishment. Journal of Criminal Law and Criminology, 59(4), 595.
- Mlodinow, L. (2008). The Drunkard’s Walk: How Randomness Rules Our Lives. [Excerpts]
- Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435-450.
- O’Neil, C. (2016). Weapons of Math Destruction. [Chap. 8]
- Roberts, S. (2020, August 4). How to Think Like an Epidemiologist. New York Times.
- Ross, S. (2019). A First Course in Probability (10th ed.). [Chap. 1]
- Rudin, C., et al. (2020). The Age of Secrecy and Unfairness in Recidivism Prediction. Harvard Data Science Review.
- Solove, D. (2007). “I’ve Got Nothing to Hide” and Other Misunderstandings of Privacy. San Diego Law Review, 44, 745.
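As a taste of the everyday probabilistic literacy the readings above aim to build, the BBC piece on the Birthday Paradox turns on one short calculation: in a group of just 23 people, the chance that at least two share a birthday already exceeds one half. A minimal sketch of that calculation (assuming 365 equally likely birthdays and ignoring leap years):

```python
def shared_birthday_prob(n: int) -> float:
    """Probability that, among n people, at least two share a birthday.

    Computes 1 minus the probability that all n birthdays are distinct,
    assuming 365 equally likely days (leap years ignored).
    """
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(shared_birthday_prob(23))  # just over 0.5
```

With a World Cup squad of 23 players, a shared birthday is more likely than not, which is the counterintuitive point the BBC article illustrates.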
Movies:
- Ex Machina. (2014).
- Film Screening: Crime + Punishment. (2016).
- Minority Report. (2002).
Shows:
- Futurama, Law and Oracle. (2011).
Other Videos:
- 3Blue1Brown. Bayes’ Theorem.
- 3Blue1Brown. But What is a Neural Network.
- Numberphile. The Monty Hall Problem.
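The Numberphile video on the Monty Hall problem lends itself to a quick check by simulation. The sketch below (an illustration, not course material) estimates the contestant's win rate when switching doors versus staying:

```python
import random

def monty_hall(trials: int, switch: bool, seed: int = 0) -> float:
    """Estimate the Monty Hall win rate over many simulated games.

    Switching should win about 2/3 of the time; staying about 1/3.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's initial pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running `monty_hall(20_000, switch=True)` settles near 2/3, while `switch=False` settles near 1/3, matching the classical analysis.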
Course Co-ordinator and Teacher(s)
Course Co-ordinator | Contact |
Professor B. Babic, School of Humanities (Philosophy), Faculty of Arts / Institute of Data Science | Email: babic@hku.hk |
Teacher(s) | Contact |
Professor B. Babic, School of Humanities (Philosophy), Faculty of Arts / Institute of Data Science | Email: babic@hku.hk |