Science Fiction
Thu, Nov 16
Today's session investigates the astounding claims being made about the capabilities of some AI technologies. Information asymmetry plagues the field of AI, making it easy for grifters to move in and make an easy buck off common misunderstandings about what algorithms can do. Further complications come from academics who are looking for splashy topics to publish on but also misunderstand the algorithms and the data they are trained on. We'll attempt to dispel various myths and suggest some best practices for undertaking AI work.
We have three main in-class learning goals. By the end of lecture today you will:
- Understand why it is impossible to determine whether or not somebody is a criminal based on a photo of their face.
- Link earlier class discussions on hiring to problems with AI technologies that assess job candidates.
- Learn how to approach new technologies that appear too good to be true with a skeptical attitude.
The slides for today's lecture.
Read This:
Kevin Bowyer and Your Very Own Professor team up with colleagues to explain Why Algorithms Can't Predict Criminality from a Face
Mahdi Hashemi and Margeret Hall's retracted paper in the Journal of Big Data on Criminal Tendency Detection from Facial Images and the Gender Bias Effect
Xiaolin Wu and Xi Zhang respond to their detractors, insisting that the Machine Learning of Criminality Perceptions is Real
Blaise Agüera y Arcas, Margaret Mitchell and Alexander Todorov explain the Surprising Reemergence of Physiognomy
Drew Harwell writing in the Washington Post about the tech company HireVue, which markets an AI job interviewing platform
Angela Chen and Karen Hao Question HireVue's Claims in the MIT Technology Review
Stella Biderman and Your Very Own Professor provide some common-sense recommendations for Avoiding Pitfalls in Machine Learning Research
Walter Kirn writing for Harper's on Why Data Scientists Will Always Have Trouble Predicting the Future
These optional readings provide more context on emotion recognition and its applications, as well as the strange history of physiognomy:
AI is increasingly being used to identify emotions – here’s what’s at stake
AI emotion-detection software tested on Uyghurs
Do This:
Writing Reflection 09
See the instructions posted on the assignment's page.
This writing reflection is due on 11/21 at 5pm.
This Week's Dialogue Group Meeting
Find at least one hour to meet with your group to discuss the prompt of the week: What concerns about AI are legitimate at the present moment?
Schedule your group meeting for next week. How can the legitimate concerns about AI be addressed?
Reminder of Project 02 Deadline
This project is due Tuesday 11/21 at 5pm.
The project for this unit on AI will be a Letter to the Editor that summarizes your group's dialogues over the course of the next several weeks. Review the instructions that have been posted on the Project 02 page.
Please complete the Team Member Evaluation Form and drop it in the Google Drive folder that is shared with Louisa before the project deadline.