While there is a lot of excitement about AI research, there are also concerns about the way it might be implemented, used and abused.
In this episode Hannah investigates the more human side of the technology, some ethical issues around how it is developed and used, and the efforts to create a future of AI that works for everyone.
Interviewees: Verity Harding, Co-Lead of DeepMind Ethics and Society; Lila Ibrahim, DeepMind’s COO; and research scientists William Isaac and Silvia Chiappa.
Find out more about the themes in this episode:
- The Partnership on AI
- ProPublica: investigation into machine bias in criminal sentencing
- Science Museum – free exhibition: Driverless: Who is in control? (until Oct 2020)
- Survival of the best fit: An interactive game that demonstrates some of the ways in which bias can be introduced into AI systems, in this case for hiring
- Joy Buolamwini: AI, Ain’t I a Woman: A spoken word piece exploring AI bias, and systems not recognising prominent black women
- Hannah Fry: Hello World: How to be Human in the Age of the Machine
- DeepMind Ethics & Society
- Future of Humanity Institute: AI Governance: A Research Agenda
If you know of other resources we should link to, please help other listeners by either replying to us on Twitter (#DMpodcast) or emailing us at email@example.com. You can also use that address to send us questions or feedback on the series.
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)