Ethics & Society team

Exploring the real-world impacts of AI


As scientists and practitioners, we take responsibility for investigating the impacts of our work.

Overview

Securing safe, accountable, and socially beneficial technology cannot be an afterthought. With the right focus on ethical standards and safety, we have a better chance of realising AI’s potential benefits. By researching the ethical and social questions that AI raises, we ensure these topics remain at the heart of everything we do.

Social purpose

We start from the belief that AI should be used for socially beneficial purposes and always remain under meaningful human control. Understanding what this means in practice is essential.  

Finding ways to involve the broader society in our work is fundamental to our mission, so partnerships with others in the field of AI ethics are a crucial element of our approach.

We embrace scientific values like transparency, freedom of thought, and equality of access, and we deeply respect the independence and academic integrity of our researchers and partners.

Our work

Questions about AI extend far beyond the technology itself. Through our partnerships, we’ve created public lectures, forums, and resources to better understand the societal impacts of AI. Below are some of our recent highlights.

Launching public lectures

We partnered with the Royal Society on a free public lecture and panel series, You & AI. These lectures, featuring experts like Kate Crawford and Joseph Stiglitz, explored AI’s capabilities, future directions, and potential societal effects. Each lecture was recorded and is available to watch online.

Engaging citizens directly

Together with the RSA, we created the Forum for Ethical AI, a public engagement programme for discussing the use of automated decision-making tools. During this forum, citizen participants developed a critical framework for addressing transparency, accountability, and accessibility of AI technology.

Convening experts

In partnership with Princeton University, we organised a workshop to explore how criminal justice systems use AI technology. We brought together technologists and advocates to discuss solutions and create resources, directly informed by affected communities, which explore the harm that can be caused by predictive tools.

Themes

We want to promote research that ensures AI works for all. Our research themes are designed to reflect the key ethical challenges that exist for us and the wider AI community. We undertake research and collaborations in each of these areas, determined by the urgent challenges ahead.

Privacy, transparency, and fairness

AI systems can use large-scale and sometimes sensitive datasets, such as medical or criminal justice records. This raises important questions about protecting people’s privacy and ensuring that they understand how their data is used. Moreover, the data used to train automated decision-making systems can contain biases, producing systems that discriminate against certain groups of people.

  • How do concepts such as consent and ownership relate to using data in AI systems? 
  • What can AI researchers do to detect and minimise the effects of bias?
  • What policies and tools allow meaningful audits of AI systems and their data?

Fellows

Our fellows are independent advisors who provide critical feedback and guidance. These research fellows bring not only their expertise but also their values and their capacity for asking challenging questions. Engaging with world-class philosophers, economists, and practitioners helps us better understand the implications of AI and keeps us focused on the questions that matter.

Photo of Nick Bostrom
Nick Bostrom

Professor at the University of Oxford, Director of the Future of Humanity Institute and the Governance of Artificial Intelligence Program

Photo of Diane Coyle
Professor Diane Coyle

Bennett Professor of Public Policy, University of Cambridge

Photo of Edward Felten
Professor Edward W Felten

Professor of Computer Science and Public Affairs, Founding Director of Princeton's Center for Information Technology Policy

Photo of James Manyika
James Manyika

Senior Partner at McKinsey & Company and Chair of the McKinsey Global Institute

Photo of Jeffrey Sachs
Professor Jeffrey D Sachs

Professor of Economics, Director of the Center for Sustainable Development at Columbia University and Senior UN Advisor

Partners

Collaboration, diversity of thought, and meaningful public engagement are key if we are to develop and apply AI for maximum benefit. The Ethics & Society team works with a variety of partners to support and learn from the broadest possible range of viewpoints, creating space for interdisciplinary collaboration that can approach complex challenges in creative ways.

We will always be open about who we work with and which projects we fund. All of our research grants will be unrestricted, and we will never attempt to influence or pre-determine the outcomes of studies we commission. When we collaborate or co-publish with external researchers, we will disclose whether they have received funding from us.

Article 36

Article 36 is a non-profit organisation working to prevent harm caused by certain weapons. A founding member of the Campaign to Stop Killer Robots, it developed the concept of “meaningful human control” as an approach to guide international discussions on autonomous weapons systems. Article 36 is also part of the steering group of the International Campaign to Abolish Nuclear Weapons (ICAN), which was awarded the 2017 Nobel Peace Prize, and has led efforts to establish the impact of explosive weapons in populated areas as an international humanitarian priority.

Article 36 is led by Richard Moyes, previously co-chair of the Cluster Munition Coalition. Earlier in his career, Richard established and managed explosive ordnance disposal projects for the UK NGO Mines Advisory Group. He is an Honorary Fellow at the University of Exeter and serves on the Aviation Futures policy panel of the UK’s Civil Aviation Authority.

We have worked with Article 36 to explore the risks that intelligent systems pose under international human rights law and international humanitarian law.

Find out more