Ethics & Society

Exploring the real-world impacts of AI

Launching public lectures
We partnered with the Royal Society on a free public lecture and panel series, You & AI. These lectures, featuring experts like Kate Crawford and Joseph Stiglitz, explored AI’s capabilities, future directions, and potential societal effects. Each lecture was recorded and is available to watch online.
Engaging citizens directly
Together with the RSA, we created the Forum for Ethical AI, a public engagement programme for discussing the use of automated decision-making tools. During this forum, citizen participants developed a critical framework for addressing transparency, accountability, and accessibility of AI technology.
Convening experts
In partnership with Princeton University, we organised a workshop to explore how criminal justice systems use AI technology. We brought together technologists and advocates to discuss solutions and create resources, directly informed by affected communities, which explore the harm that can be caused by predictive tools.

Privacy, transparency, and fairness

AI systems can use large-scale and sometimes sensitive datasets, such as medical or criminal justice records. This raises important questions about protecting people’s privacy and ensuring that they understand how their data is used. Moreover, the data used to train automated decision-making systems can contain biases, producing systems that may discriminate against certain groups of people.

  • How do concepts such as consent and ownership relate to using data in AI systems?
  • What can AI researchers do to detect and minimise the effects of bias?
  • What policies and tools allow meaningful audits of AI systems and their data?

AI morality and values

AI systems could make societies fairer and more equal. But different groups of people hold different values, meaning it is difficult to agree on universal principles. Likewise, endorsing values held by a majority could lead to discrimination against minorities.

  • How can we ensure that the values designed for AI systems reflect society?  
  • How do we prevent AI systems from causing discrimination?
  • How do we integrate inclusive values into AI systems?

Governance and accountability

The creation and use of powerful new technologies require effective governance and regulation, ensuring they are used safely and with accountability. In the case of AI, new standards or institutions may be needed to oversee its use by individuals, states, and the private sector, both internationally and within national borders.

  • What kinds of governance make sense for rapidly developing technologies like AI?
  • Can existing institutions uphold the rights of everyone affected by AI?
  • What can we learn from other fields like biotechnology or genetics that might influence how AI is used?

AI and the world’s complex challenges

By uncovering patterns in complex datasets and suggesting promising new ideas and strategies, AI technologies may one day help solve some of humanity’s most urgent problems. But applying AI technologies to real-world problems takes careful consideration.

  • Which problems could AI help address?
  • How can AI research best contribute?
  • Who should we be working with to help solve problems?

Misuse and unintended consequences

While AI systems have great potential, they also come with risks. For example, they might malfunction or not operate as intended. We might also rely on them too heavily in situations that go beyond their abilities, or a technology designed to help society might be repurposed in unethical or harmful ways.

  • How can these risks be monitored across the world?
  • What structures can be put in place to minimise harm?
  • How do we ensure that people maintain control of AI systems?

Economic impact: inclusion and equality

Like previous waves of technology, AI could contribute to a huge increase in productivity. However, it could also lead to the widespread displacement of jobs and alter economies in ways that disproportionately affect some sections of the population. This poses important questions about the kinds of societies and economies we want to build.

  • How can we anticipate the social or economic impacts of AI?
  • What new opportunities are created?
  • How do we ensure AI has a net positive effect on the world?

Article 36

Article 36 is a non-profit organisation working to prevent harm caused by certain weapons. Led by Richard Moyes, previously co-chair of the Cluster Munition Coalition, it is a founding member of the Campaign to Stop Killer Robots. The organisation developed the concept of “meaningful human control” as an approach to guide international discussions on autonomous weapons systems. Article 36 is also part of the steering group of the International Campaign to Abolish Nuclear Weapons (ICAN), which was awarded the 2017 Nobel Peace Prize, and has led efforts to establish the impact of explosive weapons in populated areas as an international humanitarian priority. Previously, Richard established and managed explosive ordnance disposal projects for the UK NGO Mines Advisory Group. He is an Honorary Fellow at the University of Exeter and serves on the Aviation Futures policy panel of the UK’s Civil Aviation Authority. We have worked with Article 36 to explore the risks of intelligent systems in international human rights law and international humanitarian law.

Center for Information Technology Policy - Princeton University

The Center for Information Technology Policy is an interdisciplinary center at Princeton University, focussing on research, teaching, and events that address digital technologies as they interact with society. CITP and DeepMind partnered to organise a workshop exploring how AI is used in the US criminal justice system. The workshop brought together civil and human rights groups with technologists to address the lack of fairness, accountability, and transparency that can arise when AI/ML technology is used in the provision of public services.

Digital Asia Hub

Digital Asia Hub is an independent, non-profit think tank focused on internet and society research. At the core of the Hub is independent and interdisciplinary research exploring both the opportunities and challenges related to digital technology, innovation, and society in Asia. DeepMind has provided support for the Hub to expand their regional efforts on AI, ethics, and governance.

Hoffmann Centre for Sustainable Resource Economy

The Hoffmann Centre is an organisation based within Chatham House whose goal is to create a sustainable resource economy, in which the world’s citizens and environment thrive together, now and in the future. Their mission is to accelerate the uptake of smart policies, technologies, and business models that will reshape the world’s demand for resources and transform the global economy. DeepMind and the Hoffmann Centre partnered to organise a series of workshops focused on ways that AI can transform our approach to complex global challenges, including sustainability in the food and land use system, deep decarbonisation, and reducing emissions in major industries.


Involve

Involve is a charity on a mission to put people at the heart of decision-making. DeepMind and Involve partnered to organise a series of three half-day roundtables investigating what meaningful public engagement around AI and ethics looks like, and exploring how these methods and best practices can be built into decision-making by researchers, technologists, and policymakers.

Leverhulme Centre for the Future of Intelligence (CFI)

The mission of the Leverhulme Centre for the Future of Intelligence (CFI) is to create a new interdisciplinary community of researchers, with strong links to technologists and the policy world and a clear practical goal: to work together to ensure that humans make the best use of the opportunities presented by AI. With support from DeepMind Ethics & Society, CFI will launch a series of roundtables and publish new research on topics related to the interpretability of AI systems. DeepMind has also provided support for CFI’s Global AI Narratives programme.

Oxford Internet Institute’s Digital Ethics Lab

The Digital Ethics Lab (DELab) is part of the Oxford Internet Institute (OII), a multidisciplinary research and teaching department of the University of Oxford dedicated to the social science of the internet. DELab’s mission is to help design a better information society. Its goal is to identify the benefits and enhance the opportunities of digital innovation as a force for good, and to avoid or mitigate its risks and shortcomings. Its work builds on Oxford’s expertise in conceptual design, horizon scanning, foresight analysis, and translational research on ethics, governance, and policy-making. With support from DeepMind Ethics & Society, OII’s DELab has conducted research on explainable and accountable algorithms and automated decision-making in Europe.

Partnership on AI

DeepMind is pleased to be a founding member of the Partnership on AI (PAI), a global nonprofit organisation committed to the creation and dissemination of best practices in artificial intelligence. By gathering the leading companies, organisations, and people who are affected by artificial intelligence in different ways, PAI establishes a common ground between entities which otherwise might not be working together. Together, these groups serve as a uniting force for good in the AI ecosystem. PAI convenes more than 100 partner organisations from around the world to realise the promise of artificial intelligence. DeepMind has also supported the establishment of a PAI fellowship focused on diversity and inclusion.

The AI Now Institute at NYU

The AI Now Institute at NYU is an independent, interdisciplinary research initiative dedicated to understanding the social and economic implications of AI. AI Now conducts empirical research focused on AI across four key areas: bias and inclusion, labor change and automation, critical infrastructure and safety, and basic rights and liberties. With support from DeepMind Ethics & Society, AI Now hosts ten two-year NYU postdoctoral positions to advance research related to AI Now's mission.

The Alan Turing Institute

The Alan Turing Institute is the national institute for data science and artificial intelligence, with headquarters at the British Library. DeepMind has committed an unrestricted charitable donation to the Institute to support research in data science and artificial intelligence. This unrestricted gift will help the Turing Institute support the areas of greatest need that are strategically important to its mission.

The Institute for Policy Research (IPR) at the University of Bath

The Institute for Policy Research (IPR) at the University of Bath aims to further the public good through research into policy issues. At the heart of the IPR lies its ability to facilitate exchange between researchers, practitioners, and policymakers. Bringing diverse perspectives together, the IPR produces reports, policy briefs, and empirical research that inform and influence public policy debates. With an unrestricted donation from DeepMind Ethics & Society, IPR will conduct research that aims to provide a better understanding of the broader relationship between labour market changes in Europe and attitudes towards basic income and welfare. This research helps assess the case for a universal basic income and alternative reform packages in Europe through comparative regression analysis and microsimulation of fiscal and distributional effects.

The Royal Society

The Royal Society is a self-governing fellowship of many of the world’s most distinguished scientists drawn from all areas of science, engineering, and medicine. The Society’s fundamental purpose, reflected in its founding charters of the 1660s, is to recognise, promote, and support excellence in science, and to encourage the development and use of science for the benefit of humanity. With support from DeepMind Ethics & Society, The Royal Society launched You & AI, a public lecture series that explores cutting-edge AI research and its implications for society, building on the Society’s recent projects in these areas. Lectures from leading figures in AI, and from those thinking about its societal consequences, provide a public forum to explore AI’s capabilities, future directions, and potential societal effects.

The Royal Society for the encouragement of Arts, Manufactures, and Commerce

The RSA is a charity which seeks to harness human potential to address the challenges that society faces. The mission of the RSA is to enrich society through ideas and action. DeepMind and the RSA partnered to create the Forum for Ethical AI, a series of citizen juries that explore automated decision-making. These events have used immersive scenarios to help participants understand the ethical issues raised by automated decision-making systems, and facilitated public engagement on some of the most pressing issues facing society today.


WITNESS

WITNESS is an international non-profit that makes it possible for anyone, anywhere to use video and technology to protect and defend human rights. Working alongside both local communities and technology giants, WITNESS fills critical gaps in the use of video and technology for human rights. DeepMind has provided support for WITNESS to expand their research and programs exploring technical and societal solutions to emerging threats from so-called deepfakes and other forms of AI-generated synthetic media.

Team profile
Sean Legassick
Head of Ethics Research

Sean’s background is in software engineering, AI, and sociology, with a long-standing interest in how technology and society interact and shape our world.

Sean focuses on finding the best ways to contribute philosophy and social science expertise to the ethical advancement of AI research and development.

“We have an unprecedented opportunity at DeepMind to collaborate with the world’s best AI researchers to address complex ethical challenges.”
Team profile
Jennifer Bernal
Public Policy Manager

Jennifer has worked at the intersection of public policy and technology for a decade. She previously managed Google’s policy strategy for media and intellectual property in Europe, the Middle East, and Africa.

Jennifer works with governments and the policy community, supporting discussions about the governance of new technologies and ensuring that public interests in creating safe and ethical AI are reflected in DeepMind’s research.

“As a society, we have yet to figure out what governance systems will help maximise the benefits of AI, and minimise its risks. DeepMind continuously reflects on this question and I am excited to help advance this conversation.”
Team profile
Iason Gabriel
Senior Research Scientist

Iason is a political theorist and philosopher by training. Before joining DeepMind he worked for the United Nations and also taught politics at Oxford University for a number of years.

Iason’s work focuses on how to ensure that the systems we build are aligned with human values. He also teaches ethics to researchers at DeepMind.

“We aim to build safe and ethical AI systems. It’s great to be somewhere people think seriously not only about what they build but why, as well as the wider social purpose of technology.”