With the right focus on ethical standards and safety, we stand a better chance of realising AI’s potential benefits. By researching the ethical and social questions involving AI, we ensure these topics remain at the heart of everything we do.
We start from the belief that AI should be used for socially beneficial purposes and always remain under meaningful human control. Understanding what this means in practice is essential.
Finding ways to involve the broader society in our work is fundamental to our mission, so partnerships with others in the field of AI ethics are a crucial element of our approach.
We embrace scientific values like transparency, freedom of thought, and equality of access, and we deeply respect the independence and academic integrity of our researchers and partners.
AI systems can use large-scale and sometimes sensitive datasets, such as medical or criminal justice records. This raises important questions about protecting people’s privacy and ensuring that they understand how their data is used. Also, the data used for training automated decision-making systems can contain biases, creating systems that might discriminate against certain groups of people.
AI systems could make societies fairer and more equal. But different groups of people hold different values, meaning it is difficult to agree on universal principles. Likewise, endorsing values held by a majority could lead to discrimination against minorities.
The creation and use of powerful new technologies requires effective governance and regulation, ensuring they are used safely and with accountability. In the case of AI, new standards or institutions may be needed to oversee its use by individuals, states, and the private sector - both internationally and within national borders.
By uncovering patterns in complex datasets and suggesting promising new ideas and strategies, AI technologies may one day help solve some of humanity’s most urgent problems. But applying AI technologies to real-world problems takes careful consideration.
While AI systems have great potential, they also come with risks. For example, they might malfunction or not operate in the ways they were intended. We might also rely on them too heavily in situations that go beyond their abilities, or a technology designed to help society might be repurposed in unethical or harmful ways.
Like previous waves of technology, AI could contribute to a huge increase in productivity. However, it could also lead to the widespread displacement of jobs and alter economies in ways that disproportionately affect some sections of the population. This poses important questions about the kinds of societies and economies we want to build.
Article 36 is a non-profit organisation working to prevent harm caused by certain weapons. Led by Richard Moyes, previously co-chair of the Cluster Munition Coalition, it is a founding member of the Campaign to Stop Killer Robots. The organisation developed the concept of “meaningful human control” as an approach to guide international discussions on autonomous weapons systems. Article 36 is also part of the steering group of the International Campaign to Abolish Nuclear Weapons (ICAN), which was awarded the 2017 Nobel Peace Prize, and has led efforts to establish the impact of explosive weapons in populated areas as an international humanitarian priority. Previously, Richard established and managed explosive ordnance disposal projects for the UK NGO Mines Advisory Group. He is an Honorary Fellow at the University of Exeter and serves on the Aviation Futures policy panel of the UK’s Civil Aviation Authority. We have worked with Article 36 to explore the risks of intelligent systems under international human rights law and international humanitarian law.
The Center for Information Technology Policy is an interdisciplinary center at Princeton University, focussing on research, teaching, and events that address digital technologies as they interact with society. CITP and DeepMind partnered to organise a workshop on the use of AI in the US criminal justice system. This workshop brought together civil and human rights groups with technologists to explore solutions to a lack of fairness, accountability, and transparency when AI/ML technology is used in the provision of public services.
Digital Asia Hub is an independent, non-profit think tank focused on internet and society research. At the core of the Hub is independent and interdisciplinary research exploring both the opportunities and challenges related to digital technology, innovation, and society in Asia. DeepMind has provided support for the Hub to expand its regional efforts on AI, ethics, and governance.
The Hoffmann Centre is an organisation based within Chatham House whose goal is to create a sustainable resource economy, in which the world’s citizens and environment thrive together, now and in the future. Their mission is to accelerate the uptake of smart policies, technologies, and business models that will reshape the world’s demand for resources and transform the global economy. DeepMind and the Hoffmann Centre partnered to organise a series of workshops focused on ways that AI can transform our approach to complex global challenges, including sustainability in the food and land use system, deep decarbonisation, and reducing emissions in major industries.
Involve is a charity that’s on a mission to put people at the heart of decision-making. DeepMind and Involve partnered to organise a series of three half-day roundtables to investigate what meaningful public engagement looks like around AI and ethics, and to explore how these methods and best practices can be built into decision-making by researchers, technologists, and policymakers.
The mission of the Leverhulme Centre for the Future of Intelligence (CFI) is to create a new interdisciplinary community of researchers, with strong links to technologists and the policy world and a clear practical goal: to work together to ensure that humans make the best use of the opportunities presented by AI. With support from DeepMind Ethics & Society, CFI will launch a series of roundtables and publish new research on topics related to the interpretability of AI systems. DeepMind has also provided support for CFI’s Global AI Narratives programme.
The Digital Ethics Lab (DELab) is part of the Oxford Internet Institute (OII), the world's leading research and teaching department of the University of Oxford, dedicated to the social science of the internet. DELab’s mission is to help design a better information society. Its goal is to identify the benefits and enhance the opportunities of digital innovation as a force for good, and avoid or mitigate its risks and shortcomings. Its work builds on Oxford’s expertise in conceptual design, horizon scanning, foresight analysis, and translational research on ethics, governance, and policy-making. With support from DeepMind Ethics & Society, OII’s DELab has conducted research on explainable and accountable algorithms and automated decision-making in Europe.
DeepMind is pleased to be a founding member of the Partnership on AI (PAI), a global nonprofit organisation committed to the creation and dissemination of best practices in artificial intelligence. By gathering the leading companies, organisations, and people who are affected by artificial intelligence in different ways, PAI establishes a common ground between entities which otherwise might not be working together. Together, these groups serve as a uniting force for good in the AI ecosystem. PAI convenes more than 100 partner organisations from around the world to realise the promise of artificial intelligence. DeepMind has also supported the establishment of a PAI fellowship focused on diversity and inclusion.
The AI Now Institute at NYU is an independent, interdisciplinary research initiative dedicated to understanding the social and economic implications of AI. AI Now conducts empirical research focused on AI across four key areas: bias and inclusion, labor change and automation, critical infrastructure and safety, and basic rights and liberties. With support from DeepMind Ethics & Society, AI Now hosts ten two-year NYU postdoctoral positions to advance research related to AI Now's mission.
The Alan Turing Institute is the national institute for data science and artificial intelligence, with headquarters at the British Library. DeepMind has committed an unrestricted charitable donation to the Institute to support research in data science and artificial intelligence. This unrestricted gift will help enable the Turing Institute to support areas with the greatest need that are strategically important to its mission.
The Institute for Policy Research (IPR) at the University of Bath aims to further the public good through research into policy issues. At the heart of the IPR lies its ability to facilitate exchange between researchers, practitioners, and policymakers. Bringing diverse perspectives together, the IPR produces reports, policy briefs, and empirical research that inform and influence public policy debates. With an unrestricted donation from DeepMind Ethics & Society, IPR will conduct research that aims to provide a better understanding of the broader relationship between labour market changes in Europe and attitudes towards basic income and welfare. This research helps assess the case for a universal basic income and alternative reform packages in Europe through comparative regression analysis and microsimulation of fiscal and distributional effects.
The Royal Society is a self-governing fellowship of many of the world’s most distinguished scientists drawn from all areas of science, engineering, and medicine. The Society’s fundamental purpose, reflected in its founding charters of the 1660s, is to recognise, promote, and support excellence in science, and to encourage the development and use of science for the benefit of humanity. With support from DeepMind Ethics & Society, The Royal Society launched You & AI, a public lecture series that explores cutting-edge AI research and its implications for society, building on the Society’s recent projects in these areas. Lectures from leading figures in AI and those thinking about its societal consequences provide a public forum to explore AI’s capabilities, future directions, and potential societal effects.
The RSA is a charity which seeks to harness human potential to address the challenges that society faces. The mission of the RSA is to enrich society through ideas and action. DeepMind and the RSA partnered to create the Forum for Ethical AI, a series of citizen juries that explore automated decision-making. These events have used immersive scenarios to help participants understand the ethical issues raised by automated decision-making systems, and facilitated public engagement on some of the most pressing issues facing society today.
WITNESS is an international non-profit that makes it possible for anyone, anywhere to use video and technology to protect and defend human rights. Working alongside both local communities and technology giants, WITNESS fills critical gaps in the use of video and technology for human rights. DeepMind has provided support for WITNESS to expand its research and programs exploring technical and societal solutions to emerging threats from so-called deepfakes and other forms of AI-generated synthetic media.
Sean’s background is in software engineering, AI, and sociology, with a long-standing interest in how technology and society interact and shape our world.
Sean focuses on finding the best ways of contributing philosophy and social science expertise to the ethical advancement of AI research and development.
Jennifer has worked at the intersection of public policy and technology for a decade. She previously managed Google’s policy strategy for media and intellectual property in Europe, the Middle East, and Africa.
Jennifer works with governments and the policy community, supporting discussions about the governance of new technologies and ensuring that public interests in creating safe and ethical AI are reflected in DeepMind’s research.
Iason is a political theorist and philosopher by training. Before joining DeepMind, he worked for the United Nations and also taught politics at Oxford University for a number of years.
Iason’s work focuses on how to ensure that the systems we build are aligned with human values. He also teaches ethics to researchers at DeepMind.