We believe that artificial intelligence (AI) should be one of society’s most useful inventions. We research and build AI systems that learn how to solve problems and advance scientific discovery for all.
However, securing safe, accountable, and socially beneficial AI requires a proactive approach to the actual and potential impacts of AI on human rights. As scientists and practitioners, we take responsibility for investigating human rights issues that may arise from our research.
We are committed to respecting internationally recognized human rights as enshrined in the Universal Declaration of Human Rights and its implementing treaties. We are committed to implementing an approach based on the UN Guiding Principles on Business and Human Rights.
Because our research is broad and diverse, we assess impacts against the full range of relevant human rights.
Our commitment to human rights is informed by the right to share in scientific advancement and its benefits.
Given the long-term impacts of our work, and our frequent focus on the basic building blocks of science, we seek foresight into potential impacts over extended time periods. We will continue to refine our approach as our work evolves and our knowledge improves.
Oversight and responsibility for this human rights policy resides with our interdisciplinary Institutional Review Committee (IRC), which meets regularly to carefully evaluate DeepMind projects, papers, and collaborations.
We’ve carefully designed our review process to include internal and external experts from a wide range of disciplines, with machine learning researchers, ethicists, and safety experts sitting alongside engineers, security experts, policy professionals, and more. These diverse voices regularly identify ways to expand the benefits of our technologies, suggest areas of research and applications to change or slow, and highlight projects where further external consultation is needed. The Institutional Review Committee will review the contents of this policy on an annual basis.