As with all cutting-edge science, responsible AI development is an area of continual, collective learning. From the outset, we have championed the development of robust principles to guide these efforts.
This includes the active pursuit of opportunities where AI can unlock widespread societal benefit, and equally active efforts to guard against harmful uses. It means pioneering with care in the spirit of the scientific method, continually expanding our understanding and paying close attention to the impacts as well as the intent of our work.
Below is the current manifestation of these principles at DeepMind. They are designed for our role as a research-driven science company and are consistent with Google's AI Principles.
We commit to:
- Social benefit: Advancing the development, distribution and use of our technologies for broad social benefit, particularly in those application areas to which AI technologies are uniquely suited, such as advancing science and addressing climate change and sustainability;
- Scientific excellence & integrity: Achieving and maintaining the highest levels of scientific excellence and integrity through rigorously applying the scientific method and being at the forefront of artificial intelligence research and development;
- Safety and ethics: Upholding and contributing to best practices in the fields of AI safety and ethics, including fairness and privacy, to avoid unintended outcomes that create risks of harm;
- Accountability to people: Designing AI systems that are aligned with and accountable to people, with appropriate levels of interpretability and human direction and control, and engaging with a wide range of stakeholder groups to gather feedback and insights;
- Sharing knowledge responsibly: Sharing scientific advances thoughtfully and responsibly by continuously evaluating our work, including our research and publications, to maximise their potential for social benefit and minimise potential harms; and
- Diversity, equity and inclusion: Advancing diversity, equity and inclusion in every part of our organisation and in the AI ecosystem, including advocating for fair and just outcomes as AI technologies are applied.
We will not pursue:
- Harmful technologies: Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints;
- Weapons: Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people;
- Surveillance technology: Technologies that gather or use information for surveillance violating internationally accepted norms; and
- Violations of international law or human rights: Technologies whose purpose contravenes widely accepted principles of international law and human rights.