We believe this approach also means ruling out the use of AI technology in certain fields. For example, we’ve signed public pledges against using our technologies for lethal autonomous weapons, alongside many others from the AI community.
These issues go well beyond any one organisation. Our ethics team works with many brilliant non-profits, academics, and other companies, and creates forums for the public to explore some of the toughest issues. Our safety team also collaborates with other leading research labs, including colleagues at Google, OpenAI, and the Alan Turing Institute.
It’s also important that the people building AI reflect the broader society. We’re working with universities on scholarships for people from underrepresented backgrounds, and we support community efforts such as Women in Machine Learning and the African Deep Learning Indaba.
Technical safety is a core element of our research. Our goal is to ensure that the AI systems of the future are proven to be safe, because we’ve built them that way. Just as software engineering has a set of best practices for security and reliability, our AI safety teams develop approaches to specification, robustness, and assurance for AI systems, both now and in the future.
Our team of ethicists and policy researchers works closely with our AI research team to understand how technical advances will impact society, and to find ways to reduce risk.
We also partner with outside experts and the general public to find answers together. We’ve supported partners such as the Royal Society and the RSA in carrying out public discussions and citizens’ juries on AI ethics, and have given unrestricted financial grants to several universities working on these issues. We also co-founded the Partnership on AI to bring together academics, charities, and company labs to solve common challenges.