Research

Pioneering intelligent systems with scientific rigour

Control & Robotics


General purpose learning systems must be able to cope with the richness and complexity of the real world. This challenge drives the control and robotics teams at DeepMind, which aim to create mechanical systems that can learn to perform complex manipulation tasks with minimal prior knowledge. The shared ambition is to create systems that are data-efficient, reliable, and robust.

Deep Learning


The development and use of deep neural networks underpins much of the current wave of AI research and is a critical technique for many modern applications such as machine translation. Deep learning methods are at the core of many research areas at DeepMind, including deep reinforcement learning, generative models, theory and optimisation, transfer learning, computer vision, program synthesis, and hierarchical reinforcement learning.

Neuroscience

The brain is the best example of a general purpose learning system, and we use it as inspiration for our algorithms. We conduct experiments to try to understand how human intelligence works, from memory and learning to internal navigation systems and motor control. These insights are then used to build the next generation of algorithms. We also develop tools inspired by neuroscience that can probe our AI systems in the same way a neuroscientist studies neural circuits in the brain, an important step towards building interpretable AI systems.

Reinforcement Learning


Giving computer systems the ability to learn through trial-and-error has shaped many of DeepMind’s most well-known projects including AlphaGo, AlphaZero, and AlphaStar. We continuously push the boundaries of this powerful technique, advancing areas such as credit assignment, planning, locomotion, and meta-learning.
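
To make the trial-and-error idea concrete, here is a minimal sketch (not DeepMind's code; the environment and every name in it are illustrative assumptions): tabular Q-learning with epsilon-greedy exploration on a toy one-dimensional walk.

```python
# Illustrative sketch only: tabular Q-learning with epsilon-greedy exploration
# on a toy one-dimensional walk. The agent tries actions, observes rewards,
# and gradually improves its value estimates.
import random

N_STATES = 5                             # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]                       # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: reward 1 only when the goal state is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    return next_state, (1.0 if next_state == N_STATES - 1 else 0.0)

def greedy(state):
    """Highest-valued action, breaking ties at random."""
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(500):
    state = 0
    for _ in range(200):                          # cap episode length
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state, reward = step(state, action)  # trial ...
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # ... and error: nudge the estimate towards the observed outcome
        # (a one-step form of credit assignment).
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if state == N_STATES - 1:
            break
```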

Safety

We study theoretical and practical problems that might arise when building general purpose learning systems. These problems fall loosely into three categories: specification (defining the purpose of a system), robustness (designing systems that can withstand outside perturbations), and assurance (monitoring and controlling a system's activity). Our work includes understanding the behaviour of systems, including unintended behaviours or side effects; aligning agents with the goals, preferences, and ethics of their operators; understanding the ways in which artificial intelligence might want to modify itself over time; and developing approaches to containing or restricting the scope, behaviour, or design of a system.

Theory & Foundations


We focus on the theoretical foundations of machine learning to understand the limits of current architectures and support the development of new, efficient, and effective learning algorithms. Our researchers cover a wide range of topics including passive, active, partial, and full information feedback learning, as well as representation, supervised, and unsupervised learning. In all cases, we aim to create principled solutions that are robust and scalable.

Unsupervised Learning & Generative Models


Unsupervised learning is a powerful technique that allows systems to learn directly from datasets that don't have specific labels or rewards. This is an important capability for AI systems, allowing them to learn about and make sense of their environment in much the same way a child learns through play and observation. We work on various approaches to generative and predictive models of unstructured data streams, such as text, images, and video.
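
As a loose illustration of learning from unlabelled data (a toy sketch, not a DeepMind model; the corpus and names are placeholders), the snippet below fits a character bigram model to raw text and then samples new text from it.

```python
# Illustrative sketch: a character bigram model fitted to raw, unlabelled text,
# then sampled from to generate new text.
import random
from collections import defaultdict

corpus = "the quick brown fox jumps over the lazy dog " * 50  # placeholder data

# "Training": count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample(length=40, seed="t"):
    """Generate text by repeatedly sampling the next character."""
    out = seed
    for _ in range(length):
        followers = counts[out[-1]]
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(sample())
```

Modern generative models replace the bigram table with deep networks, but the training signal is the same: predict the data itself rather than a label.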

WaveNet: A generative model for raw audio
WaveNet generates realistic, human-sounding speech; when it was first introduced, it reduced the gap between computer and human performance by over 50%. It now powers the voice of the Google Assistant.
Giving doctors a headstart on acute kidney injury
Our technology is helping doctors diagnose acute kidney injury (AKI) up to 48 hours earlier than current methods. Early detection means patients can receive better preventative care, avoid invasive procedures, and reduce costs.
More accurately identifying breast cancer
We worked with Google Health, Northwestern University, Cancer Research UK and Royal Surrey County Hospital to develop an AI system that can better identify breast cancer in X-rays across populations and systems.
AlphaStar plays StarCraft II at Grandmaster level
AlphaStar is the first AI to reach the top league of StarCraft II without any restrictions. Understanding the potential and limitations of open-ended learning like this is a critical step towards creating robust systems for real-world domains.
AlphaZero: Shedding new light on chess, shogi, and Go
AlphaZero learned to play three famously complex games, becoming the strongest player in history for each. Learning entirely from scratch, it developed its own distinctive style that continues to inspire human grandmasters.
DQN: Human-level control of Atari games
A great challenge in AI is building flexible systems that can take on a wide range of tasks. Our Deep Q-Network (DQN) made progress on this goal when it learned how to play 49 different Atari games using only raw pixels and the score as inputs.
A neural network with dynamic memory
The differentiable neural computer (DNC) can use its external memory to answer questions about complex structured data, such as stories, family trees, or a map of the London Underground.
AlphaGo defeats Lee Sedol in the game of Go
While becoming the first computer program to defeat a professional human Go player, AlphaGo taught the world new knowledge about perhaps the most studied and contemplated game in history.
GQN: Neural scene representation and rendering
The Generative Query Network (GQN) allows computers to learn about a scene purely from observation, much like how infants learn to understand the world.
Team profile
Raia Hadsell
Director of Robotics

Raia worked in philosophy and religion before switching to machine learning and robotics. Her PhD from NYU focused on representation learning and robot navigation, using convolutional networks to see the world.

Raia’s team researches embodied and lifelong learning in complex situations, including dexterous manipulation with multi-sensor robot hands, robot locomotion, and city-scale navigation.

"The pace of research progress at DeepMind is so fast it can be almost dizzying."
Team profile
Ali Eslami
Research Scientist

Ali holds a PhD in generative models from the University of Edinburgh, conducted his postdoc at Microsoft Research in Cambridge, and was a visiting researcher at the University of Oxford.

Ali figures out how computers can learn to see with less supervision. His work involves a mix of deep learning, probabilistic inference, and reinforcement learning.

"DeepMind is like a year-round conference, with a bit more focus."
Team profile
Jess Hamrick
Research Scientist

Jess holds a PhD in psychology from the University of California, Berkeley, and earned her BS and MEng in computer science from MIT.

Jess currently applies insights from cognitive science to problems in AI, with an emphasis on structured representations, model-based reasoning, and planning.

"As a cognitive scientist, it’s a unique place to work. DeepMind truly recognises that understanding human intelligence is a key path towards AGI.”
Team profile
Rich Sutton
Distinguished Research Scientist

Rich has researched reinforcement learning since 1978, at universities in Alberta and Massachusetts and at corporate labs within AT&T and GTE.

Rich works between DeepMind Alberta and the University of Alberta. He seeks to identify the parts of the mind that remain unknown and therefore prevent us from recreating its abilities in machines.

“DeepMind is the world’s leading organisation in artificial intelligence research. What an opportunity! What a responsibility!”
Team profile
Remi Munos
Head of Paris and Integration teams

Remi worked at Inria and taught at École Polytechnique. He holds a PhD in reinforcement learning and did his postdoc at Carnegie Mellon University.

Remi focuses on deep reinforcement learning and on combining it with unsupervised learning, imitation learning, and learning from a teacher.

“If we look at the world with a love of life, the world will reveal its beauty to us.”
– Daisaku Ikeda
Team profile
Yazhe Li
Research Engineer

Yazhe studied theoretical and applied mechanics and civil engineering. She started her career as a civil engineer, but soon found her passion in computer science and machine learning.

Yazhe collaborates with research scientists on advancing our understanding of machine learning and developing state-of-the-art deep learning algorithms.

"DeepMind is a fantastic place to work, it fosters personal development and brings out the best in me."
Team profile
Hado van Hasselt
Research Scientist

Hado studied cognitive artificial intelligence and holds a PhD in AI from Utrecht University in the Netherlands. He joined DeepMind after working with Professor Rich Sutton at the University of Alberta.

Hado builds systems and solves challenges with reinforcement learning, deep learning, and optimisation. He also co-leads an effort on core reinforcement learning algorithms.

"Learning algorithms can help solve problems you don’t yet have solutions for.”
Team profile
Jonathan Schwarz
Research Engineer

Jonathan earned a master's in machine learning from the University of Edinburgh, working on modelling sequential data, robot navigation, and climate research.

Jonathan collaborates with his team on cutting-edge machine learning problems, running experiments, discussing new ideas at the whiteboard, and presenting his latest work.

"At DeepMind, I can express my full technical creativity and work with the most inspiring researchers in my field.”
Team profile
Edward Lockhart
Research Engineer

Edward worked in quantitative finance for twenty years, developing skills in mathematics, statistics, and software engineering, which he applies to cutting-edge AI research.

Edward helps organise the research team and contributes to research efforts. He runs the Research Engineering Intern programme and develops agents that learn to collaborate.

"DeepMind is a great place to learn from world-leading experts who are eager to share their knowledge."