Meet Edgar Duéñez-Guzmán, a research engineer on our Multi-Agent Research team who’s drawing on knowledge of game theory, computer science, and social evolution to get AI agents working better together.
I've wanted to save the world ever since I can remember. That's why I wanted to be a scientist. While I loved superhero stories, I realised scientists are the real superheroes. They are the ones who give us clean water, medicine, and an understanding of our place in the universe. As a child, I loved computers and I loved science. Growing up in Mexico, though, I didn't feel like studying computer science was feasible. So I decided to study maths, treating it as a solid foundation for computing, and I ended up writing my university thesis on game theory.
As part of my PhD in computer science, I created biological simulations and ended up falling in love with biology. Understanding evolution and how it shaped the Earth was exhilarating. Half of my dissertation was based on these biological simulations, and I went on to work in academia studying the evolution of social phenomena, like cooperation and altruism.
From there I started working in Search at Google, where I learned to deal with massive scales of computation. Years later, I put all three pieces together: game theory, evolution of social behaviours, and large-scale computation. Now I use those pieces to create artificially intelligent agents that can learn to cooperate amongst themselves, and with us.
It was the mid-2010s. I’d been keeping an eye on AI for over a decade and I knew of DeepMind and some of their successes. Then Google acquired it and I was very excited. I wanted in, but I was living in California and DeepMind was only hiring in London. So, I kept tracking the progress. As soon as an office opened in California, I was first in line. I was fortunate to be hired in the first cohort. Eventually, I moved to London to pursue research full time.
How ridiculously talented and friendly people are. Every single person I’ve talked to also has an exciting side outside of work. Professional musicians, artists, super-fit bikers, people who appeared in Hollywood movies, maths olympiad winners – you name it, we have it! And we’re all open and committed to making the world a better place.
At the core of my research is making intelligent agents that understand cooperation. Cooperation is the key to our success as a species. We can access the world's information and connect with friends and family on the other side of the world because of cooperation. Our failure to address the catastrophic effects of climate change is a failure of cooperation, as we saw during COP26.
The flexibility to pursue the ideas that I think are most important. For example, I’d love to help use our technology for better understanding social problems, like discrimination. I pitched this idea to a group of researchers with expertise in psychology, ethics, fairness, neuroscience, and machine learning, and then created a research programme to study how discrimination might originate in stereotyping.
DeepMind is one of those places where freedom and potential go hand-in-hand. We have the opportunity to pursue ideas that we feel are important, and there’s a culture of open discourse. It’s not uncommon to infect others with your ideas and form a team around making them a reality.
I love getting involved in extracurriculars. I’m a facilitator of Allyship workshops at DeepMind, where we aim to empower participants to take action for positive change and encourage allyship in others, contributing to an inclusive and equitable workplace. I also love making research more accessible and talking with visiting students. I’ve created publicly available educational tutorials for explaining AI concepts to teenagers, which have been used in summer schools across the world.
For AI to have the most positive impact, its benefits simply need to be shared broadly, rather than captured by a tiny number of people. We should be designing systems that empower people and that democratise access to technology.
For example, when I worked on WaveNet, the new voice of the Google Assistant, I felt it was cool to be working on a technology that is now used by billions of people in Google Search and Maps. That's nice, but then we did something better: we started using this technology to give their voices back to people with degenerative disorders, like ALS. There are always opportunities to do good; we just have to take them.
There are both practical and societal challenges. On the practical side, we’re hard at work trying to make our algorithms more robust and adaptable. As living creatures, we take robustness and adaptability for granted. Slightly changing the furniture arrangement doesn't cause us to forget what a fridge is for. Artificial systems really struggle with this. There are some promising leads, but we still have a way to go.
On the societal side, we need to collectively decide what kind of AI we want to create. We need to make sure that whatever is made, is safe and beneficial. But this is particularly hard to achieve when we don't have a perfect definition of what this means.
Right now I'm still riding the high of AlphaFold, our protein structure prediction system. I have a background in biology and understand how promising protein structure prediction can be for biomedical applications. And I am particularly proud of how DeepMind openly released the predicted structures of all known proteins in the human body, and has now released structures for nearly all catalogued proteins known to science.
Be playful, be flexible. I couldn’t have optimised for a career leading to DeepMind (there wasn't even a DeepMind to optimise for!). But what I could do was always allow myself to dream of the potential of technology, of creating intelligent machines, and of improving the world with them.
Programming is exhilarating in its own right, but for me it was always more of a means to an end. This is what enabled me to stay current as technologies came and went. I wasn't tied to the tools, I was focused on the mission. Don't focus on the "what", but on the "why", and the "how" will manifest itself.