Research scientist Kevin McKee tells how his early love of science fiction and social psychology inspired his career, and how he’s helping advance research in ‘queer fairness’, support human-AI collaboration, and study the effects of AI on the LGBTQ+ community.
The signs were clear, right from the start. I’ve always loved science fiction. I couldn’t tell you how many times I read and reread Isaac Asimov’s I, Robot as a kid. These short stories explore the psychology of Asimov’s fictional robots, frequently using them as a mirror to uncover insights about the human mind. I was completely enthralled.
It’s no surprise that I took an early interest in psychological science. In elementary school, I often tried running controlled psychology experiments for my science projects. Looking back, I’m not sure how successful I was with those experiments, but they led me to my studies in psychology and neuroscience – and then eventually to DeepMind.
Everyone at DeepMind gets to work on an absurdly diverse set of projects. Much of our work is driven from the bottom up, so DeepMinders frequently get invited to collaborate on exciting projects from across the organisation.
My current projects span traditional machine learning methods and social science approaches; research on cooperative AI and the social implications of AI development; and collaborations with engineers, mathematicians, and ethicists.
I co-lead QueerMinds, our employee resource group for LGBTQ+ employees and allies. When I joined DeepMind in 2017, we didn't have a formal community or an official space for identities like mine. Over time, I realised that as someone queer myself, I could help create that visibility and foster that community for others at DeepMind.
QueerMinds feels vibrant these days, with regular socials, talks by external researchers and authors, and group field trips, including a recent one to Queer Britain, the new queer museum next to our King’s Cross office. Since stepping into the role, I haven’t regretted it for a moment. It’s been a huge joy – and a continuous learning experience – to create a space for the queer people in DeepMind's community.
I prefer working from the office. It’s really energising to see my teammates and random DeepMinders every day. These casual connections are known as ‘weak ties’ in social psychology and sociology, and they definitely inject my day with a lot of happiness.
In research, I find a lot of breakthroughs come from spontaneous conversations and unplanned moments – you never know where the next idea or collaboration will come from. Just chatting through the current challenge with a teammate over coffee is often enough to catalyse a lightbulb moment.
When we talk about our goals as an organisation, we often frame the conversation around the motivation of ‘advancing science and benefiting humanity’. It’s amazing to be on a team committed to those aims. In working toward them, I think we have a real chance to include groups that historically have been excluded from scientific work. If we bring marginalised communities into the agenda-setting process for our work, what sorts of research questions and priorities will we establish?
AI and machine learning can make a difference, even in small ways. My sister is a speech-language pathologist who works with trans teens to help them develop their voices and communication in a way that affirms their gender identities. Recent advances in AI research show a lot of promise for supporting her and others working with queer communities. For example, generative models could help trans patients form realistic, healthy targets for their voice exercises in therapy sessions.
It’s a tie between two projects. First, a paper I worked on about ‘queer fairness’, where we advocated for more research to understand the effects of AI on LGBTQ+ communities. AI development creates both new opportunities and serious risks for queer people. Yet, most work aimed at measuring and correcting algorithmic bias – what AI scientists call ‘algorithmic fairness’ research – tends to overlook LGBTQ+ communities. My co-authors and I reviewed potential points of promise and concern across areas like privacy, censorship, and mental health.
Second is an ongoing project on cooperative AI, which we talk about in the podcast episode Better together. Humans are actually fairly good at cooperating with each other, even when we have an incentive to act selfishly.
In social psychology, one popular model of human altruism argues that humans pay attention not just to our own goals and outcomes, but also to the goals and outcomes of those around us – especially those with whom we have close relationships, like friends and family. If I’m picking up lunch for a friend and myself, I’ll probably skip the sandwich shop that I like but he hates. Instead, I’ll likely find one that we both like, because I care about his happiness and rewards. That sort of ‘reward sharing’ is key to human altruism, and potentially to our close relationships, too.
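The reward-sharing idea can be sketched in a few lines. This is an illustrative toy, not the paper's actual method: the function name, the blending weight `alpha`, and the equal-weight split are all assumptions chosen for clarity.

```python
def shared_reward(own_reward: float, other_reward: float, alpha: float = 0.5) -> float:
    """Blend an agent's own reward with a partner's reward.

    alpha = 0.0 means the agent is purely selfish; higher values mean
    it weights the partner's outcomes more heavily when learning.
    """
    return (1 - alpha) * own_reward + alpha * other_reward

# A selfish agent ignores the partner's outcome entirely...
selfish = shared_reward(own_reward=1.0, other_reward=0.0, alpha=0.0)    # 1.0
# ...while a prosocial agent values both outcomes equally, so it prefers
# the sandwich shop that makes both friends happy.
prosocial = shared_reward(own_reward=0.8, other_reward=0.8, alpha=0.5)  # 0.8
```

Training an agent on this blended signal, rather than on its own reward alone, is one simple way to encode the ‘caring about a friend’s happiness’ intuition from the lunch example.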
Drawing inspiration from this reward sharing model, my co-authors and I developed cooperative AI agents that humans can interact with. They’re really fun to play with. As a cherry on top, one of the games we used for studying human-AI collaboration is actually my friends’ and my favourite to play outside work: Overcooked!
I’m an avid surfer. I grew up in California, so I was a bit worried about the surfing prospects when moving to London. It turns out it’s a quick jump to Portugal and Spain, where there are awesome waves. Some of my friends even swear that surfing in Cornwall is first-class! We try to make a trip every few months, for a long weekend or a full week on the beach.
Don’t be afraid to take big jumps! Before joining DeepMind, my entire life – my career, family, and friends – was based in the US. Moving to the UK felt a bit daunting. Five years in, I can confidently say that making the jump to London was one of the best decisions I’ve ever made.
Learn more about research at DeepMind and search for open roles today