Cognitive Scientist | Neuroscientist | AI Researcher | Author
I am Professor of Cognitive Neuroscience at the University of Oxford, and a Research Director at the UK AI Security Institute. My work focuses on understanding the cognitive and neural mechanisms that underlie human learning and decision-making, and on studying the cognitive and social impacts of AI on people.
My research bridges the fields of cognitive science, neuroscience, and artificial intelligence. I am particularly interested in how insights from human cognition can inform the development of more advanced and safer AI systems.
I also lead the Human Information Processing (HIP) lab in the Department of Experimental Psychology at the University of Oxford. For more information about my academic work and lab, please visit humaninformationprocessing.com or view my Google Scholar profile.
Explores the algorithms and architectures that are driving progress in AI research. This book discusses intelligence in the language of psychology and biology, using examples and analogies to make it comprehensible to a wide audience. It tackles longstanding theoretical questions about the nature of thought and knowledge.
An insider look at the Large Language Models (LLMs) that are revolutionizing our relationship to technology, exploring their surprising history, what they can and should do for us today, and where they will go in the future. This accessible, up-to-date, and authoritative examination of the world's most radical technology explores what it really takes to build a brain from scratch.
"An engaging, insightful and panoramic survey of where we are, why we got here and what it means. A brilliant guide to the most important technology of our times." — Mustafa Suleyman, CEO of Microsoft AI & Cofounder of DeepMind
"By far the best guide to a newly emerging species with which we will share the planet for the foreseeable future." — Stuart Russell, author of Human Compatible
I am a former Research Scientist at DeepMind UK (2010-2023). My work at DeepMind focused on using AI to help design beneficial social, economic and political mechanisms. Example projects used reinforcement learning (RL) to design fair and sustainable redistribution principles (Koster et al 2022, Koster et al 2024) and LLMs to help people find agreement (Tessler et al 2024).
As a Research Director at the UK AI Security Institute, I lead work on the societal impacts of artificial intelligence. My research studies how AI systems might create harm by being used to manipulate or influence people, or by creating new opportunities for criminal or socially destabilising activities. Much of our work monitors how AI is being deployed in the real world, and explores solutions that may help guard against the risks it poses.