My research focuses on machine learning and statistics for optimal control and decision making, as well as on using these mathematical frameworks to understand how the brain learns. In recent work, I've developed new algorithms for exploiting deep neural networks in reinforcement learning, along with new recurrent memory architectures for one-shot learning. Applications of this work include recognizing images from a single example, visual question answering, deep learning for robotics, and playing games such as Go and StarCraft. I'm also fascinated by the development of deep network models that might shed light on how robust feedback control laws are learned and employed by the central nervous system.