Open Phil AI Fellowship — 2022 Class

Open Philanthropy recommended a total of approximately $1,840,000 over five years in PhD fellowship support to eleven promising machine learning researchers who together represent the 2022 class of the Open Phil AI Fellowship. This is an estimate because of uncertainty around future-year tuition costs and currency exchange rates. This number may be updated as costs are finalized.

These fellows were selected for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.

We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.

The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence.

Fellows with an asterisk (*) next to their names are also Vitalik Buterin Postdoctoral Fellows — winners of grants from the Future of Life Institute (FLI). Open Philanthropy and FLI split funding equally for those fellows.

The 2022 Class of Open Phil AI Fellows

Adam Gleave

Adam is a fifth-year PhD candidate in Computer Science at UC Berkeley. His research focuses on out-of-distribution robustness for deep reinforcement learning, with a particular emphasis on value learning and multi-agent adversarial robustness. To validate learned reward functions, Adam has developed the EPIC distance and interpretability methods. Adam has demonstrated the existence of adversarial policies in multi-agent environments — policies that appear to behave randomly yet cause an opponent to fail — and is currently investigating whether narrowly superhuman policies have similar failure modes. For more information, see his website.

Cassidy Laidlaw

Cassidy is a PhD student in computer science at UC Berkeley, advised by Stuart Russell and Anca Dragan. He is interested in developing scalable and robust methods for aligning AI systems with human values. His current work focuses on modeling uncertainty in reward learning, scaling methods for human-AI interaction, and adversarial robustness to unforeseen attacks. Prior to his PhD, Cassidy received bachelor’s degrees in computer science and mathematics from the University of Maryland, College Park. For more information, see his website.

Cynthia Chen*

Cynthia is an incoming PhD student at ETH Zurich, supervised by Prof. Andreas Krause. She is broadly interested in building AI systems that are aligned with human preferences, especially in situations where mistakes are costly and human signals are sparse. She aspires to develop AI solutions that can improve the world in the long run. Prior to ETH, Cynthia interned at the Center for Human-Compatible AI at UC Berkeley and graduated with honours from the University of Hong Kong. You can find out more about Cynthia’s research at her website.

Daniel Kunin

Daniel is a PhD student in the Institute for Computational and Mathematical Engineering at Stanford, co-advised by Surya Ganguli and Daniel Yamins. His research uses tools from physics to build theoretical models for the learning dynamics of neural networks trained with stochastic gradient descent. Thus far, his research has focused on the effect of regularization on the geometry of the loss landscape, identification of symmetry and conservation laws in the learning dynamics, and how ideas from thermodynamics can be used to understand the impact of stochasticity in SGD. His current research focuses on linking the dynamics of SGD to implicit biases impacting the generalization and robustness properties of the trained network. To learn more about his research, visit his Google Scholar page.

Erik Jenner*

Erik is an incoming CS PhD student at UC Berkeley. He is interested in developing techniques for aligning AI with human values that could scale to arbitrarily powerful future AI systems. Erik has previously worked on reward learning with the Center for Human-Compatible AI, focusing on interpretability of reward models and on a better theoretical understanding of the structure of reward functions. He received a BSc in physics from the University of Heidelberg in 2020 and is currently finishing a Master’s in artificial intelligence at the University of Amsterdam. For more information about his research, see his website.

Johannes Treutlein*

Johannes is an incoming PhD student in Computer Science at UC Berkeley. He is broadly interested in empirical and theoretical research to ensure that AI systems remain safe and reliable as their capabilities increase. He is currently investigating learned optimization in machine learning models and developing models whose objectives generalize robustly out of distribution. Previously, Johannes studied computer science and mathematics at the University of Toronto, the University of Oxford, and the Technical University of Berlin. For more information, visit his website.

Lauro Langosco

Lauro is a PhD student with David Krueger at the University of Cambridge. His primary research interest is AI alignment: the problem of building generally intelligent systems that do what their operator wants them to do. He also investigates the science and theory of deep learning, that is, the study of how deep learning systems generalize and scale. Previously, Lauro interned at the Center for Human-Compatible AI in Berkeley and studied mathematics at ETH Zurich.

Maksym Andriushchenko

Maksym is a PhD student in Computer Science at École Polytechnique Fédérale de Lausanne (EPFL), advised by Nicolas Flammarion. His research focuses on making machine learning algorithms adversarially robust and improving their reliability. He is also interested in developing a better understanding of generalization in deep learning, including generalization under distribution shifts. He has co-developed RobustBench, an ongoing standardized robustness benchmark, and his research on robust image fingerprinting models has been applied to content authenticity. Prior to EPFL, he worked with Matthias Hein at the University of Tübingen on adversarial robustness. To learn more about his research, visit his Google Scholar page.

Qian Huang

Qian is a first-year PhD student in Computer Science at Stanford University, advised by Jure Leskovec and Percy Liang. She is broadly interested in aligning machine reasoning with human reasoning, especially for rationality, interpretability, and extensibility with new knowledge. Currently, she is excited about disentangling general reasoning ability from domain knowledge, particularly through the use of graph neural networks and foundation models. Qian received her B.A. in Computer Science and Mathematics from Cornell University. For more information, see her website.

Usman Anwar*

Usman is a PhD student at the University of Cambridge, where he is advised by David Krueger. His research interests span reinforcement learning, deep learning, and cooperative AI. Usman’s goal in AI research is to develop useful, versatile, and human-aligned AI systems that can learn from humans and from each other. His research focuses on identifying the factors that make it difficult to develop human-aligned AI systems, and on developing techniques to work around them. In particular, he is interested in ways that rich human preferences and desires can be adaptively communicated to AI agents, especially in complex scenarios such as multi-agent planning and time-varying preferences, with the ultimate goal of both broadening the scope of tasks that AI agents can undertake and making those agents more aligned and trustworthy. For publications and other details, please visit https://uzman-anwar.github.io

Zhijing Jin*

Zhijing is a PhD student in Computer Science at the Max Planck Institute in Germany and ETH Zürich in Switzerland, co-supervised by Bernhard Schoelkopf, Rada Mihalcea, Mrinmaya Sachan, and Ryan Cotterell. She is broadly interested in making natural language processing (NLP) systems better serve humanity. Specifically, she uses causal inference to improve the robustness and explainability of language models (as part of the “inner alignment” goal) and to align language models with human values (as part of the “outer alignment” goal). Previously, Zhijing received her bachelor’s degree from the University of Hong Kong, with visiting semesters at MIT and National Taiwan University. She was also a research intern at Amazon AI with Prof. Zheng Zhang. For more information, see her website.

Open Phil AI Fellowship — 2021 Class

Grant investigator: Daniel Dewey

This page was reviewed but not written by the grant investigator.

Open Philanthropy recommended a total of approximately $1,000,000 over five years in PhD fellowship support to four promising machine learning researchers who together represent the 2021 class of the Open Phil AI Fellowship. This is an estimate because of uncertainty around future-year tuition costs and currency exchange rates. This number may be updated as costs are finalized. These fellows were selected from 397 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.

We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.

The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.

The 2021 Class of Open Phil AI Fellows

Collin Burns

Collin is an incoming PhD student in Computer Science at UC Berkeley. He is broadly interested in doing foundational work to make AI systems more trustworthy, aligned with human values, and helpful for human decision making. He is especially excited about using language to control and interpret machine learning models more effectively. Collin received his B.A. in Computer Science from Columbia University. For more information, see his website.

Jared Quincy Davis

Jared Quincy Davis is a PhD student in Computer Science at Stanford University. His research asks what progress is necessary for the most compelling advances in machine learning (e.g., those that powered AlphaGo) to be applied more broadly and extensively in real-world, non-stationary, multi-particle domains with nth-order dynamics. Jared is motivated by the potential of such AI advances to accelerate the rate of progress in, and adoption of, technology. Thus far, his work has focused on addressing the specific computational complexity, memory, and optimization challenges that arise when learning high-resolution representations of the dynamics within complex systems. Jared’s fundamental research has been applied to great effect in problems spanning structural biology, industrial systems control, robotic planning and navigation, natural language processing, and beyond. To learn more about his research, visit his Google Scholar page.

Jesse Mu

Jesse is a PhD student in Computer Science at Stanford University, advised by Noah Goodman and affiliated with the Stanford NLP Group and Stanford AI Lab. He is interested in using language and communication to improve the interpretability and generalization of machine learning models, especially in multimodal or embodied settings. Previously, Jesse received an MPhil in Advanced Computer Science from the University of Cambridge as a Churchill scholar, and a BS in Computer Science from Boston College. For more information, see his website.

Meena Jagadeesan

Meena Jagadeesan is a first-year PhD student at UC Berkeley, advised by Moritz Hardt, Michael I. Jordan, and Jacob Steinhardt. She aims to develop theoretical foundations for machine learning systems that account for economic and societal effects, especially in strategic or dynamic environments. Her work currently focuses on reasoning about the incentives created by decision-making systems and on ensuring fairness in multi-stage systems. Meena completed her Bachelor’s and Master’s degrees at Harvard University in 2020, where she studied computer science, mathematics, and statistics. For more information, visit her website.

Tan Zhi-Xuan

Xuan (Sh-YEN) is a PhD student at MIT co-advised by Vikash Mansinghka and Joshua Tenenbaum. Their research sits at the intersection of AI, philosophy, and cognitive science, asking questions like: How can we specify and perform inference over rich yet structured generative models of human motivation and bounded reasoning, in order to accurately infer human goals and values? To answer these questions, Xuan’s work includes the development of probabilistic programming infrastructure, so as to enable fast and flexible Bayesian inference over complex models of agents and their environments. Prior to MIT, Xuan worked with Desmond Ong at the National University of Singapore on deep generative models, and Brian Scassellati at Yale on human-robot interaction. They graduated from Yale with a B.S. in Electrical Engineering & Computer Science.

Open Phil AI Fellowship — 2020 Class

Grant investigators: Catherine Olsson and Daniel Dewey

This page was reviewed but not written by the grant investigators.

Open Philanthropy recommended a total of approximately $2,300,000 over five years in PhD fellowship support to ten promising machine learning researchers who together represent the 2020 class of the Open Phil AI Fellowship. This is an estimate because of uncertainty around future-year tuition costs and currency exchange rates. This number may be updated as costs are finalized. These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.

We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.

The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.

The 2020 Class of Open Phil AI Fellows

Alex Tamkin

Alex is a PhD student in Computer Science at Stanford University, where he is advised by Noah Goodman and a member of the Stanford NLP Group. Alex’s research focuses on unsupervised learning: how can we understand and guide systems that learn general-purpose representations of the world? Alex received his B.S. in Computer Science from Stanford, and has spent time at Google Research on the Brain and Language teams. For more information, visit his website.

Clare Lyle

Clare is pursuing a PhD in Computer Science at the University of Oxford as a Rhodes Scholar, advised by Yarin Gal and Marta Kwiatkowska. She is interested in developing theoretical tools to better understand the generalization properties of modern ML methods, and in creating principled algorithms based on these insights. She received a BSc in mathematics and computer science from McGill University in 2018. For more information, see her website.

Cody Coleman

Cody is a computer science Ph.D. student at Stanford University, advised by Professors Matei Zaharia and Peter Bailis. His research focuses on democratizing machine learning by reducing the cost of producing state-of-the-art models and creating novel abstractions that simplify machine learning development and deployment. His work spans from performance benchmarking of hardware and software systems (i.e., DAWNBench and MLPerf) to computationally efficient methods for active learning and core-set selection. He completed his B.S. and M.Eng. in electrical engineering and computer science at MIT. For more information, visit his website.

Dami Choi

Dami is a PhD student in computer science at the University of Toronto, supervised by David Duvenaud and Chris Maddison. Dami is interested in ways to make neural network training faster, more reliable, and more interpretable via inductive bias. Previously, she spent a year at Google as an AI Resident, working with George Dahl on studying optimizers and speeding up neural network training. She obtained her Bachelor’s degree in Engineering Science from the University of Toronto. You can find out more about Dami’s research on her Google Scholar page.

Dan Hendrycks

Dan Hendrycks is a second-year PhD student at UC Berkeley, advised by Jacob Steinhardt and Dawn Song. His research aims to disentangle and concretize the components necessary for safe AI. This leads him to work on quantifying and improving the performance of models in unforeseen out-of-distribution scenarios, and on constructing tasks to measure a model’s alignment with human values. Dan received his BS from the University of Chicago. You can find out more about his research at his website.

Ethan Perez

Ethan is a PhD student in Computer Science at New York University working with Kyunghyun Cho and Douwe Kiela of Facebook AI Research. His research focuses on developing learning algorithms that have the long-term potential to answer questions that people cannot. Supervised learning cannot answer such questions, even in principle, so he is investigating other learning paradigms for generalizing beyond the available supervision. Previously, Ethan worked with Aaron Courville and Hugo Larochelle at the Montreal Institute for Learning Algorithms, and he has also spent time at Facebook AI Research and Google. Ethan earned a Bachelor’s from Rice University as the Engineering department’s Outstanding Senior. For more information, visit his website.

Frances Ding

Frances is a PhD student in Computer Science at UC Berkeley advised by Jacob Steinhardt and Moritz Hardt. Her research aims to improve the reliability of machine learning systems and ensure that they have positive, equitable social impacts. She is interested in developing algorithms that can handle dynamic environments and adaptive behavior in the real world, and in building empirical and theoretical understanding of modern ML methods. Frances received her B.A. in Biology from Harvard University and her M.Phil. in Machine Learning from the University of Cambridge. For more information, visit her website.

Leqi Liu

Leqi is a PhD student in machine learning at Carnegie Mellon University, where she is advised by Zachary Lipton. Her research aims to develop learning systems that can infer human preferences from behavior and help humans achieve their goals and well-being. In particular, she is interested in bringing theory from the social sciences into algorithmic design. You can learn more about her research on her website.

Peter Henderson

Peter is a PhD student at Stanford University advised by Dan Jurafsky. His research focuses on creating robust decision-making systems grounded in causal inference mechanisms — particularly in natural language domains. He also spends time investigating reproducible and thorough evaluation methodologies to ensure that such systems perform as expected when deployed. Peter’s other work spans policy and technical issues related to the use of machine learning in governance and law, as well as applications of machine learning for positive social impact. Previously, he earned his B.Eng. and M.Sc. from McGill University with a thesis on reproducibility and reusability in deep reinforcement learning, advised by Joelle Pineau and David Meger. For more information, see his website.

Stanislav Fort

Stanislav is a PhD student at Stanford University, advised by Surya Ganguli. His research focuses on developing a scientific understanding of deep learning and on applications of machine learning and artificial intelligence in the physical sciences, in domains spanning from X-ray astrophysics to quantum computing. Stanislav spent a year as a Google AI Resident, where he worked on deep learning theories and their applications in collaboration with colleagues from Google Brain and DeepMind. He received his Bachelor’s and Master’s degrees in Physics at Trinity College, University of Cambridge, and a Master’s degree at Stanford University. For more information, visit his website.

Open Phil AI Fellowship — 2019 Class

Grant investigator: Daniel Dewey

This page was reviewed but not written by the grant investigator.

The Open Philanthropy Project recommended a total of approximately $2,325,000 over five years in PhD fellowship support to eight promising machine learning researchers who together represent the 2019 class of the Open Phil AI Fellowship. This is an estimate because of uncertainty around future-year tuition costs and currency exchange rates. This number may be updated as costs are finalized. These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.

We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.

The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.

The 2019 Class of Open Phil AI Fellows

Aidan Gomez

Aidan is a doctoral student of Yarin Gal and Yee Whye Teh at the University of Oxford. He leads the research group FOR.ai, which provides resources and mentorship and facilitates collaboration between academia and industry. On a technical front, Aidan’s research pursues new methods of scaling individual neural networks towards trillions of parameters and hundreds of tasks. On an ethical front, his work takes a humanist stance on machine learning applications and their risks. Aidan is a Student Researcher at Google Brain, working with Jakob Uszkoreit; previously at Brain, he worked with Geoffrey Hinton and Łukasz Kaiser. He obtained his B.Sc. from the University of Toronto under the supervision of Roger Grosse.

Andrew Ilyas

Andrew Ilyas is a first-year PhD student at MIT working on machine learning. His interests are in building robust and reliable learning systems, and in understanding the underlying principles of modern ML methods. Andrew completed his B.Sc. and M.Eng. in Computer Science, as well as a B.Sc. in Mathematics, at MIT in 2018. For more information, see his website.

Julius Adebayo

Julius is a PhD student in Computer Science at MIT. He is interested in provable methods for enabling algorithms and machine learning systems to exhibit robust and reliable behavior. Specifically, he is interested in constraints relating to privacy/security, bias/fairness, and robustness to distribution shift for agents and systems deployed in the real world. Julius received master’s degrees in computer science and technology policy from MIT, where he studied bias and interpretability of machine learning models. For more information, visit his website.

Lydia T. Liu

Lydia T. Liu is a PhD student in Computer Science at the University of California, Berkeley, advised by Moritz Hardt and Michael I. Jordan. Her research aims to establish the theoretical foundations for machine learning algorithms to have reliable and robust performance, as well as positive long-term societal impact. She is interested in developing learning algorithms with multifaceted guarantees and understanding their distributional effects in dynamic or interactive settings. Lydia graduated with a Bachelor of Science in Engineering degree from Princeton University. She is the recipient of an ICML Best Paper Award (2018) and a Microsoft Ada Lovelace Fellowship. For more information, visit her website.

Max Simchowitz

Max Simchowitz is a PhD student in Electrical Engineering and Computer Science at UC Berkeley, co-advised by Benjamin Recht and Michael Jordan. He works on machine learning problems with temporal structure: either because the learning agent is allowed to make adaptive decisions about how to collect data, or because the environment dynamically reacts to the measurements the agent takes. He received his A.B. in mathematics from Princeton University in 2015, and is a co-recipient of the ICML 2018 Best Paper Award. You can find out more about his research on his website.

Pratyusha Kalluri

Pratyusha “Ria” Kalluri is a second-year PhD student in Computer Science at Stanford, advised by Stefano Ermon and Dan Jurafsky. She is working towards discovering and inducing conceptual reasoning inside machine learning models. This leads her to work on interpretability, novel learning objectives, and learning disentangled representations. She believes this work can help shape a more radical and equitable AI future. Ria received her Bachelor’s degree in Computer Science from MIT in 2016 and was a Visiting Researcher at Complutense University of Madrid before beginning her PhD. For more information, visit her website.

Siddharth Karamcheti

Sidd is an incoming PhD student in Computer Science at Stanford University. He is interested in grounded language understanding, with a goal of building agents that can collaborate with humans and act safely in different environments. He is finishing up a one-year residency at Facebook AI Research in New York. He received his Sc.B. from Brown University, where he did research in human-robot interaction and natural language processing advised by Professors Stefanie Tellex and Eugene Charniak. You can find more information on his website.

Smitha Milli

Smitha is a second-year PhD student in computer science at UC Berkeley, where she is advised by Moritz Hardt and Anca Dragan. Her research aims to create machine learning systems that are more value-aligned. She focuses, in particular, on difficulties that arise from the complexities of human behavior: for example, learning what a user prefers the system to do despite “irrationalities” in the user’s behavior, or learning the right decisions to make despite strategic adaptation from humans. For links to publications and other information, you can visit her website.

Open Phil AI Fellowship — 2018 Class

Grant investigator: Daniel Dewey

This page was reviewed but not written by the grant investigator.

The Open Philanthropy Project recommended a total of approximately $1,245,000 over five years in PhD fellowship support to seven machine learning researchers who together represent the 2018 class of the Open Phil AI Fellowship. This is an estimate because of uncertainty around future-year tuition costs and currency exchange rates. These fellows were selected from more than 180 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.

We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.

The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.

The 2018 Class of Open Phil AI Fellows

Aditi Raghunathan

Aditi is a second-year PhD student in Computer Science at Stanford University, advised by Percy Liang. She is interested in making machine learning systems provably reliable and fair, especially in the presence of adversaries. Aditi received her Bachelor’s degree in Computer Science and Engineering from IIT Madras in 2016. For links to publications and more information, please visit her website.

Chris Maddison

Chris is a DPhil student at the University of Oxford, supervised by Yee Whye Teh and Arnaud Doucet, and a Research Scientist at DeepMind. Chris is interested in the tools used for inference and optimization in scalable and expressive models. He aims to expand the range of such models by expanding the toolbox needed to work with them. Chris received his M.Sc. from the University of Toronto, working with Geoffrey Hinton. He received a NIPS Best Paper Award in 2014 and was one of the founding members of the AlphaGo project. For more information, visit his website.

Felix Berkenkamp

Felix is a PhD student in Computer Science at ETH Zurich, working with Andreas Krause and Angela Schoellig (University of Toronto). He is interested in enabling robots to learn safely and autonomously in uncertain real-world environments, which requires new reinforcement learning algorithms that respect the physical limitations and constraints of dynamic systems and provide theoretical safety guarantees. He received his Master’s degree in Mechanical Engineering from ETH Zurich in 2015. You can find out more about his research on his website.

Jon Gauthier

Jon is a PhD student at the Massachusetts Institute of Technology in the Department of Brain and Cognitive Sciences, where he works with Roger Levy and Joshua Tenenbaum to build computational models of how people acquire and understand language. His research bridges artificial intelligence, cognitive science, and linguistics to specify better concrete objectives for building language understanding systems. Before joining MIT, Jon studied at Stanford University and worked with Christopher Manning in the Stanford Natural Language Processing Group. He also spent time at Google Brain and OpenAI, where his advisors included Ilya Sutskever and Oriol Vinyals. You can find out more about Jon, including his blog and research articles, at his website.

Michael Janner

Michael is an incoming PhD student at UC Berkeley. He is interested in reproducing humans’ flexible problem-solving abilities in machines, in particular through compositional representations. In June 2018, he will receive his Bachelor’s degree in computer science from MIT, where he worked with Professors Joshua Tenenbaum and Regina Barzilay. More information can be found on his website.

Noam Brown

Noam is a PhD student in computer science at Carnegie Mellon University, advised by Tuomas Sandholm. His research applies computational game theory to produce AI systems capable of strategic reasoning in imperfect-information multi-agent interactions. He applied this research to create Libratus, the first AI to defeat top humans in no-limit poker. Noam received a NIPS Best Paper Award in 2017 and an Allen Newell Award for Research Excellence. Prior to starting his PhD, Noam worked at the Federal Reserve, researching the effects of algorithmic trading on financial markets. Before that, he developed algorithmic trading strategies. His papers and videos are available on his website.

Ruth Fong

Ruth is a PhD student in Engineering Science at the University of Oxford, where she is advised by Andrea Vedaldi. She is interested in understanding, explaining, and improving the internal representations of deep neural networks. Ruth received her Bachelor’s degree in Computer Science from Harvard University in 2015; she also earned a Master’s degree in Neuroscience from Oxford in 2016 as a Rhodes Scholar. For more information about her research, visit her website.