The Open Philanthropy Project recommended a total of approximately $2,000,000 over five years in PhD fellowship support to eight promising machine learning researchers who together represent the 2019 class of the Open Phil AI Fellowship.1 These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.
We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.
The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.
The 2019 Class of Open Phil AI Fellows
Aidan is a doctoral student of Yarin Gal and Yee Whye Teh at the University of Oxford. He leads the research group FOR.ai, which focuses on providing resources and mentorship, and on facilitating collaboration between academia and industry. On a technical front, Aidan’s research pursues new methods of scaling individual neural networks toward trillions of parameters and hundreds of tasks. On an ethical front, his work takes a humanist stance on machine learning applications and their risks. Aidan is a Student Researcher at Google Brain, working with Jakob Uszkoreit; previously at Brain, he worked with Geoffrey Hinton and Łukasz Kaiser. He obtained his B.Sc. from the University of Toronto under the supervision of Roger Grosse.
Andrew Ilyas is a first-year PhD student at MIT working on machine learning. His interests are in building robust and reliable learning systems, and in understanding the underlying principles of modern ML methods. Andrew completed his B.Sc. and M.Eng. in Computer Science, as well as a B.Sc. in Mathematics, at MIT in 2018. For more information, see his website.
Julius is a PhD student in Computer Science at MIT. He is interested in provable methods that enable algorithms and machine learning systems to exhibit robust and reliable behavior. Specifically, he is interested in constraints relating to privacy/security, bias/fairness, and robustness to distribution shift for agents and systems deployed in the real world. Julius received master’s degrees in computer science and technology policy from MIT, where he studied bias and interpretability in machine learning models. For more information, visit his website.
Lydia T. Liu is a PhD student in Computer Science at the University of California, Berkeley, advised by Moritz Hardt and Michael I. Jordan. Her research aims to establish the theoretical foundations for machine learning algorithms to have reliable and robust performance, as well as positive long-term societal impact. She is interested in developing learning algorithms with multifaceted guarantees and understanding their distributional effects in dynamic or interactive settings. Lydia graduated with a Bachelor of Science in Engineering degree from Princeton University. She is the recipient of an ICML Best Paper Award (2018) and a Microsoft Ada Lovelace Fellowship. For more information, visit her website.
Max Simchowitz is a PhD student in Electrical Engineering and Computer Science at UC Berkeley, co-advised by Benjamin Recht and Michael Jordan. He works on machine learning problems with temporal structure: either because the learning agent is allowed to make adaptive decisions about how to collect data, or because the environment dynamically reacts to the measurements the agent takes. He received his A.B. in mathematics from Princeton University in 2015, and is a co-recipient of an ICML Best Paper Award (2018). You can find out more about his research on his website.
Pratyusha “Ria” Kalluri is a second-year PhD student in Computer Science at Stanford, advised by Stefano Ermon and Dan Jurafsky. She is working toward discovering and inducing conceptual reasoning inside machine learning models. This leads her to work on interpretability, novel learning objectives, and learning disentangled representations. She believes this work can help shape a more radical and equitable AI future. Ria received her Bachelor’s degree in Computer Science from MIT in 2016 and was a Visiting Researcher at Complutense University of Madrid before beginning her PhD. For more information, visit her website.
Sidd is an incoming PhD student in Computer Science at Stanford University. He is interested in grounded language understanding, with a goal of building agents that can collaborate with humans and act safely in different environments. He is finishing up a one-year residency at Facebook AI Research in New York. He received his Sc.B. from Brown University, where he did research in human-robot interaction and natural language processing advised by Professors Stefanie Tellex and Eugene Charniak. You can find more information on his website.
Smitha is a second-year PhD student in computer science at UC Berkeley, where she is advised by Moritz Hardt and Anca Dragan. Her research aims to create machine learning systems that are more value-aligned. She focuses, in particular, on difficulties that arise from the complexities of human behavior: for example, learning what a user prefers the system to do despite “irrationalities” in the user’s behavior, or learning the right decisions to make despite strategic adaptation by humans. For links to publications and other information, you can visit her website.
1. This is an estimate because of uncertainty around future-year tuition costs and currency exchange rates. This number may be updated as costs are finalized.