
Open Phil AI Fellowship — 2019 Class

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Open Phil AI Fellowship
  • Amount: $2,325,000

  • Award Date: May 2019

    Grant investigator: Daniel Dewey

    This page was reviewed but not written by the grant investigator.

    The Open Philanthropy Project recommended a total of approximately $2,325,000 over five years in PhD fellowship support to eight promising machine learning researchers who together represent the 2019 class of the Open Phil AI Fellowship.1 These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence.

    We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI.

    The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests.

    The 2019 Class of Open Phil AI Fellows

    Aidan Gomez


    Aidan is a doctoral student of Yarin Gal and Yee Whye Teh at the University of Oxford. He leads the research group FOR.ai, which provides resources and mentorship and facilitates collaboration between academia and industry. On a technical front, Aidan’s research pursues new methods of scaling individual neural networks toward trillions of parameters and hundreds of tasks. On an ethical front, his work takes a humanist stance on machine learning applications and their risks. Aidan is a Student Researcher at Google Brain, working with Jakob Uszkoreit; previously at Brain, he worked with Geoffrey Hinton and Łukasz Kaiser. He obtained his B.Sc. from the University of Toronto under the supervision of Roger Grosse.

    Andrew Ilyas


    Andrew Ilyas is a first-year PhD student at MIT working on machine learning. His interests are in building robust and reliable learning systems and in understanding the underlying principles of modern ML methods. Andrew completed his B.Sc. and M.Eng. in Computer Science, as well as a B.Sc. in Mathematics, at MIT in 2018. For more information, see his website.

    Julius Adebayo


    Julius is a PhD student in Computer Science at MIT. He is interested in provable methods to enable algorithms and machine learning systems to exhibit robust and reliable behavior. Specifically, he is interested in constraints relating to privacy/security, bias/fairness, and robustness to distribution shift for agents and systems deployed in the real world. Julius received master’s degrees in computer science and technology policy from MIT, where he studied bias and interpretability in machine learning models. For more information, visit his website.

    Lydia T. Liu


    Lydia T. Liu is a PhD student in Computer Science at the University of California, Berkeley, advised by Moritz Hardt and Michael I. Jordan. Her research aims to establish the theoretical foundations for machine learning algorithms to have reliable and robust performance, as well as positive long-term societal impact. She is interested in developing learning algorithms with multifaceted guarantees and understanding their distributional effects in dynamic or interactive settings. Lydia graduated with a Bachelor of Science in Engineering degree from Princeton University. She is the recipient of an ICML Best Paper Award (2018) and a Microsoft Ada Lovelace Fellowship. For more information, visit her website.

    Max Simchowitz


    Max Simchowitz is a PhD student in Electrical Engineering and Computer Science at UC Berkeley, co-advised by Benjamin Recht and Michael Jordan. He works on machine learning problems with temporal structure: either because the learning agent is allowed to make adaptive decisions about how to collect data, or because the agent’s environment dynamically reacts to the measurements taken. He received his A.B. in mathematics from Princeton University in 2015 and is a co-recipient of the ICML 2018 Best Paper Award. You can find out more about his research on his website.

    Pratyusha Kalluri


    Pratyusha “Ria” Kalluri is a second-year PhD student in Computer Science at Stanford, advised by Stefano Ermon and Dan Jurafsky. She is working towards discovering and inducing conceptual reasoning inside machine learning models. This leads her to work on interpretability, novel learning objectives, and learning disentangled representations. She believes this work can help shape a more radical and equitable AI future. Ria received her bachelor’s degree in Computer Science from MIT in 2016 and was a Visiting Researcher at the Complutense University of Madrid before beginning her PhD. For more information, visit her website.

    Siddharth Karamcheti


    Sidd is an incoming PhD student in Computer Science at Stanford University. He is interested in grounded language understanding, with a goal of building agents that can collaborate with humans and act safely in different environments. He is finishing up a one-year residency at Facebook AI Research in New York. He received his Sc.B. from Brown University, where he did research in human-robot interaction and natural language processing advised by Professors Stefanie Tellex and Eugene Charniak. You can find more information on his website.

    Smitha Milli


    Smitha is a second-year PhD student in computer science at UC Berkeley, where she is advised by Moritz Hardt and Anca Dragan. Her research aims to create machine learning systems that are more value-aligned. She focuses, in particular, on difficulties that arise from the complexities of human behavior: for example, learning what a user prefers the system to do despite “irrationalities” in the user’s behavior, or learning the right decisions to make despite strategic adaptation from humans. For links to publications and other information, you can visit her website.

    Footnotes

    1. This is an estimate because of uncertainty around future year tuition costs and currency exchange rates. This number may be updated as costs are finalized.
