• Focus Areas
    • Cause Selection
    • Global Health & Wellbeing
      • Effective Altruism Community Growth (Global Health and Wellbeing)
      • Farm Animal Welfare
      • Global Aid Policy
      • Global Health & Development
      • Scientific Research
      • South Asian Air Quality
    • Longtermism
      • Biosecurity & Pandemic Preparedness
      • Effective Altruism Community Growth (Longtermism)
      • Potential Risks from Advanced AI
    • Other Areas
      • Criminal Justice Reform
      • History of Philanthropy
      • Immigration Policy
      • Land Use Reform
      • Macroeconomic Stabilization Policy
  • Grants
  • Research & Updates
    • Research Reports
    • Blog Posts
    • Notable Lessons
    • In the News
  • About Us
    • Grantmaking Process
    • How to Apply for Funding
    • Team
    • Contact Us
    • Stay Updated
  • We’re hiring!

AI Funding for Individuals — Work and Study Support

  • Focus Area: Potential Risks from Advanced AI
  • Amount: $113,372
  • Award Date: February 2022
    Open Philanthropy recommended a total of $113,372 to support individuals pursuing work and study related to potential risks from advanced artificial intelligence.

    Recipients include:

    • Kaivu Hariharan, for self-study on machine learning and AI safety, and for running an AI alignment club at MIT
    • Pablo Villalobos Sánchez, for research assistance to Jaime Sevilla

    The grant amount was updated in March 2023.

    Related Items

    • Potential Risks from Advanced AI

      AI Safety Support — SERI MATS Program

      Open Philanthropy recommended a grant of $513,576 to AI Safety Support to support their collaboration with Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an...

    • Potential Risks from Advanced AI

      FAR AI — Interpretability Research

      Open Philanthropy recommended a grant of $50,000 to FAR AI to support their research on machine learning interpretability, in collaboration with Open Philanthropy AI Fellow Alex Tamkin. This falls within our...

    • Potential Risks from Advanced AI

      Usman Anwar — Research Collaboration with David Krueger

      Open Philanthropy recommended a grant of £5,000 (approximately $6,526 at the time of conversion) to Usman Anwar to support his research on machine learning in collaboration with Professor David...

    Open Philanthropy
    • Careers
    • Press Kit
    • Governance
    • Privacy Policy
    • Stay Updated
    Mailing Address
    Open Philanthropy
    182 Howard Street #225
    San Francisco, CA 94105
    Email
    [email protected]
    Media Inquiries
    [email protected]
    Anonymous Feedback
    Feedback Form

    © Open Philanthropy 2022 Except where otherwise noted, this work is licensed under a Creative Commons Attribution-Noncommercial 4.0 International License. If you'd like to translate this content into another language, please get in touch!