
Center for AI Safety — Philosophy Fellowship and NeurIPS Prizes

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Center for AI Safety
  • Amount: $1,433,000

  • Award Date: February 2023


    Open Philanthropy recommended a grant of $1,433,000 to the Center for AI Safety (CAIS) to support the CAIS Philosophy Fellowship, a research fellowship for philosophers working on topics related to AI safety. The grant also supported a workshop on adversarial robustness and prizes for safety-related competitions at the 2022 NeurIPS conference.

    This falls within our focus area of potential risks from advanced artificial intelligence.

    Related Items

    • Potential Risks from Advanced AI

      Center for AI Safety — General Support

      Open Philanthropy recommended a grant of $5,160,000 to the Center for AI Safety for general support. The Center for AI Safety does technical research and field-building aimed at...

    • Potential Risks from Advanced AI

      Epoch — AI Worldview Investigations

      Open Philanthropy recommended a grant of $188,558 to Epoch to support its “worldview investigations” related to AI. This follows our June 2022 support and falls within our focus...

    • Potential Risks from Advanced AI

      Neel Nanda — Interpretability Research

      Open Philanthropy recommended a grant of $70,000 to Neel Nanda to support his independent research on interpretability. His work is aimed at improving human understanding of neural networks and machine learning...
