
Center for International Security and Cooperation — AI Accident Risk and Technology Competition

  • Category: Longtermism
  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Center for International Security and Cooperation
  • Amount: $67,000
  • Award Date: September 2020

    Grant investigator: Luke Muehlhauser

    This page was reviewed but not written by the grant investigator. Center for International Security and Cooperation staff also reviewed this page prior to publication.


    Open Philanthropy recommended a planning grant of $67,000 to Stanford University’s Center for International Security and Cooperation (CISAC) to explore possible projects related to AI accident risk in the context of technology competition.

    This falls within our focus area of potential risks from advanced artificial intelligence.

    Related Items

    • Longtermism

      Center for International Security and Cooperation — AI and Strategic Stability

      Open Philanthropy recommended a grant of $365,361 to the Center for International Security and Cooperation to support work studying hypothetical scenarios related to AI and strategic stability. This...

    • Longtermism

      Center for International Security and Cooperation — Megan Palmer’s Biosecurity Research (2019)

      The Open Philanthropy Project recommended a gift of $1,625,000 over three years to Stanford University’s Center for International Security and Cooperation (CISAC) to support Megan Palmer’s work on...

    • Longtermism

      Center for International Security and Cooperation — Megan Palmer’s Biosecurity Research (2016)

      The Open Philanthropy Project recommended a grant of $643,415 to Stanford University’s Center for International Security and Cooperation (CISAC) to support Megan Palmer’s work on biosecurity. This grant...

    Open Philanthropy
    Mailing Address
    182 Howard Street #225
    San Francisco, CA 94105
    Email
    [email protected]
    Media Inquiries
    [email protected]


    © Open Philanthropy 2022