
Berkeley Existential Risk Initiative — Core Support and CHAI Collaboration

  • Category: Longtermism
  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Berkeley Existential Risk Initiative
  • Amount: $403,890
  • Award Date: July 2017


    Grant investigator: Daniel Dewey
    This page was reviewed but not written by the grant investigator. BERI staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended a grant of $403,890 to the Berkeley Existential Risk Initiative (BERI) to support BERI’s work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This funding is intended to help BERI hire contractors and part-time employees who will assist CHAI in a variety of ways; for example, BERI has previously provided CHAI with web development and event coordination support, and in the future BERI may hire or contract (e.g.) research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff, who oversee BERI’s efforts at hiring and liaising with CHAI (and possibly with other “clients” in the future).

    Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future.

    This grant falls within our focus area of potential risks from advanced artificial intelligence.

    Document Sources
    • BERI Grant Proposal, 2017
    • BERI Budget for CHAI Collaboration, 2017

    Related Items

    • Longtermism

      Berkeley Existential Risk Initiative — David Krueger Collaboration

      Open Philanthropy recommended a grant of $40,000 to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger. This falls within our focus area of potential...

    • Longtermism

      Berkeley Existential Risk Initiative — MineRL BASALT Competition

      Open Philanthropy recommended a grant of $70,000 to the Berkeley Existential Risk Initiative to support the MineRL BASALT competition. The competition asks participants to build AI systems that...

    • Longtermism

      Berkeley Existential Risk Initiative — AI Standards

      Open Philanthropy recommended a grant of $300,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce...
