
Concrete Problems in AI Safety

  • Focus Area: Potential Risks from Advanced AI
  • Content Type: Blog Posts


    Published: June 22, 2016 | by Holden Karnofsky

    Earlier this week, Google Research (in collaboration with scientists at OpenAI, Stanford, and Berkeley) released Concrete Problems in AI Safety, which outlines five technical problems related to accident risk in AI systems: avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration, and robustness to distributional shift. Four of the authors are friends and technical advisors of the Open Philanthropy Project.

    We’re very excited about this paper. We highly recommend it to anyone looking to get a sense for the tractability of reducing potential risks from advanced AI (a cause we’ve previously written about) – as well as for what sorts of research we would be most excited to fund in this cause.
