
Paul Christiano, PhD Student, Theory of Computing Group, University of California at Berkeley

  • Focus Area: Potential Risks from Advanced AI
  • Content Type: Conversations


    Published: May 02, 2015

    The Open Philanthropy Project spoke with Mr. Christiano as part of its investigation into risks of artificial intelligence (AI). The key points from the conversation were:

    1. Mr. Christiano believes that speeding up AI progress may make society less prepared for the transition to advanced AI, but that this risk may be offset by other considerations (such as the possibility of using AI to address other existential risks), so he is highly uncertain about the overall effect on existential risk.
    2. If a funder concerned about AI safety began funding AI work, including work that might speed up the development of very advanced AI, the funder’s entry into the field would probably reduce potential risks from AI, because it could push the field in a more safety-oriented direction.
    3. Mr. Christiano would favor systemic improvements to the field (such as bringing in additional talented researchers), because he believes such improvements would generally increase the likelihood of success across projects in the field, including projects that reduce risks from unintended consequences of AI.


    Related Items

    • Potential Risks from Advanced AI

      EE Times: Building a Framework to Trust AI

      Helen Toner, Director of Strategy at Georgetown University’s Center for Security and Emerging Technology, talks about what safe, reliable AI should look like.

    • Potential Risks from Advanced AI

      Could Advanced AI Drive Explosive Economic Growth?

      This report evaluates the likelihood of ‘explosive growth’, meaning >30% annual growth of gross world product (GWP), occurring by 2100. Although frontier GDP/capita growth has been constant...

    • Potential Risks from Advanced AI

      Report on Whether AI Could Drive Explosive Economic Growth

      Since 1900, the global economy has grown by about 3% each year, meaning that it doubles in size every 20–30 years (a quick check of this figure appears after this list). I’ve written a report assessing whether significantly...

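
    As a quick check of the growth arithmetic in the two report summaries above: assuming constant compound growth at an annual rate g (an illustrative assumption, not a claim from the conversation), the implied doubling time is

    \[
    t_{\text{double}} = \frac{\ln 2}{\ln(1+g)}, \qquad
    \frac{\ln 2}{\ln 1.03} \approx 23.4 \text{ years}, \qquad
    \frac{\ln 2}{\ln 1.30} \approx 2.6 \text{ years},
    \]

    consistent with the stated 20–30 year doubling at ~3% growth, and under 3 years at the >30% “explosive growth” threshold.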