
Daniel Kang, Jacob Steinhardt, Yi Sun, and Alex Zhai — Study of the Robustness of Machine Learning Models

  • Category: Longtermism
  • Focus Area: Potential Risks from Advanced AI
  • Amount: $2,351
  • Award Date: November 2018

    Grant investigator: Daniel Dewey

    This page was reviewed but not written by the grant investigator. Daniel Kang, Jacob Steinhardt, Yi Sun, and Alex Zhai also reviewed this page prior to publication.


    Open Philanthropy contracted with Daniel Kang, Jacob Steinhardt, Yi Sun, and Alex Zhai for $2,351 to reimburse technology costs incurred in their study of the robustness of machine learning models, especially robustness to unforeseen adversaries. We believe this work will accelerate progress on adversarial, worst-case robustness in machine learning.
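
    For readers unfamiliar with the term, "worst-case robustness" refers to a model's accuracy on inputs deliberately perturbed to cause errors, rather than on typical inputs. As a minimal illustration (not drawn from the grantees' work), the sketch below implements the fast gradient sign method (FGSM), a standard adversarial attack; it assumes PyTorch and a hypothetical pretrained classifier named model that maps image batches to logits.

    # Minimal sketch of the fast gradient sign method (FGSM), a standard
    # first-order adversarial attack. Illustrative only; not taken from the
    # grantees' work. Assumes PyTorch and a pretrained classifier `model`
    # (hypothetical here) mapping image batches to logits.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb inputs x within an L-infinity ball of radius epsilon
        so as to increase the classification loss on labels y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the per-pixel direction that most increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Clamp back to the valid image range.
        return x_adv.clamp(0.0, 1.0).detach()

    # Worst-case (adversarial) accuracy: evaluate on perturbed inputs.
    # preds = model(fgsm_attack(model, images, labels)).argmax(dim=1)
    # robust_acc = (preds == labels).float().mean()

    Robustness to unforeseen adversaries goes a step further: roughly, it asks how a model holds up against perturbation types its defenses were not designed around, not just attacks like the one sketched above.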

    This falls within our focus area of potential risks from advanced artificial intelligence. This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so.

    Related Items

    • Rethink Priorities — AI Governance Research (2022) (Longtermism)

      Open Philanthropy recommended a grant of $2,728,319 over two years to Rethink Priorities to expand its research on topics related to AI governance. This follows our July 2021 support and falls...

    • Longview Philanthropy — Nuclear Security Grantmaking (Longtermism)

      Open Philanthropy recommended a grant of $500,000 over two years to Longview Philanthropy to support Carl Robichaud’s nuclear security grantmaking. This falls within our focus area of global catastrophic...

    • Berkeley Existential Risk Initiative — David Krueger Collaboration (Longtermism)

      Open Philanthropy recommended a grant of $40,000 to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger. This falls within our focus area of potential...