• Focus Areas
    • Cause Selection
    • Global Health & Wellbeing
      • Farm Animal Welfare
      • Global Aid Policy
      • Global Health & Development
      • Scientific Research
      • South Asian Air Quality
    • Longtermism
      • Biosecurity & Pandemic Preparedness
      • Effective Altruism Community Growth
      • Potential Risks from Advanced AI
    • Other Areas
      • Criminal Justice Reform
      • History of Philanthropy
      • Immigration Policy
      • Land Use Reform
      • Macroeconomic Stabilization Policy
  • Grants
  • Research & Updates
    • Research Reports
    • Blog Posts
    • Notable Lessons
    • In the News
  • About Us
    • Grantmaking Process
    • How to Apply for Funding
    • Team
    • Get Email Updates
  • We’re hiring!

Stanford University — Machine Learning Security Research Led by Dan Boneh and Florian Tramer

  • Category: Longtermism
  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Stanford University
  • Amount: $100,000

  • Award Date: July 2018


    Grant investigator: Daniel Dewey

    This page was reviewed but not written by the grant investigator. Stanford University staff also reviewed this page prior to publication.

    The Open Philanthropy Project recommended a gift of $100,000 to Stanford University to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer. Machine learning security probes worst-case performance of learned models, and we consider work in this area a promising way of ensuring that models are “doing the right thing” in a generalizable way.
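To make the "worst-case performance" framing concrete, here is an illustrative sketch (not drawn from the funded research) of the canonical adversarial-example construction, the Fast Gradient Sign Method, applied to a toy logistic-regression model. All weights and inputs below are made up for the example.

```python
import numpy as np

# Toy logistic-regression "model" with fixed, made-up weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Model's confidence that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: nudge the input in the direction
    that most increases the loss, with each feature bounded by eps."""
    p = predict(x)
    # Gradient of the binary cross-entropy loss w.r.t. the input is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 1.0])  # clean input, true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.3)

# Confidence on the true class drops under the attack (≈0.98 → ≈0.90).
print(predict(x), predict(x_adv))
```

Machine learning security asks how bad this kind of degradation can get for an adversary who optimizes the perturbation, and how to train models that remain correct under it.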

    Our main reasons for making this gift include:

    • We consider Florian Tramer a very strong PhD student who is currently conducting excellent machine learning security work.
    • We expect excellent machine learning security work to be very important for AI safety.
    • Generally speaking, we expect increased funding in areas relevant to AI safety—like machine learning security—to move the field in a direction we consider positive and aligned with our interests in mitigating potential risks from advanced AI; we therefore consider this gift a small nudge in that direction.

    This gift falls within our focus area of potential risks from advanced artificial intelligence.

    © Open Philanthropy 2022