• Focus Areas
    • Cause Selection
    • Global Health & Wellbeing
      • EA Community Growth (Global Health and Wellbeing)
      • Farm Animal Welfare
      • Global Aid Policy
      • Global Health & Development
      • Scientific Research
      • South Asian Air Quality
    • Longtermism
      • Biosecurity & Pandemic Preparedness
      • Effective Altruism Community Growth
      • Potential Risks from Advanced AI
    • Other Areas
      • Criminal Justice Reform
      • History of Philanthropy
      • Immigration Policy
      • Land Use Reform
      • Macroeconomic Stabilization Policy
  • Grants
  • Research & Updates
    • Research Reports
    • Blog Posts
    • Notable Lessons
    • In the News
  • About Us
    • Grantmaking Process
    • How to Apply for Funding
    • Team
    • Stay Updated
  • We’re hiring!

University of Southern California — Adversarial Robustness Research

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: University of Southern California
  • Amount: $320,000

  • Award Date: August 2021

    Open Philanthropy recommended a grant of $320,000 over three years to the University of Southern California to support early-career research by Robin Jia on adversarial robustness and out-of-distribution generalization as a means to improve AI safety.

    This falls within our focus area of potential risks from advanced artificial intelligence.

    Related Items

    • Potential Risks from Advanced AI

      AI Safety Support — Research on Trends in Machine Learning

      Open Philanthropy recommended a grant of $42,000 to AI Safety Support to scale up a research group, led by Jaime Sevilla, which studies trends in machine learning. This...

      Read more
    • Potential Risks from Advanced AI

      Open Phil AI Fellowship — 2022 Class

      Open Philanthropy recommended a total of approximately $1,840,000 over five years in PhD fellowship support to eleven promising machine learning researchers who together represent the 2022 class of the...

      Read more
    • Potential Risks from Advanced AI

      OpenMined — Research on Privacy-Enhancing Technologies and AI Safety

      Open Philanthropy recommended a grant of $28,320 to OpenMined to support research on the intersection between privacy-enhancing technologies and technical infrastructure for AI safety. This falls within our focus area of potential...

      Read more
    © Open Philanthropy 2022