Fund for Alignment Research — Language Model Misalignment

  • Focus Area: Potential Risks from Advanced AI
  • Organization Name: Fund for Alignment Research
  • Organization Name: Language Model Safety Fund
  • Amount: $425,800
  • Award Date: October 2021

    Open Philanthropy recommended a grant of $425,800 to the Fund for Alignment Research, led by Ethan Perez, to support salaries and equipment for projects related to misalignment in language models. Perez plans to hire and supervise four engineers to work on these projects.

    This falls within our focus area of potential risks from advanced artificial intelligence.

    Related Items

    • Potential Risks from Advanced AI

      AI Safety Support — Research on Trends in Machine Learning

      Open Philanthropy recommended a grant of $42,000 to AI Safety Support to scale up a research group, led by Jaime Sevilla, which studies trends in machine learning. This...

    • Potential Risks from Advanced AI

      Open Phil AI Fellowship — 2022 Class

      Open Philanthropy recommended a total of approximately $1,840,000 over five years in PhD fellowship support to eleven promising machine learning researchers who together represent the 2022 class of the...

    • Potential Risks from Advanced AI

      OpenMined — Research on Privacy-Enhancing Technologies and AI Safety

      Open Philanthropy recommended a grant of $28,320 to OpenMined to support research on the intersection between privacy-enhancing technologies and technical infrastructure for AI safety. This falls within our focus area of potential...
