Grant Title | Organization Name | Focus Area | Amount | Date
FAR AI — AI Interpretability Research | FAR AI | Potential Risks from Advanced AI | $100,000 |
University of Illinois — AI Alignment Research | University of Illinois | Potential Risks from Advanced AI | $80,000 |
FAR AI — FAR Labs Office Space | FAR AI | Potential Risks from Advanced AI | $280,000 |
San José State University — AI Research | California State University, San José | Potential Risks from Advanced AI | $39,000 |
Longview Philanthropy — AI Policy Development at the OECD | Longview Philanthropy | Potential Risks from Advanced AI | $770,076 |
Epoch — AI Worldview Investigations | Epoch | Potential Risks from Advanced AI | $188,558 |
University of Tuebingen — Adversarial Robustness Research | University of Tuebingen | Potential Risks from Advanced AI | $575,000 |
Cornell University — AI Safety Research | Cornell University | Potential Risks from Advanced AI | $342,645 |
Brian Christian — Psychology Research | | Potential Risks from Advanced AI | $37,903 |
Alignment Research Engineer Accelerator — AI Safety Technical Program | Alignment Research Engineer Accelerator | Potential Risks from Advanced AI | $18,800 |
Center for AI Safety — Philosophy Fellowship and NeurIPS Prizes | Center for AI Safety | Potential Risks from Advanced AI | $1,433,000 |
Responsible AI Collaborative — AI Incident Database | Responsible AI Collaborative | Potential Risks from Advanced AI | $100,000 |
Neel Nanda — Interpretability Research | Neel Nanda | Potential Risks from Advanced AI | $70,000 |
University of Toronto — Alignment Research | University of Toronto | Potential Risks from Advanced AI | $80,000 |
Adam Jermyn — Independent AI Alignment Research | | Potential Risks from Advanced AI | $19,231 |
UC Santa Cruz — Adversarial Robustness Research (2023) | University of California, Santa Cruz | Potential Risks from Advanced AI | $114,000 |
University of British Columbia — AI Alignment Research | University of British Columbia | Potential Risks from Advanced AI | $100,375 |
Purdue University — Language Model Research | Purdue University | Potential Risks from Advanced AI | $170,000 |
Simon McGregor — AI Risk Workshop | | Potential Risks from Advanced AI | $7,000 |
FAR AI — General Support (2022) | FAR AI | Potential Risks from Advanced AI | $625,000 |
FAR AI — Inverse Scaling Prize | FAR AI | Potential Risks from Advanced AI | $49,500 |
Apart Research — AI Alignment Hackathons | Apart Research | Potential Risks from Advanced AI | $89,000 |
FAR AI — Interpretability Research | FAR AI | Potential Risks from Advanced AI | $50,000 |
Jérémy Scheurer — Independent AI Alignment Research | | Potential Risks from Advanced AI | $110,000 |
Georgetown University — Policy Fellowship (2022) | Georgetown University | Potential Risks from Advanced AI | $239,061 |
Northeastern University — Large Language Model Interpretability Research | Northeastern University | Potential Risks from Advanced AI | $562,128 |
AI Safety Hub — Safety Labs | AI Safety Hub | Potential Risks from Advanced AI | $47,359 |
AI Safety Support — SERI MATS Program | AI Safety Support | Potential Risks from Advanced AI | $1,538,000 |
Alignment Research Center — General Support (November 2022) | Alignment Research Center | Potential Risks from Advanced AI | $1,250,000 |
Jacob Steinhardt — AI Alignment Research | | Potential Risks from Advanced AI | $100,000 |
Center for AI Safety — General Support (2022) | Center for AI Safety | Potential Risks from Advanced AI | $5,160,000 |
Mordechai Rorvig — Independent AI Journalism | | Potential Risks from Advanced AI | $110,000 |
Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars | Berkeley Existential Risk Initiative | Potential Risks from Advanced AI | $2,047,268 |
Berkeley Existential Risk Initiative — General Support (2022) | Berkeley Existential Risk Initiative | Potential Risks from Advanced AI | $100,000 |
Catherine Brewer — OxAI Safety Hub | OxAI Safety Hub | Potential Risks from Advanced AI | $11,622 |
Thomas Liao — Foundation Model Tracker | Foundation Model Tracker | Potential Risks from Advanced AI | $15,000 |
Conjecture — SERI MATS Program in London | Conjecture | Potential Risks from Advanced AI | $457,380 |
Centre for the Governance of AI — Research Assistant | Centre for the Governance of AI | Potential Risks from Advanced AI | $19,200 |
AI Alignment Awards — Shutdown Problem Contest | AI Alignment Awards | Potential Risks from Advanced AI | $75,000 |
Centre for the Governance of AI — Compute Strategy Workshop | Centre for the Governance of AI | Potential Risks from Advanced AI | $50,532 |
AI Safety Hub — Startup Costs | AI Safety Hub | Potential Risks from Advanced AI | $203,959 |
Centre for Effective Altruism — Harvard AI Safety Office | Centre for Effective Altruism | Potential Risks from Advanced AI | $250,000 |
Fund for Alignment Research — Language Model Misalignment (2022) | FAR AI | Potential Risks from Advanced AI | $463,693 |
Arizona State University — Adversarial Robustness Research | Arizona State University | Potential Risks from Advanced AI | $200,000 |
Redwood Research — General Support (2022) | Redwood Research | Potential Risks from Advanced AI | $10,700,000 |
Daniel Dewey — AI Alignment Projects (2022) | | Potential Risks from Advanced AI | $175,000 |
Center for a New American Security — Work on AI Governance | Center for a New American Security | Potential Risks from Advanced AI | $4,816,710 |
Carnegie Mellon University — Research on Adversarial Examples | Carnegie Mellon University | Potential Risks from Advanced AI | $343,235 |
Stanford University — AI Alignment Research (Barrett and Viteri) | Stanford University | Potential Risks from Advanced AI | $153,820 |
Berkeley Existential Risk Initiative — Language Model Alignment Research | Berkeley Existential Risk Initiative | Potential Risks from Advanced AI | $30,000 |