Berkeley Existential Risk Initiative — CHAI Internships

Open Philanthropy recommended a grant of $500,000 over two years to support the Berkeley Existential Risk Initiative’s collaboration with the Center for Human-Compatible AI (CHAI) on research relevant to potential risks from artificial intelligence and machine learning. The funding will be used to support interns at CHAI by providing them with research stipends, software, and compute.

This follows our February 2022 support and falls within our focus area of potential risks from advanced artificial intelligence.

Berkeley Existential Risk Initiative — AI Risk Management Frameworks (2024)

Image courtesy of the Berkeley Existential Risk Initiative

Open Philanthropy recommended a grant of $713,000 to the Berkeley Existential Risk Initiative to support the development of risk management frameworks and the analysis of AI standards. An additional grant to the Center for Long-Term Cybersecurity will support related work.

This follows our April 2022 support and falls within our focus area of potential risks from advanced artificial intelligence.

Berkeley Existential Risk Initiative — MATS (November 2023)


Open Philanthropy recommended two grants totaling $2,641,368 to the Berkeley Existential Risk Initiative to support its collaboration with AI Safety Support on the ML Alignment & Theory Scholars (MATS) program. The MATS program is an educational seminar and independent research program that provides talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance. The program also connects participants with the Berkeley AI safety research community.

This follows our June 2023 support and falls within our focus area of potential risks from advanced artificial intelligence.

Berkeley Existential Risk Initiative — University Collaboration Program

Open Philanthropy recommended a grant of $70,000 to the Berkeley Existential Risk Initiative (BERI) to support its university collaboration program. Selected applicants become eligible for support and services from BERI that would be difficult or impossible to obtain through normal university channels. BERI will use these funds to increase the size of its 2024 cohort.

This falls within our focus area of potential risks from advanced artificial intelligence.

Berkeley Existential Risk Initiative — Scalable Oversight Dataset


Open Philanthropy recommended a grant of $70,000 to the Berkeley Existential Risk Initiative to support the creation of a scalable oversight dataset. The purpose of the dataset is to collect questions that non-experts can’t answer even with the internet at their disposal; these kinds of questions can be used to test how well AI systems can lead humans to the right answers without misleading them.

This falls within our focus area of potential risks from advanced artificial intelligence.

Berkeley Existential Risk Initiative — SERI MATS 4.0


Open Philanthropy recommended a grant of $428,942 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with in-person alignment research communities.

This grant will support the MATS program’s fourth cohort. This follows our November 2022 support for the previous iteration of MATS, and falls within our focus area of potential risks from advanced artificial intelligence. We also made a separate grant to AI Safety Support for this cohort.

Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars


Open Philanthropy recommended a grant of $2,047,268 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with the Berkeley alignment research community.

This grant will support the MATS program’s third cohort. This follows our April 2022 support for the previous iteration of MATS, and falls within our focus area of potential risks from advanced artificial intelligence.

Berkeley Existential Risk Initiative — SERI MATS Program

Open Philanthropy recommended three grants totaling $195,000 to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the SERI ML Alignment Theory Scholars (MATS) program. MATS is a two-month program in which students research problems related to AI alignment under the supervision of a mentor.

This follows our May 2021 support and falls within our focus area of potential risks from advanced artificial intelligence.