Berkeley Existential Risk Initiative — AI Standards (2022)
Open Philanthropy recommended a grant of $210,000 to the Berkeley Existential Risk Initiative to support the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence. An additional grant to the Center for Long-Term Cybersecurity will support related work.
This follows our July 2021 support and falls within our focus area of potential risks from advanced artificial intelligence.