The Open Philanthropy Project seeks to hire people to specialize in key analyses relevant to potential risks from advanced artificial intelligence.
Applicants who have strong existing qualifications for these roles should apply directly for them. However, we also note that the Research Analyst role is a possible route to the roles listed here.
The application window for this role is now closed.
About the Open Philanthropy Project
The Open Philanthropy Project identifies outstanding giving opportunities, makes grants, follows the results, and publishes our findings. Our main funders are Cari Tuna and Dustin Moskovitz, a co-founder of Facebook and Asana.
We aim to do as much good as possible with the resources we have. Rather than choose focus areas based on funders’ personal passions, we stress openness to many possibilities and have chosen our focus areas based on importance, neglectedness and tractability. We’re particularly interested in high-risk, high-reward giving that may be too unconventional or long-term for other funders. Our current giving areas include global health and development, scientific research, criminal justice reform, farm animal welfare, biosecurity and pandemic preparedness, and potential risks from advanced artificial intelligence.
We currently give away over $100 million per year. We expect our giving to grow, in line with our current funders’ goal of giving away the vast majority of their wealth within their lifetimes. We also aspire to become a key source of giving recommendations for other major philanthropists.
One of our focus areas is potential risks from advanced artificial intelligence. We currently see our work on this cause as being particularly bottlenecked by staff capacity.
As such, we are seeking to hire people to focus on any of the following areas:
AI alignment
There is a growing set of researchers working on topics related to AI alignment, which we summarize as the problem of creating AI systems that will reliably do what their users want them to do, even as those systems become much more capable than their users across a broad range of tasks.
Relevant research includes work on learning from human preferences from OpenAI and DeepMind; the Machine Intelligence Research Institute’s work on highly reliable agent design; the Center for Human-Compatible AI’s work on inverse reinforcement learning and other topics; Ought’s work on amplification; work done by dedicated alignment-problem-focused teams at DeepMind and the Future of Humanity Institute; and work on adversarial examples, robustness to distributional shift, and interpretability at a variety of institutions.
We’re looking for at least one person who can work full-time on understanding the many ongoing lines of relevant research: what the basic motivation is for each, how it might prove helpful for AI alignment, what its potential limitations are, and what sorts of researchers are best suited to advance it further. Doing so could be useful for:
- Advising Open Philanthropy about what lines of research to highlight and encourage via our Fellows Program, in-person workshops, and grantmaking.
- Helping Open Philanthropy determine when a funding approach beyond our existing mechanisms (such as a prize, or a dedicated effort to reach researchers with a specific profile) might help bring attention and progress to a particularly promising and neglected line of research.
- Helping safety-motivated researchers find lines of research that are valuable and fit their skills and interests.
- Helping to create more common knowledge among researchers of the most promising-seeming agendas, and the major possible critiques of the most prominent work.
The ideal candidate for this role would have very strong technical abilities, and would likely have strong potential as an academic or industry AI researcher if they were to pursue that route. However, this hire would not perform or publish research. They would focus on understanding, critically evaluating, and communicating about a variety of lines of research, rather than focus on advancing the frontier of a particular one.
This hire would collaborate closely with Daniel Dewey, who focuses on identifying potential grantees and designing grant mechanisms (e.g., our Fellows Program) and relies on part-time technical advisors to assess the relevance of research.
We think that the right person could play an extremely important part in the growth of the AI alignment field. Although a growing number of research groups, academic workshops, and students are focused on AI alignment, we’ve found little systematic work pulling different framings and approaches to AI alignment together into a more complete picture, and we’re aware of only a few people with in-depth knowledge of more than one AI alignment subproblem. This kind of knowledge would be directly useful for our funding and field-building work, and we expect that it would be very useful to AI alignment researchers who are just getting started or who are trying to make high-level decisions about which research directions to pursue.
AI timelines
Our current views and priorities are heavily informed by the belief that there is at least a 10% likelihood of transformative AI being developed within the next 20 years. We would greatly value the ability to form better-grounded views on this topic.
We’re looking for at least one person who can work full-time on considering the probability distribution over when transformative AI will be developed, and on understanding and documenting the most important considerations for this question. We don’t think it’s feasible to have highly reliable views on this topic, due to the long time frame and lack of clearly applicable frameworks, but we seek to form maximally well-informed and well-calibrated views, via methods including:
- Thoroughly studying and understanding the available literature on improving the accuracy of judgments and forecasts, with a particular eye toward what practices are most likely to be useful for the exercise of making forecasts about transformative AI.
- Having in-depth conversations with leading AI researchers to understand their views, and studying relevant technical topics (particularly machine learning, cognitive science and neuroscience) as needed. Particularly relevant topics are likely to include the “equivalent raw computing power” of the human brain; the question of what increases in computing power can be expected as companies invest more in developing hardware optimized for AI; and the question of what intellectual functions can currently be performed by humans (and other animals) but not by AI systems.
- Studying relevant historical patterns, such as whether past AI forecasts have been systematically over-aggressive.
The ideal candidate for this role would have strong technical abilities and the ability to engage critically with academic literature on a variety of topics (particularly AI and machine learning, cognitive science and neuroscience). They would also have a strong interest in forecasting and an outstanding ability to explain and document their thinking for other Open Philanthropy Project staff.
We note that we are unaware of anyone who currently specializes full-time in work along these lines, which may mean that the value of entering this area is particularly high at the moment.
AI governance and strategy
In addition to supporting technical work on AI alignment, we seek to support work on AI governance and strategy: considering the social and geopolitical issues that could arise in a world where transformative AI is close to being developed, and thinking through potential methods (such as industry coalitions or formal and informal international agreements) for raising the probability that transformative AI is developed carefully and/or in a cooperative context. (Also see the Future of Humanity Institute’s Governance of AI Program.)
We seek people who can work full-time on AI governance and strategy, both in terms of making direct intellectual contributions to advance/seed the field and in terms of helping advise our funding in this space. The ideal candidate for this role would have a strong background in technology policy, government, national security, political science, and/or international relations, as well as a strong understanding of potential risks from advanced AI and sufficient technical ability to engage with AI researchers at a high level.
Key qualifications for each area of focus are listed above. To recap:
- The ideal hire for AI alignment would have very strong technical abilities, and would likely have strong potential as an academic or industry AI researcher if they were to pursue that route. However, this hire would not perform or publish research. They would focus on understanding, critically evaluating, and communicating about a variety of lines of research.
- The ideal hire for AI timelines would have strong technical abilities and the ability to engage critically with academic literature on a variety of topics (particularly AI and machine learning, cognitive science and neuroscience). They would also have a strong interest in forecasting and an outstanding ability to explain and document their thinking for other Open Philanthropy Project staff.
- The ideal hire for AI governance and strategy would have a strong background in technology policy, government, national security, political science, and/or international relations, as well as a strong understanding of potential risks from advanced AI and sufficient technical ability to engage with AI researchers at a high level.
In addition, the best candidates for any of these roles will have:
- Passion for the Open Philanthropy Project’s core values of impact maximization and openness. Familiarity with effective altruism is a plus.
- Comfort thinking in terms of expected value and using systematic, quantitative frameworks.
- Strong self-direction and self-motivation.
- Comfort with open-ended questions, where no clear precedents or guidelines exist.
- Directness and openness in giving and receiving feedback.
- Comfort with intense discussion and debate, including challenging one’s manager.
- A drive to question and improve everything about the organization rather than taking it as given.
These positions are in San Francisco, where the Open Philanthropy Project is based. They are office-based jobs that can partly be done remotely.
We will sponsor applications for work authorization as needed, though we cannot guarantee that such applications will be accepted.
As part of our dedication to equal employment opportunity and the diversity of our staff, the Open Philanthropy Project does not discriminate on the basis of race, color, national origin, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity, religion, or any other basis protected by federal, state, or local law. We especially encourage applications from women and minorities.
If you need assistance or an accommodation due to a disability, you may contact us at firstname.lastname@example.org.