Open Philanthropy is hiring for several different grantmaking, research, and operations roles across multiple focus areas within our Global Catastrophic Risks (GCR) team. Details for specific roles are included under the focus area subheadings below. The official deadline has now passed. However, we are still open to applications to join our GCR team, and if you’ve received this link then we would welcome your application.
1. About the Global Catastrophic Risks team
The Global Catastrophic Risks team, formerly known as the Longtermism team, works across focus areas aimed at reducing the chances of events that might cause major harm on a global scale. It is broadly divided into the following focus areas, which are described in more detail under the subheadings for each team.
- Biosecurity and Pandemic Preparedness
- Global Catastrophic Risks Capacity Building
- Potential Risks from Advanced Artificial Intelligence (divided into AI Governance and Policy and Technical AI Safety)
- Global Catastrophic Risks Cause Prioritization (not a formal focus area, but still a unit we are hiring for)
2. About working at Open Philanthropy
The information below applies to all roles and has been included here to avoid repetition. More specific details about compensation, location, and other key information are listed underneath individual roles.
Our benefits package includes:
- Excellent health insurance (we cover 100% of premiums within the US for you and any dependents).
- Dental, vision, and life insurance for you and your family.
- Four weeks of PTO recommended per year.
- Four months of fully paid family leave.
- A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive.
- Support for remote work (for roles that can be done remotely) — we’ll cover a remote workspace outside your home if you need one, or connect you with an Open Phil coworking hub in your city.
We can’t always provide every benefit we offer US staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).
Across roles, we value staff who are able to communicate clearly and honestly about what they think, are comfortable giving and receiving feedback, and are interested in taking ownership of their work and proactively seeking ways to help Open Philanthropy meet its goals. For more information about the qualities we look for in employees at Open Philanthropy, see our operating values.
We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.
3. About the application process and timeline
You can apply to as many roles as you like using the application form. To do this, select which roles you are interested in, and the form will prompt you to complete any questions specific to those roles. If you apply to multiple roles, advancing in the process means, by default, that you are still being considered for all of the roles you applied to. Conversely, a rejection at any stage applies to all of the roles you applied to unless stated otherwise.
Note that the application form does not have an autosave function. You may wish to complete your answers in a separate document to avoid losing your progress.
The evaluation process for many roles is likely to be time-intensive and involve several work tests and interviews. We pay honoraria for time spent completing our work tests.
For any of the roles listed below (including those without a specific disclaimer to this effect), we may not make a hire if no applicant meets our threshold for making an offer. However, there is no such thing as a “perfect” candidate for any of our roles. If you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply.
The application deadline was extended to Monday, November 27th at 11:59pm PST and has now passed. As noted above, we remain open to applications to our GCR team, and we may give priority to candidates who are able to move through the process sooner.
If you need assistance or an accommodation due to a disability or have any other questions about applying, please contact jobs@openphilanthropy.org.
4. AI Governance and Policy (AIGP)
This program sits under our broader focus area of Potential Risks from Advanced Artificial Intelligence, and aims to distribute >$100 million in grants each year. The team supports work related to AI governance research (to improve our collective understanding of how to achieve beneficial and effective AI governance) and AI governance practice and influence (to improve the odds that good governance ideas are actually implemented by companies, governments, and other actors). You can read more about the program’s goals, priorities, and work so far here.
4.1 AIGP Program Associate / Senior Program Associate (Generalist)
About the role
You would most likely report to Senior Program Officer Luke Muehlhauser, who leads the AI governance and policy (AIGP) team. You would help expand the scope of our AI governance work by investigating grant opportunities, vetting empirical claims, and helping to develop our program strategy.
Your work would include a mix of:
- Grant investigations. You will work to identify promising new grant opportunities, analyze whether grants will cost-effectively advance our priorities, write up your reasoning for making specific grants, and follow up on the progress of the grants we recommend (e.g. running check-in calls with grantees and advising Open Philanthropy on whether to renew the grant).
- Maintaining relationships in the field. You will have regular check-ins with a variety of stakeholders and experts in the AI governance space in order to stay abreast of new developments, and summarize updates for the team. You will also respond to incoming requests for advice on e.g. who to hire or what research questions to pursue.
- Research to inform program strategy. You will help clarify and further develop the program’s strategy by e.g. fact-checking key claims underpinning our work, or collecting and analyzing the arguments for and against a particular strategy.
- Seizing inbound opportunities for impact. In part due to Open Philanthropy’s unique position in the ecosystem (neutral-ish third party + funder + relationships with senior people at labs and in government), AIGP receives many inbound opportunities for impact (e.g. to serve on various boards or oversee various projects) that current AIGP staff must decline due to limited bandwidth. You could seize some of these opportunities as they come to us.
Who might be a good fit
While there are no formal degree or experience level requirements for this role, we expect competitive candidates to have demonstrated understanding of and interest in our AI governance work, and preferably have some existing work experience in the field.
You might be a good fit if you:
- Have strong analytical and critical thinking skills, especially the ability to quickly grasp complex issues, find the best arguments for and against a proposition, and skeptically evaluate claims. You should also feel comfortable thinking in terms of expected value and reasoning quantitatively about tradeoffs and evidence.
- Will bring pragmatism and can-do energy to a role where we often need to make quick, imperfect decisions with the information we have.
- Are able to take ownership of poorly scoped projects and think from first principles about how to accomplish their goals (or notice that the goals themselves should change).
- Are a flexible thinker, and able to quickly absorb and adapt to new information. The team is working in the context of a quickly evolving AI governance landscape, and priorities and projects may change rapidly as a result.
- Communicate in a clear, information-dense and calibrated way, both in writing and in person. The ability to write quickly, cleanly and with good reasoning transparency will be particularly important, as the team shares ideas and feedback in writing by default.
- Are broadly sympathetic to the basic goals, premises, and uncertainties that inform our work on this program (while perhaps disagreeing with us on many details), such as those articulated in Our AI governance grantmaking so far, Most Important Century, AI could defeat…, and Without specific countermeasures…
It’s a bonus but not required to have:
- Basic background familiarity with machine learning.
- Past experience in US/UK/EU policymaking or government work, and/or past experience working on AI governance at a company contributing to the cutting edge of AI R&D.
- A strong network of contacts within the AI governance and policy field.
- A graduate degree.
Specializations
While this role is well-suited to generalists, we are also open to exploring opportunities for applicants to delve into one or more specializations, depending on their interests, skills, and background.
These specializations would involve:
- Having a good understanding of the key considerations, debates, dynamics, actors, and major projects/programs in your specialization.
- Identifying major gaps/bottlenecks in your specialization, and trying to fill them.
- Maintaining a list of top priorities in your specialization (and justifications for them), and seeking to achieve the goals implied by those priorities.
- Building and maintaining the relationships necessary to achieve these objectives.
- Being the default grant investigator for grants mainly related to your specialization.
Though this list is non-exhaustive, some examples of specializations we are particularly interested in include:
- US AI Policy Development: Research and recommendations identifying impactful and tractable US government policy changes, ensuring these policies are vetted and developed in enough detail to be advocated and implemented. Note: due to its high priority, we are separating US policy advocacy (e.g., advocating for the adoption of beneficial policies by AI labs and the US government) into a distinct role.
- EU AI Policy: Research and recommendations identifying impactful and tractable policy changes in the EU and EU national governments, ensuring these policies are vetted and developed in enough detail to be advocated and implemented.
- UK AI Policy: Research and recommendations identifying impactful and tractable UK government policy changes, ensuring these policies are vetted and developed in enough detail to be advocated and implemented.
- China: Understanding trends and developments in the PRC’s AI ecosystem, and enhancing relationships and understanding between the West and the PRC on issues around very advanced AI.
- International AI Governance: Research and recommendations identifying tractable paths to effective, broad, and multilateral AI governance, and working to improve coordination and cooperation among key state actors.
- Frontier Lab Policy: The development and implementation of frontier AI lab policies and practices aimed at reducing risks, such as model evaluations, incident reporting, and third-party audits.
- Biosecurity and AI: Examination of the risks and threat models at the intersection of biotechnology and AI, along with the development of safeguards to prevent misuse of AI systems in biological attacks.
- AI Governance Talent Pipelines: Building and maintaining pipelines that identify and channel talented individuals towards the most impactful roles in AI governance.
- Law: Developing legal frameworks for AI, exploring relevant legal issues such as liability and antitrust, encouraging solid legal drafting of impactful AI policies, and understanding the legal aspects of various AI policy proposals.
Other details
- Location: This is a full-time, permanent position with flexible work hours and location. Occasional travel to San Francisco and/or Washington, D.C. may be required, and you will generally need to be available for calls during most of the U.S. workday.
- We are happy to consider candidates based outside of the U.S., and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The starting compensation for this role will be based on a range of $116,059.34 to $143,658.71 per year, which would include a base salary of $100,921.16 to $124,920.62 and an unconditional 401(k) grant of $15,138.17 to $18,738.09 (see the brief note after this list on how these figures fit together).
- These compensation figures assume a remote location; there would be geographic adjustments upwards for candidates based in the San Francisco Bay Area or Washington, D.C.
- All compensation will be distributed in the form of take-home salary for internationally based hires.
- Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.
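A note on how the compensation figures above relate (this is an informal reading of the posted numbers, not an official formula): across the ranges listed in this post, the total appears to equal the base salary plus the unconditional 401(k) grant, with the grant coming to roughly 15% of base and capping out at $20,000. A minimal sketch under that assumed pattern:

```python
# Informal illustration only. The 15% rate and the $20,000 cap are inferred from the
# ranges posted in this document, not an official Open Philanthropy policy.

def total_compensation(base_salary: float) -> float:
    """Base salary plus the unconditional 401(k) grant, as the listed figures appear to work."""
    k401_grant = min(0.15 * base_salary, 20_000.00)
    return base_salary + k401_grant

# Low end of the range above: 100,921.16 + 15,138.17 ≈ 116,059.33,
# which matches the posted $116,059.34 to within a cent of rounding.
print(round(total_compensation(100_921.16), 2))
```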
4.2 Senior Program Associate, US AI policy advocacy
About the role
You would play the lead role in our work on US AI policy advocacy, responsible for developing our grantmaking strategy and priorities, making grants, and building relationships with grantees and others working on US AI policy advocacy. Once a strategy is established, you would be responsible for recruiting and managing relevant team members as needed to help you execute it effectively.
Successful US AI policy advocacy may involve a wide range of strategies, including grantmaking, working with lobbyists, and coordinating media campaigns. In the first year, you would likely direct $10M-$30M in funding for US AI advocacy, depending on available opportunities, with potential for substantial growth thereafter if the impact seems compelling.
Some of your key responsibilities would be to:
- Work closely with the rest of our AI team, key grantees, and other allies working on AI policy development and advocacy to:
- Prioritize and develop a plan of action for working on US AI policy options that are both high-impact and tractable, such as the tentative policy ideas here.
- Develop strategies to advocate for good policies to be adopted.
- Implement promising strategies rapidly and effectively.
- Source, investigate, and develop promising grant opportunities and contractors to promote high-quality US AI policy and expand US AI policy advocacy as an impact area.
- Establish and maintain relationships with current and prospective grantees, funders, and other stakeholders in the field.
- Follow up with grantees periodically and keep abreast of their progress to inform our evaluation efforts.
- Represent Open Philanthropy at relevant external meetings and conferences.
Who might be a good fit
This is a nascent field and we don’t expect to find someone who has been working for many years on AI policy advocacy. We have a few possible profiles in mind for potential dream candidates, such as:
- An AI governance professional with a moderate (at least) understanding of US policymaking processes, the raw aptitudes to grow into an effective funder of advocacy work (see below), and the humility to draw heavily from the wisdom of more experienced US policy advocates (who have probably mostly worked on non-AI policy issues).
- A seasoned policy professional who has spent years working in Washington D.C., with a strong track record of driving influential policy changes, either from within government or as an advocate or lobbyist. This person might have little professional experience working on AI issues specifically, but should have significant familiarity with potential existential risks from AI and popular ideas for how to mitigate them.
Regardless of one’s degree of US policy advocacy experience, we’re looking for:
- Familiarity with governance and policy ideas motivated by transformative AI (TAI), and ideally an existing vision for how Open Phil could use advocacy funding to advance those ideas.
- Experience working quickly, proactively, and forcefully to deliver on major goals. This is not an academic position.
- Successful advocacy experience (not required, but a major plus).
- Commitment to maximizing impact per dollar spent in terms of expected value.
- Creativity and willingness to think broadly about paths to impact, including making rapid strategy pivots in response to new information.
- Sufficient analytical and quantitative skill to assess the cost-effectiveness of potential grant opportunities and to critically evaluate assessments by others.
- Strong interpersonal skills, and the ability to work effectively with ideologically and culturally diverse partner organizations.
- Management experience (not required, but a plus).
- Strong written and oral communication skills, especially the ability to explain your views clearly.
- Ability to travel periodically (e.g. for meetings and conferences).
Other details
- Location: This role would be based in Washington, D.C., and may from time to time require travel to other locations.
- We are happy to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The starting compensation for this role will be $143,658.71 per year, which would include a base salary of $124,920.62 and an unconditional 401(k) grant of $18,738.09.
- Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.
4.3 Senior Program Associate, Technical AI Governance Mechanisms
About the role
By technical AI governance, we mean technical work that primarily aims to improve the efficacy of AI governance interventions. This work could involve, but is not necessarily limited to:
- Compute governance, e.g., governing access to significant quantities of the kinds of computing hardware used to train powerful AI systems. See Grunewald (2023).
- Technical mechanisms for improving AI coordination and regulation, e.g., hardware security features on AI chips that enable monitoring and verification. See Shavit (2023).
- Privacy-preserving transparency mechanisms, e.g., establishing protocols for auditors to access and evaluate private AI models without exposing sensitive data or intellectual property. See OpenMined (2023).
- Technical standards development, e.g., translating AI safety work into standards that can be adopted by organizations and projects aiming to build powerful AI systems. See O’Keefe et al. (2022).
- Model evaluations, e.g., assessing AI systems for potential extreme risks such as offensive cyber capabilities, thereby informing decisions about model training, deployment, and security measures. See Shevlane et al. (2023).
- Information security, e.g., efforts to enhance the security of AI systems by bolstering information security practices within AI development and deployment. See Zabel & Muehlhauser (2019).
You would play a key role in growing our grantmaking related to technical AI governance, and be responsible for developing our grantmaking strategy and priorities, making grants, connecting technical experts with policymakers, and building relationships with grantees and others in the AI governance space.
Some of your key responsibilities would be to:
- Work closely with the rest of our AI team, key grantees, and other allies working on AI governance to ensure that the necessary technical expertise can inform research and policy development.
- Source, investigate, and develop promising grant opportunities and contractors to create high-quality technical AI governance research and expand technical AI governance as an impact area.
- Establish and maintain relationships with stakeholders in the field, including current and prospective grantees and funders, in addition to providing them with technical advisory support.
- Follow up with grantees periodically and keep abreast of their progress to inform our evaluation efforts.
Who might be a good fit
Technical AI governance is still relatively nascent, so we expect exciting candidates to come from a wide range of technical backgrounds, potentially including (but not limited to):
- Hardware engineering (with a particular focus on the computing hardware used for large-scale AI training and inference)
- Computer science (experience with machine learning is particularly valuable)
- Electrical engineering
- Information security
- Cryptography
We have a few possible profiles in mind for dream candidates, such as:
- A hardware engineer at a leading semiconductor company or cloud compute provider who has recently become interested in leveraging their knowledge to help develop hardware-focused governance options that reduce risks from advanced AI systems.
- A machine learning researcher or engineer who wants to pivot to improving technical AI governance research and policy.
- An information security specialist who has played a pivotal role in safeguarding sensitive data and systems at a major tech company, and who is now eager to apply their expertise to the broader challenge of ensuring the safety and security of advanced AI systems.
You might be a good fit if you have a technical background akin to those listed above, and also:
- Have strong analytical and critical thinking skills, especially the ability to quickly grasp complex issues, find the best arguments for and against a proposition, and skeptically evaluate claims. You should also feel comfortable thinking in terms of expected value and reasoning quantitatively about tradeoffs and evidence.
- Will bring pragmatism and can-do energy to a role where we often need to make quick, imperfect decisions with the information we have.
- Are able to take ownership of poorly scoped projects and think from first principles about how to accomplish their goals (or notice that the goals themselves should change).
- Are a flexible thinker, and able to quickly absorb and adapt to new information. The team is working in the context of a quickly evolving AI governance landscape, and priorities and projects may change rapidly as a result.
- Communicate in a clear, information-dense and calibrated way, both in writing and in person. The ability to write quickly, cleanly and with good reasoning transparency will be particularly important, as the team shares ideas and feedback in writing by default.
- Are broadly sympathetic to the basic goals, premises, and uncertainties that inform our work on this program (while perhaps disagreeing with us on many details), such as those articulated in Our AI governance grantmaking so far, Most Important Century, AI could defeat…, and Without specific countermeasures…
Other details
- Location: This is a full-time, permanent position with flexible work hours and location. Occasional travel to San Francisco and/or Washington, D.C. may be required, and you will generally need to be available for calls during most of the U.S. workday.
- We are happy to consider candidates based outside of the U.S., and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The starting compensation for this role will be $143,658.71 per year, which would include a base salary of $124,920.62 and an unconditional 401(k) grant of $18,738.09.
- These compensation figures assume a remote location; there would be geographic adjustments upwards for candidates based in the San Francisco Bay Area or Washington, D.C.
- All compensation will be distributed in the form of take-home salary for internationally based hires.
- Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.
5. Technical AI Safety
This program sits under our broader focus area of Potential Risks from Advanced Artificial Intelligence. The team aims to support technical research that we think would reduce catastrophic risks from advanced artificial intelligence.
5.1 Research Associate / Senior Research Associate and Program Associate / Senior Program Associate (Generalist), Technical AI Safety
About the role
The responsibilities for the (Senior) Research Associate position and the (Senior) Program Associate position largely overlap. You would report to Ajeya Cotra and support her with key responsibilities relating to technical AI safety. These responsibilities fall on a spectrum, with (Senior) Program Associates focusing more on grantmaking and (Senior) Research Associates focusing more on analysis. While our ideal candidate would be able to make meaningful contributions in both areas, we’re open to candidates who lean strongly towards one side or the other and may hire multiple candidates to ensure that different bases are covered.
The core function of the (Senior) Program Associate role is to make grants to technical researchers to work on research projects that we think would reduce catastrophic risks from AI. Adding extra capacity to this team (which currently consists of 2 FTEs) could significantly increase the quantity and quality of grants we’re able to make. For instance, we would like to open up new inbound funding request forms like a PhD funding application form or RFPs for particular subfields of AI research, but we have inadequate grantmaker capacity to process the volume of responses such forms would receive. Some central tasks of the role could include:
- Evaluating grant applications that come in through our intake forms.
- Networking with AI safety researchers to find potential grantees.
- Explaining our AI risk worldview and our priorities within AI research to potential grantees.
- Designing intake forms and evaluation processes for grants.
- Sharing feedback with grantees, both in writing and conversation.
- Managing relationships with grantees, e.g. responding to their questions or dealing with any sensitive issues that come up.
The core function of the (Senior) Research Associate role is to identify which research directions and projects we should fund within technical safety and why, and to judge how successful different research directions and major projects have been so far. We believe this could help the team by identifying new categories of promising grants. Some central tasks of the role could include:
- Digging into academic literatures and sorting out which work is more or less relevant and promising for our interests.
- Speaking to technical researchers in the field about their models of what’s helpful and why, and about their ‘project wishlists’.
- Thinking creatively about which novel research directions and projects our AI threat models imply would be helpful, and why.
- Drafting ‘house takes’ on different research areas and key upstream questions.
- Drafting requests for proposals based on this thinking.
- Stack-ranking research outputs, research groups and research areas according to their expected contribution to reducing AI risk.
Who might be a good fit
You might be a good fit for the (Senior) Program Associate role if you:
- Have a good grip on the arguments for catastrophic risks from AI and fluency in the arguments for and against particular technical research directions being helpful, and can convey and update these views through conversation with others.
- Have good grantmaking judgment: you can distinguish and focus on the most important considerations, have good instincts about where to do due diligence and where to focus on efficiency, and form reasonable holistic perspectives on people and organizations.
- Are conscientious and well-organized.
- Are comfortable in technical conversations with researchers who are potential or current grantees.
- Have good written communication skills (you’ll need to produce internal grant writeups, and you may also draft public blog posts).
You might be a good fit for the (Senior) Research Associate role if you:
- Have strong technical competence and understanding of ML, deep understanding of AI threat models and the ability to continually improve that understanding through reading, thinking, and conversation, the ability to read technical papers and extract key insights, and the ability to follow a technical conversation into the weeds and pull out what matters.
- Have sufficiently strong writing skills to produce both internal reports that help Ajeya efficiently understand key concepts and public-facing technical reports.
- Ideally, have the ability to elaborate on AI threat models themselves and/or come up with informative research directions or experiment ideas based on threat models that aren’t well-explored in the literature.
Other details
- These are full-time, permanent positions with flexible work hours and location. Our ideal candidate would be based in the San Francisco Bay Area, with Boston as our second preference, but we are open to hiring strong candidates on a full-time remote basis.
- We are happy to consider candidates based outside of the U.S., and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- The starting compensation for these roles will be based on a range of $149,204.33 to $189,091.51 per year, which would include a base salary of $129,742.89 to $169,091.51 and an unconditional 401(k) grant of $19,461.43 to $20,000.00.
- These compensation figures assume a San Francisco Bay Area location, and as such include an upward geographic adjustment; compensation would be slightly lower for candidates based elsewhere.
- All compensation will be distributed in the form of take-home salary for internationally based hires.
- We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.
5.2 Senior Program Associate (specializing in a subfield), Technical AI Safety
About the role
This role is similar to that of a Senior Program Associate (Generalist), but focused on a particular subfield of technical research. The core function of the role is to make grants to technical researchers to work on research projects within a particular subfield that we think would reduce catastrophic risks from AI.
Examples of subfields that a Senior Program Associate could specialize in include:
- Interpretability, i.e. research like this, this, or this that sheds light on the internal mechanisms explaining model behaviors.
- Adversarial attacks and defenses, i.e. research like this, this, or this that reveals and studies rare failure modes of AI systems.
- Theoretical AI alignment research, i.e. research like this or this that aims to formally describe and address misalignment.
This list of subfields is not meant to be exhaustive. We are also open to applicants proposing another subfield they would like to specialize in; over the course of the application process, we expect to ask such candidates to make the case that their chosen subfield contains promising research directions that could reduce catastrophic risks from AI.
Who might be a good fit
You might be a good fit if you:
- Have an interest in reducing catastrophic risks from AI.
- Have a strong understanding of your chosen subfield and an ability to make arguments for and against particular research projects in that subfield.
- Have good grantmaking judgment: you can distinguish between important and unimportant considerations, focus naturally on what’s important, have good instincts about where to do due diligence and where to focus on efficiency, and form reasonable holistic perspectives on people and organizations.
- Are conscientious and well-organized.
- Are comfortable in technical conversations with researchers in your chosen subfield who are potential or current grantees.
- Have good written communication skills (you’ll need to produce internal grant writeups, and you may also draft public blog posts).
Other details
- Location: This is a full-time, permanent position with flexible work hours and location. Our ideal candidate would be based in the San Francisco Bay Area, with Boston as our second preference, but we are open to hiring strong candidates on a full-time remote basis.
- We are happy to consider candidates based outside of the U.S., and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The starting compensation for this role will be $189,091.51 per year, which would include a base salary of $169,091.51 and an unconditional 401(k) grant of $20,000.00.
- These compensation figures assume a San Francisco Bay Area location, and as such include an upward geographic adjustment; compensation would be slightly lower for candidates based elsewhere.
- All compensation will be distributed in the form of take-home salary for internationally based hires.
- Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.
5.3 Program Operations Associate, Technical AI Safety
About the role
The Program Operations Associate will work to set up systems and solve operational bottlenecks for the Technical AI Safety team so that they can reduce turnaround times and make more grants with the same amount of grantmaking staff capacity.
Your core responsibilities would include:
- Building and maintaining systems for sourcing and evaluating grants, submitting them for internal approval, and handing them off to the grants logistics team. This includes building and adding to a database of current, past, and potential grantees; building application forms in Airtable or a similar program; organizing information related to a grant (e.g. email threads, Drive folders, conversation notes and recordings, or budget spreadsheets); tracking where each live grant is in the pipeline and reminding grantees and grant investigators about relevant deadlines; connecting grantees to our grants logistics team and working with them to answer any questions they have; etc.
- Helping manage a large volume of inbound emails from grantees and other external parties, while handling confidential and potentially sensitive information carefully and professionally. As you gain context on the team’s work, you will increasingly use your own judgment to prioritize and respond to messages on behalf of others.
- Becoming familiar with the team’s key administrative constraints and proactively coming up with solutions for them (e.g. proposing a change to the team’s scheduling or application-processing software, or automating a piece of repetitive work).
- Doing ad hoc research or organizational tasks as they come up (e.g. proofreading documents, working with a contractor to format research reports into LaTeX or build a website, helping to organize an event, designing and sending out a feedback survey, or working with the grants team or legal team to handle a grantee’s request). This includes occasionally providing executive assistant-style support, such as scheduling meetings and booking travel.
In general, it’s hard to predict everything you might be asked to tackle in this role. We’re a rapidly growing organization, and we expect all staff to be flexible about what they work on and put contributing to our mission first (this could mean supporting organizational functions or staff members in other GCR teams as needed, for up to half of your time).
Over time, a strong performer could grow into a more broadly scoped role supporting us with operationally demanding projects.
Who might be a good fit
You might be a good fit for this work if you:
- Share our mission of reducing catastrophic risks from advanced artificial intelligence, and are familiar with, and interested in, effective altruism, longtermism, existential risk, or related ideas.
- Have excellent project management, organization and prioritization skills, with the ability to anticipate and avert problems and simplify complexity while working in a dynamic, evolving environment.
- Are detail-oriented and conscientious, with a track record of rapidly executing on your priorities without sacrificing quality.
- Are motivated by the idea of doing whatever will be most helpful for Ajeya and the team, even when it’s not glamorous or involves some amount of repetition.
- Are a strong communicator in person and in writing. You convey information and decision processes effectively, and tend towards an information-dense, transparent communication style.
- Are eager to learn and not afraid to ask questions to check your understanding.
Other details
- Location: We strongly prefer hires to be based in the San Francisco Bay Area.
- We’ll cover the costs of relocation to the Bay Area for the candidate we hire.
- Generally speaking, we are not able to sponsor visas for this role, and therefore require candidates to have current US work authorization. However, in exceptional circumstances, we may be able to sponsor a visa application.
- Compensation: The starting compensation for this role will be $115,963.80, which would be distributed as a base salary of $100,838.08 and an unconditional 401(k) grant of $15,125.71 for U.S. hires.
- These compensation figures assume a location in the San Francisco Bay Area, and as such include an upward geographic adjustment; compensation would be slightly lower for candidates based elsewhere.
- Start date: We’d like a candidate to start as soon as possible after receiving an offer, though there is some flexibility.
6. Biosecurity and Pandemic Preparedness
The Biosecurity and Pandemic Preparedness (BPP) team works to reduce severe risks from biology, whether from natural pandemics or the abuse of emerging biotechnologies. We have recommended over $200 million in grants to date, informed by internal prioritization research meant to target funding to the most impactful areas. Historically, we’ve funded diverse activities such as travel grants to the 9th Review Conference of the Biological Weapons Convention, the technical development of gene synthesis screening solutions, and advocacy for government investment in the development of advanced personal protective equipment.
6.1 Security Associate/Lead
About the role
The Security Associate/Lead would be responsible for addressing our most pressing security needs, thus enabling our core grantmaking and research activities. We currently face significant information and operational security bottlenecks, related to both improving existing capabilities and building new capabilities. The Security Associate/Lead would be in charge of iteratively creating and refining solutions in consultation with team members.
In this role, your work will fall into some of these categories:
- Providing secure IT systems, or coordinating the provision thereof
- Piloting, iterating, and finalizing solutions to security pain points
- Responding to security-related incidents and improving systems in response
- Communicating solutions to team members and collaboratively troubleshooting issues
Who might be a good fit
You might be a good fit for this work if you:
- Share our mission of preventing global biological catastrophes.
- Approach systems and solutions with a “security mindset,” with a constant eye to recognizing and managing risks.
- Have strong project management skills.
- Recognize when “perfect” risks becoming the enemy of “good enough,” while understanding what “good enough” actually means – especially in navigating situations with competing priorities / tradeoffs.
- Approach problems in a “user-focused” way, prepared to iterate on solutions with team members.
- Learn quickly, especially in areas where you aren’t already an expert.
- Ideally have experience in information security roles, though this is not required.
- Ideally are capable of navigating complex IT systems.
Other details
- Location: This role would be based in Washington, D.C., and may from time to time require travel to other locations.
- Generally speaking, we are not able to sponsor visas for this role, and therefore require candidates to have current US work authorization. However, in exceptional circumstances, we may be able to sponsor a visa application (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The starting compensation for this role will be based on a range of $114,245.81 to $145,934.82, which would be distributed as a base salary of $99,344.19 to $126,899.84 and an unconditional 401(k) grant of $14,901.63 to $19,034.98 for U.S. hires.
- These compensation figures assume a location in Washington, D.C.
- Start date: We’d like a candidate to start as soon as possible after receiving an offer, though there is some flexibility.
6.2 Operations Associate/Lead
About the role
The Operations Associate/Lead will be responsible for helping us move with agility toward our research and grantmaking goals, which means solving our most pressing operational challenges, as well as ensuring that our day-to-day processes run smoothly.
In this role, your work will fall into some of these categories:
- Office management. As a principal area of responsibility, you will own setting up a BPP team office in Washington, DC, managing its day-to-day operations, and making iterative improvements to ensure the team’s work is productive. This will include strategic problem-solving, managing contractors in the provision of relevant services (e.g., routine office maintenance / operations), and direct hands-on work relating to the office (e.g., procuring supplies and equipment). This work may also extend to other office spaces in other locations in the future.
- Process management. You will establish, maintain, and iterate on team processes, which may include processes related to information security, internal knowledge management, task management, performance tracking, ensuring legal compliance, hiring, and onboarding.
- Reactive logistical support. You will be responsible for responding to ad hoc operational needs, e.g., organizing space for a one-off 15-person meeting in a European city or shipping a package across the country within 12 hours. Some tasks in this category may have short turnaround times and require effort outside of normal working hours.
- Whatever else is needed to raise the team’s effectiveness. We are looking for a self-starter who is eager to support us in working as effectively as possible. Ideally, the person hired into this role will continually spot new opportunities for improving our team’s output, and will work creatively and independently to address unmet needs.
Overall, the Operations Associate/Lead will be expected to own a wide variety of tasks – including many in which they are unlikely to have expertise – and move quickly toward solutions, even amid high uncertainty. The role will demand both strategic ownership and high attention to detail.
Who might be a good fit
You might be a good fit for this work if you:
- Share our mission of preventing global biological catastrophes.
- Work in a responsive, agile way, stay calm when conditions change, and are able to improvise and pivot quickly.
- Pride yourself on how organized and diligent you are.
- Have a track record of managing complex projects, such as teams of employees or volunteers; facilities; long-term efforts; or events.
- Recognize when “perfect” risks becoming the enemy of “good enough,” while understanding what “good enough” actually means – especially in navigating situations with competing priorities.
- Approach problems in a “user-focused” way, prepared to iterate on solutions with team members.
- Approach systems and solutions with a “security mindset,” with a constant eye to recognizing and managing risks.
- Are excited to handle ~anything that needs to get done, since it’s impossible to anticipate everything that might be needed from this role.
Other details
- Location: This role would be based in Washington, D.C., and may from time to time require travel to other locations.
- Generally speaking, we are not able to sponsor visas for this role, and therefore require candidates to have current US work authorization. However, in exceptional circumstances, we may be able to sponsor a visa application (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The starting compensation for this role will be based on a range of $114,245.81 to $145,934.82, which would be distributed as a base salary of $99,344.19 to $126,899.84 and an unconditional 401(k) grant of $14,901.63 to $19,034.98 for U.S. hires.
- These compensation figures assume a location in Washington, D.C.
- Start date: We’d like a candidate to start as soon as possible after receiving an offer, though there is some flexibility.
- We may hire a candidate on a full-time, permanent basis or on a contract basis, depending on our needs at the time of hiring.
6.3 Executive Assistant
About the role
The Executive Assistant will provide direct time-saving support to Senior Program Officer Andrew Snyder-Beattie and solve other bottlenecks for the team as a whole.
Your core responsibilities would include:
- Providing executive assistant-style support to Senior Program Officer Andrew Snyder-Beattie.
- Performing ongoing administrative and logistical tasks, such as scheduling meetings, submitting expense reports, and booking travel.
- Doing ad hoc research or organizational tasks (e.g. proofreading documents, downloading and organizing data in a spreadsheet, etc.).
- Helping manage a large volume of inbound emails from grantees and other external parties. As you gain context on the team’s work, you will increasingly use your own judgment to prioritize and respond to messages on behalf of others.
- Managing and organizing team documents.
- Handling confidential and potentially sensitive information carefully and professionally.
- Handling some personal tasks for members of the team, such as managing personal calendars and scheduling household appointments.
- Becoming familiar with the team’s key administrative constraints and proactively coming up with solutions for them (e.g. proposing a change to the team’s scheduling software, or automating a piece of repetitive work).
In general, it’s hard to predict everything you might be asked to tackle in this role. We’re a rapidly growing organization, and we expect all staff to be flexible about what they work on and put contributing to our mission first (this could mean supporting organizational functions or staff members in other GCR teams as needed, for up to half of your time).
Over time, a strong performer could move into a generalist or specialist role on our operations team, or grow into a more broadly scoped role supporting us with operationally demanding projects.
Who might be a good fit
You might be a good fit for this work if you:
- Share our mission of preventing global biological catastrophes.
- Have excellent project management, organization and prioritization skills, with the ability to anticipate and avert problems and simplify complexity while working in a dynamic, evolving environment.
- Are detail-oriented and conscientious. You have a track record of rapidly executing on your priorities without sacrificing quality.
- Are service-minded and comfortable with some amount of repetition in your work; you are motivated by the idea of doing whatever will have the most impact, even when it’s not glamorous.
- Are a strong communicator in person and in writing. You convey information and decision processes effectively, and tend towards an information-dense, transparent communication style.
- Are eager to learn and not afraid to ask questions to check your understanding.
- Are familiar with, and interested in, effective altruism, longtermism, biosecurity and pandemic preparedness, or related ideas.
Other details
- Location: This role would ideally be based in Washington, D.C., with Boston as a second preference location, but could also be carried out remotely. Regardless of primary location, it may from time to time require travel to other locations.
- Generally speaking, we are not able to sponsor visas for this role, and therefore require candidates to have current US work authorization. However, in exceptional circumstances, we may be able to sponsor a visa application.
- Compensation: The starting compensation for this role will be $95,537.98, which would be distributed as a base salary of $83,076.50 and an unconditional 401(k) grant of $12,461.48 for U.S. hires.
- These compensation figures assume a location in Washington, D.C.
- Start date: We’d like a candidate to start as soon as possible after receiving an offer, though there is some flexibility.
6.4 Research Associate / Fellow or Lead Researcher
About the role
To effectively make and evaluate grants, we need to understand both the drivers of risk and potential interventions. As a researcher on the BPP team, your job will be to conduct and/or oversee quantitative research in these areas and help set our strategy for reducing the most severe pandemic risks.
This research could involve investigating wide-ranging topics, from the molecular underpinnings of medical countermeasures to the history of state biological weapons programs. As such, there’s no specific discipline an ideal candidate would come from. We’re looking for someone who is curiosity-driven and excited to tackle problems and engage with experts across unfamiliar fields.
We pride ourselves on doing agile, quantitative research that is truly action-guiding, as opposed to merely “research for research’s sake.” Insights from our research have the potential to inspire significant new directions in our grantmaking.
We are looking for candidates with strong backgrounds in the natural sciences (for example, a completed PhD in a technical discipline). Depending on the qualities of a given hire, the role may involve a greater or lesser degree of research management, i.e., delegating components of a research question to contractors and employees and synthesizing the results into a coherent answer.
Note that we have a high bar for hiring for this role, and think that there is only a ~20% chance that we will end up making a hire.
Who might be a good fit
Across both roles, you might be a good fit if you:
- Share our mission of preventing global biological catastrophes.
- Have a strong scientific background. We would be particularly excited by candidates with a background in molecular biology or adjacent fields, but think competitive candidates could have a more diverse range of backgrounds, including in physics, chemistry, medical sciences, or engineering research. A relevant PhD is useful but not required.
- Are quantitatively minded. You feel comfortable doing back-of-the-envelope calculations, thinking probabilistically, and doing basic data analysis, in order to come to quantitative conclusions. You care about getting the numbers right and are aware of the limitations of your results.
- Can work pragmatically to answer the question at hand. You are happy to dive into the details of data sets and do the hard work of getting numbers out, and to draw on work from a diverse range of disciplines and sources. You are also able to recognize when approaches to a problem are unlikely to be fruitful, or unlikely to yield decision-relevant results, and to change your approach accordingly.
- Are a good communicator. You can explain your research findings to others (such as teammates and scientific experts) clearly and transparently, including your degree of confidence in the results and any major remaining uncertainties.
- Have experience managing complex research projects (desirable but not necessary).
Other details
- Location: This role would preferably be primarily in-person in Washington, D.C. While substantial remote work is also possible, a significant fraction of time in D.C. would be required. Regardless of primary location, the role may from time to time require travel to other locations.
- We are happy to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The starting compensation will vary depending on the seniority of the hire and be based on a range of $122,364.26 to $189,567.63, which would be distributed as a base salary of $106,403.71 to $169,567.63 and an unconditional 401(k) grant of $15,960.56 to $20,000.00 for U.S. hires.
- These compensation figures assume a Washington, D.C. location; there would be cost-of-living adjustments downward for other locations (or upward for the San Francisco Bay Area).
- Start date: We’d like a candidate to start as soon as possible after receiving an offer, though there is some flexibility.
6.5 Program Associate or Senior Program Associate
About the role
You would increase our capacity to make ambitious grants in the biosecurity space. You would join to work in one of three areas: as a Generalist Grantmaker, a Community Manager / Grantmaker, or a Life Sciences Governance Grantmaker, all of which are listed as separate options in the application form. We have a high bar for all of these roles and think that there is only a ~20% chance we will end up making a hire for any given one of them.
The Generalist Grantmaker would seek to make grants across all areas in the biosecurity space. We currently have to turn down many potentially strong grant proposals due to lack of capacity to investigate, and having another generalist on the team would help resolve that bottleneck.
The Community Manager / Grantmaker would make grants and undertake other activities aimed at expanding and empowering the community of current and aspiring biosecurity professionals focused on reducing catastrophic risks. Talent development in this area has historically been a highly impactful grantmaking area for the team, and this grantmaker would substantially increase our capacity here.
The Life Sciences Governance Grantmaker would make grants aimed at managing access to technologies and resources susceptible to misuse, e.g. gene synthesis screening, the governance of relevant AI models, regulations on dual-use research of concern, or know-your-customer regulations for cloud laboratories. We see this work as one of the most important biosecurity priorities and want to expand our capacity to make grants of this kind.
Who might be a good fit
You might be a good fit if you:
- Share our mission of preventing global biological catastrophes.
- Have strong analytical and critical thinking skills, especially the ability to quickly grasp complex issues, find the best arguments for and against a proposition, and skeptically evaluate claims. You should also feel comfortable thinking in terms of expected value and reasoning quantitatively about tradeoffs and evidence.
- Will bring pragmatism and can-do energy to a role where we often need to make quick, imperfect decisions with the information we have.
- Are able to take ownership of poorly scoped projects and think from first principles about how to accomplish their goals (or notice that the goals themselves should change).
- Are a flexible thinker, and able to quickly absorb and adapt to new information.
- Communicate in a clear, information-dense and calibrated way, both in writing and in person. The ability to write quickly, cleanly and with good reasoning transparency will be particularly important.
Other details
- Location: This role would preferably be primarily in-person in Washington, D.C. While substantial remote work is also possible, a significant fraction of time in D.C. would be required. Regardless of primary location, the role may from time to time require travel to other locations.
- We are happy to consider candidates based outside of the U.S., and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The starting compensation for this role will be based on a range of $122,364.26 to $151,734.60, which would be distributed as a base salary of $106,403.71 to $131,943.13 and an unconditional 401(k) grant of $15,960.56 to $19,791.47 for U.S. hires.
- These compensation figures assume a Washington, D.C. location; there would be cost-of-living adjustments downward for other locations.
- Start date: We’d like a candidate to start as soon as possible after receiving an offer, though there is some flexibility.
6.6 “Cause X” Contractor
About the role
You would conduct research into possible catastrophic risks that lie outside our current focus areas of AI and biosecurity. This could include risks from nuclear weapons, nanotechnology, climate change, or other areas that we aren’t currently considering.
Who might be a good fit
- For nuclear weapons, a background in physics and/or technology would be useful.
- For nanotechnology, a background in physics or chemistry would be useful.
- For climate change, a background in climate science would be useful, as would experience studying topics such as nuclear winter or geoengineering.
Other details
- Location: This role could be based anywhere.
- Compensation: This would be determined at an hourly rate mutually agreed upon by Open Philanthropy and the prospective contractor.
- Start date: Flexible.
7. Global Catastrophic Risks Capacity Building
Open Phil’s Global Catastrophic Risks (GCR) Capacity Building team[1] works to increase the amount of attention and resources put towards problems that threaten to eliminate or drastically worsen the future of sentient life, such as the possibility of an existential catastrophe this century. Open Phil’s other GCR focus areas aim to make direct progress on those problems. We share these goals, but don’t work on them directly: instead, we focus on ensuring these problems get the resources they need.
We think that one of the most important constraints on addressing these problems is the limited set of people who have the motivation, context, and skills necessary to contribute, relative to what is available in more mature fields. We want to subsidize and nurture the growth of nascent fields — particularly AI safety, because of our concern that transformative AI might be developed in a small number of years or decades. Consequently, many of the projects our team supports aim to communicate important ideas about existential risks, and to equip people to contribute to reducing those risks by connecting them with opportunities to acquire skills, professional relationships, and strategically useful information.
In 2022, we directed over $100 million across hundreds of grants — more than double the previous year’s funding. We make a wide range of grants, from larger grants to established projects like 80,000 Hours, LessWrong/Lightcone Infrastructure, and the Centre for Effective Altruism to hundreds of smaller grants to support individual career transitions; a few recent examples include student groups focused on technical alignment research at Harvard and MIT, online courses in AI safety and biosecurity accelerating hundreds of participants at various stages of their careers, and scholarship programs aimed at supporting careers in reducing global catastrophic risks.
7.1 Program Operations Associate/Lead
About the role
This role is focused on operations within our team, rather than operations at Open Phil generally. In this role, you would help maintain, evaluate, and improve our existing programs. You would add generalist operational capacity to the team, making it run more efficiently and freeing up other team members’ time to focus on investigating new grants and projects. You’d also help to ensure that grantee applications are managed efficiently. Program Operations leads would take on a more complex leadership role, described below.
Examples of tasks a Program Operations Associate might take on:
- Assisting with existing programs by e.g. vetting applicants, answering candidates’ questions about applications, and promoting open calls on social media.
- Keeping resources organized by e.g. taking notes based on check-in call recordings or connecting grantees with contacts of ours who can assist them in various ways (like tax lawyers).
- Helping with other administrative and logistical tasks, such as organizing events or arranging travel for program-related visits.
While we are looking for candidates who would be happy doing this role as currently scoped for at least 2 years, we anticipate that a strong hire will take on more autonomy and responsibility over time.
As a Program Operations Lead, you would take on more complex responsibilities, such as:
- Spinning off some grantmaking areas into self-contained programs or organizations. For example, you might be in charge of spinning off an independent organization that would specialize in grantmaking to support the translation of key content, alongside a grantee who was interested in leading such an organization.
- Helping those programs establish their own grantmaking teams and systems.
- Restructuring our survey, first run in 2020, of how people working on promising x-risk reduction projects got into their work, and administering and sharing results from this on an ongoing basis.
- This survey was an important input for program strategy and is among our most important evaluation tools. So far, we’ve run the survey every few years, but we think there could be substantial value in switching to a model where we survey people on an ongoing basis. This could be easier and would enable us to learn more quickly about projects with promising results, so that we can fund them and give them feedback that they are getting traction (or reduce funding for less promising projects more quickly, and give those grantees feedback that it might be a better use of their time to move on).
- Organizing events like retreats for certain subsets of grantees, other stakeholders, or our team.
- Supporting promising but inexperienced grantees as they scale up their own operations, by helping identify weaknesses, providing suggestions for improvement, and connecting them with high-quality outside services.
- Managing other staff focused on improving program operations.
Who might be a good fit
You might be an especially strong fit for this role if you are:
- Detail-oriented, organized, and conscientious. For the senior version of the role, you feel confident in your ability to keep on top of a large number of open items and next steps while making sure nothing gets dropped. You have strong project management skills.
- Optimization-focused; you constantly seek out improvements and question the status quo in pursuit of better solutions or systems. For more senior versions of the role, you are inclined to take full responsibility and ownership of the outcome of a task, including poorly-scoped tasks that require first-principles thinking.
- Flexible and creative about problem-solving. You approach tasks in a “user-focused” way, and are prepared to iterate on solutions with team members.
- Adept at communicating in a clear, information-dense and calibrated way, with good reasoning transparency, both in writing and in person. You should be willing to ask questions if you are confused and push back on conclusions you don’t understand or disagree with.
- Service-minded and comfortable with some amount of repetition in your work; motivated by the idea of doing whatever will have the most impact, even when it’s not glamorous.
- Comfortable using spreadsheets and Airtable, or willing to learn to use them effectively.
Other details
- Location: We strongly prefer hires to be based in the San Francisco Bay Area.
- We’ll support candidates with the costs of relocation to the Bay.
- We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The starting compensation for this role will be based on a range of $115,963.80 to $148,129.33 per year, which would include a base salary of $100,838.08 to $128,808.11 and an unconditional 401(k) grant of $15,125.71 to $19,321.22.
- These compensation figures assume you’d be working from the San Francisco Bay Area; there would be small cost-of-living adjustments downward for other locations.
- Start date: The start date is flexible, and we may be willing to wait for an extended period of time for the best candidate, though we’d prefer someone to start as soon as possible after receiving an offer.
7.2 Chief of Staff
About the role
You would report to Program Director Claire Zabel and work closely with her to lead key projects, accelerate the GCR Capacity Building team’s progress on strategic priorities, and help the team stay organized and focused.
Some other tasks that you would do early on might include:
- Tracking and implementing smaller action items, like helping an important contact get set up for a visit to our office.
- Setting up our survey, first run in 2020, of how people working on promising GCR reduction projects got into their work, and administering and sharing results from this on an ongoing basis.
- This survey was an important input for program strategy and is among our most important evaluation tools. So far, we’ve run the survey every few years, but we think there could be substantial value from switching to a model where we survey people on an ongoing basis. This could be easier and would enable us to learn more quickly about projects with promising results so that we can fund them and give them feedback that they are getting traction.
Examples of ways you might help the team stay organized and focused, which we would expect to take a minority of your time:
- Developing meeting agendas, presentations, and briefing documents
- Owning follow-up tasks from the meetings you join
- Identifying process improvements and shortcuts to save time
- Overseeing an executive assistant to coordinate events (e.g. a dinner with grantees)
In the longer run, you would have an important role in improving and evaluating our existing programs, so that we can evaluate grants more efficiently and ultimately increase the quantity and quality of our grantmaking. Examples of this kind of work could include:
- Sharing our revamped Career Development and Transition Funding Program with people who could become leaders at priority projects, so that we can fund them, make relevant introductions, and help them transition to this kind of work. If you did this well, you could increase the likelihood that team leads at top labs and policy organizations are people who have the relevant skills and experience to help with projects related to averting existential catastrophes.
- Revamping our university organizer fellowship. In our survey, university groups were among the two most common ways (accounting for ~9–23%) that respondents entered promising existential risk reduction career paths. We don’t have the capacity to offer much support besides funding right now, and the University Group Accelerator Program at the Centre for Effective Altruism generally only covers EA groups (not AI groups) and doesn’t have a great network in AI fields. You could run retreats and matchmaking programs to form a strong network of people doing this work and to support new organizers at the most important schools.
- Headhunting people to lead new programs spun out of our current projects, like the university organizer fellowship or our program supporting the translation of writings that have inspired people to seek out existential risk-focused career paths. We think these kinds of programs would generally do better as focused organizations, and we’d like to ensure we find strong teams to carry out that work.
Who might be a good fit
You might be a good fit for this work if you:
- Are optimization-focused; you constantly and creatively seek out improvements and question the status quo in pursuit of better solutions or systems, and are inclined to take full responsibility and ownership of the outcome of a task, potentially including poorly-scoped tasks that require first-principles thinking. You approach tasks in a “user-focused” way, and are prepared to iterate on solutions with team members.
- Are detail-oriented, organized, and conscientious. You feel confident in your ability to keep on top of a large number of open items and next steps while making sure nothing gets dropped. You have strong project management skills.
- Share our goals related to reducing existential risk, and are excited about helping other people pursue those goals. You are interested in our basic premise that capacity-building work can lead more people with diverse and useful skillsets to pursue high-priority work that reduces existential risk.
- Are comfortable using spreadsheets and Airtable, or are willing to learn to use them effectively.
- Have strong interpersonal and communications skills, and are able to represent the team with both external and internal stakeholders. You communicate internally in a clear, information-dense and calibrated way, with good reasoning transparency, both in writing and in person.
Other details
- Location: This role would be based in the San Francisco Bay Area.
- We’ll support candidates with the costs of relocation to the Bay.
- We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The baseline compensation for this role is $143,658.71, which would be distributed as a base salary of $124,920.62 and an annual unconditional 401(k) grant of $18,738.09 for U.S. hires.
- These compensation figures assume you’d be working from the San Francisco Bay Area; there would be cost-of-living adjustments downward for other locations.
- Start date: The start date is flexible, and we may be willing to wait for an extended period of time for the best candidate, though we’d prefer someone to start as soon as possible after receiving an offer.
7.3 Program Associate / Senior Program Associate (Generalist)
About the role
Program Associates help drive the program’s goals forward, often by investigating funding opportunities and developing ideas that lead to grants. After their first year, people in this role can expect to become significantly responsible for grants totaling >$10M per year (or more, for Senior Program Associates), or for non-grantmaking projects (such as research and evaluation work) that seem to us to have similar expected value.
In this role, you might:
- Investigate grant opportunities. Essentially, these are focused, practical research projects aimed at answering the question “should Open Philanthropy fund this person or project, or not (and if so, at what level, for what length of time, etc.)?” You’ll work to evaluate opportunities we encounter and to identify new opportunities, and then to analyze how well these advance our priorities and what other effects they might have on the talent pipelines we are trying to support. You may be in charge of following up on the progress of these grantees.
- Design, implement, and advertise new grantmaking initiatives that involve making funding available for a particular class of activity (e.g. our scholarship for international undergraduates and our course development grants program). These are programs designed to support large clusters of high-priority grants with structural similarities, both to increase the number of such grants we make (e.g. by making it apparent that we are keen to support such work) and to increase the efficiency with which we make them (e.g. by having a standard formula for determining the size of a potential travel stipend, so that it doesn’t need to be considered anew each time; a purely hypothetical sketch of such a formula appears after this list).
- Build and maintain relationships in the field, and facilitate the exchange of feedback between us and our grantees and other stakeholders.
- Do research to inform program strategy, and help evaluate existing grantmaking initiatives, e.g. by assessing cost-effectiveness.
- Conduct and communicate research on the key bottlenecks and gaps in the existing existential risk talent pipelines, both internally within the team and externally with other stakeholders. “Pitch” potential candidates on founding new priority projects, by explaining why we believe these projects to be high priority and a good fit for the candidate in question.
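To make the travel-stipend example above concrete, here is a deliberately simple, hypothetical sketch of what a standard formula could look like. The tiers, rates, and function name are placeholders invented for illustration; they are not an actual Open Philanthropy policy.

```python
# Hypothetical illustration only: made-up tiers and rates, shown to convey
# how a fixed formula removes case-by-case judgment from stipend sizing.

def travel_stipend_usd(round_trip_miles: float, nights: int) -> int:
    """Flat airfare tier by distance plus a per-night lodging allowance."""
    if round_trip_miles < 500:
        airfare = 200       # short regional trip
    elif round_trip_miles < 3000:
        airfare = 500       # domestic flight
    else:
        airfare = 1200      # long-haul / international
    lodging = 150 * nights  # flat nightly allowance
    return airfare + lodging


# Example: a grantee traveling ~2,000 miles round trip and staying 3 nights.
print(travel_stipend_usd(2000, 3))  # -> 950
```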
Who might be a good fit
You might be a particularly good fit for this role if you:
- Share our goals related to reducing existential risk, and are excited about helping other people pursue those goals. You’re interested in our basic premise that capacity-building work can lead more people with diverse and useful skillsets to pursue high-priority work that reduces existential risk.
- Have strong analytical and critical thinking skills, especially the ability to quickly grasp complex issues, find the best arguments for and against a proposition, and skeptically evaluate claims. You should feel comfortable thinking in terms of expected value and reasoning quantitatively and probabilistically about tradeoffs and evidence.
- Are interested in getting to know different people and projects, and evaluating their strengths and weaknesses, like an investor would.
- Are resourceful and creative, and enjoy thinking about unorthodox ways to build strong fields and communities.
- Are optimization-focused; you constantly seek out improvements and question the status quo in pursuit of better solutions or systems. For more senior versions of the role, you are inclined to take full responsibility and ownership over the outcome of a task, including poorly-scoped tasks that require first-principles thinking.
- Communicate in a clear, information-dense and calibrated way, with good reasoning transparency, both in writing and in person. You should be willing to ask questions if you are confused and push back on conclusions you disagree with or don’t understand.
- For more junior versions of the role, are service-minded and comfortable with some amount of repetition in your work; you are motivated by the idea of doing whatever will have the most impact, even when it’s not glamorous.
Other details
- Location: We strongly prefer hires to be based in the San Francisco Bay Area, but may be willing to consider exceptional candidates in other places.
- We’ll support candidates with the costs of relocation to the Bay.
- We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The starting compensation for this role will be based on a range of $124,204.33 to $154,091.51 per year, which would include a base salary of $108,003.76 to $134,091.51 and an unconditional 401(k) grant of $16,200.56 to $20,000.00.
- These compensation figures assume you’d be working from the San Francisco Bay Area; there would be cost-of-living adjustments downward for other locations.
- Start date: The start date is flexible, and we may be willing to wait for an extended period of time for the best candidate, though we’d prefer someone to start as soon as possible after receiving an offer.
7.4 Program Associate / Senior Program Associate, AI Safety Capacity-Building
About the role
In this role, you would do similar work to that described for the Program Associate / Senior Program Associate (Generalist) role above, but with a particular focus on opportunities that make it easier for people to find and skill up for high-priority roles in reducing existential risk from AI.
We handle a large and increasing number of grant applications in AI safety-specific capacity-building, both on the technical and governance sides of the problem. While Open Phil has teams focused on technical AI safety and AI governance, those teams have historically made mostly object-level grants, intended to directly make progress on reducing those risks, rather than helping people become informed about and well-equipped to do helpful work.
We think this role is a highly leveraged use of knowledge and experience in AI and AI safety; our best guess is that someone with that background will be able to specialize in understanding the relevant talent pipelines and provide better, more fine-grained feedback to grantees. We also think their experience and focus on these opportunities will lead to more informed grant investigations.
Example projects might include evaluating grants to research mentorship programs, grants to individuals for career transitions, or grants supporting education programs for mid-career people in disparate fields who want to learn about AI in order to work on its risks. Prior grants that would be within this role’s remit include a conference gathering AI safety researchers together; AI safety student groups at universities; the AI Safety Fundamentals online course by BlueDot Impact; the Alignment Research Engineer Accelerator (ARENA) and similar focused programs that help individuals interested in AI safety improve their technical skills in machine learning; and an expert survey on progress in artificial intelligence by AI Impacts.
Who might be a good fit
You might be an especially strong fit for this role if you:
- Have the qualities listed above for the Generalist Program Associate / Senior Program Associate role.
- Are fairly well-versed in the basics of AI safety technical and/or governance work.
- Have a good grip on arguments for catastrophic risks from AI and some proficiency in arguments for and against the helpfulness of certain technical research directions.
- For example, this could manifest as being able to comfortably describe the problem of deceptive alignment, why it might or might not occur, and a few strategies currently being researched to prevent it, without necessarily knowing many details of those strategies.
- In AI governance work, this might look like having some preexisting familiarity with and independent opinions about e.g. the AI policy ideas in Luke Muehlhauser’s 12 tentative ideas for US AI policy.
- As another benchmark, we’re looking for roughly the level of context someone would have if they were familiar with most of the content of the AI Safety Fundamentals Alignment 201 Course and/or the AI Safety Fundamentals Governance Course. (That doesn’t mean candidates need to have taken these specific courses; we’re just using them as an example of the level of preexisting context we are looking for.)
Other details
- Location: We strongly prefer hires to be based in the San Francisco Bay Area, but may be willing to consider exceptional candidates in other places.
- We’ll support candidates with the costs of relocation to the Bay.
- We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The starting compensation for this role will be based on a range of $124,204.33 to $154,091.51 per year, which would include a base salary of $108,003.76 to $134,091.51 and an unconditional 401(k) grant of $16,200.56 to $20,000.00.
- These compensation figures assume you’d be working from the San Francisco Bay Area; there would be cost-of-living adjustments downward for other locations.
- Start date: The start date is flexible, and we may be willing to wait for an extended period of time for the best candidate, though we’d prefer someone to start as soon as possible after receiving an offer.
7.5 Program Associate / Senior Program Associate, University Groups
About the role
You would do similar work to that described for the Program Associate / Senior Program Associate (Generalist) role above, but with a particular focus on managing the University Group Organizer Fellowship. In our survey, first run in 2020, of how people working on promising GCR reduction projects got into their work, university groups were among the two most common ways (accounting for ~9–23% of highlighted very important influences) that respondents entered promising existential risk reduction career paths. In addition, we’ve recently seen strong early results from relatively new types of groups, such as groups whose members invest a significant amount of time in focused reading and research on technical AI alignment. We’re therefore interested in testing this model at more universities, and more broadly in identifying and funding groups focused on specific priority areas or talent gaps. For these reasons, we view university group funding as one of our most important grantmaking programs, and think that having someone focused full-time on this area could be very valuable.
In addition to the Generalist points outlined above, your work might involve:
- Running retreats for university group organizers. Our sense from previous such retreats is that they often made a substantial impact by enabling organizers to learn from experienced professionals and grow their networks, facilitating information exchange between groups, and increasing organizers’ motivation. At the same time, we think there is significant room for growth with respect to these retreats.
- Mentoring promising university group organizers, or helping to coordinate mentorship.
- Identifying founders for new university groups.
- Helping establish new models of university groups that may be especially impactful in some contexts (such as groups focused on AI alignment) by identifying groups with promising early results, encouraging information sharing between organizers, and possibly headhunting founders for such groups at additional universities or scaling up our grantmaking for them.
- Building and maintaining systems to help university group organizers coordinate with each other, learn about relevant topics, and access support; to improve monitoring and evaluation of our grants in this area (e.g. by sending organizers regular follow-up surveys); and to increase the efficiency of our grant investigations in this area.
Who might be a good fit
You might be an especially strong fit for this role if you:
- Have the qualities listed above for the Generalist Program Associate / Senior Program Associate role.
- Are detail-oriented, organized, and conscientious. You feel confident in your ability to keep on top of a large number of open items and next steps while making sure nothing gets dropped. You have strong project management skills.
- Are interested in university group organizing, and ideally have experience doing it. You are excited about a role that involves building and maintaining relationships with organizers, e.g. via talking with many organizers and aspiring organizers about their work. This is likely to include travel to visit different campuses, and to attend retreats and events.
Other details
- Location: We strongly prefer hires to be based in the San Francisco Bay Area.
- We’ll support candidates with the costs of relocation to the Bay.
- We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
- Compensation: The starting compensation for this role will be based on a range of $124,204.33 to $154,091.51 per year, which would include a base salary of $108,003.76 to $134,091.51 and an unconditional 401(k) grant of $16,200.56 to $20,000.00.
- These compensation figures assume you’d be working from the San Francisco Bay Area; there would be cost-of-living adjustments downward for other locations.
- Start date: The start date is flexible, and we may be willing to wait for an extended period of time for the best candidate, though we’d prefer someone to start as soon as possible after receiving an offer.
8. Global Catastrophic Risks Cause Prioritization
The GCR Cause Prioritization team is a new unit at Open Philanthropy. The team works closely with senior leadership and GCR program officers to conduct research that improves our GCR grantmaking and high-level strategy. The team strives to identify and evaluate new categories of GCR grantmaking. As part of that inquiry, the team is developing a more robust framework for assessing and comparing GCR opportunities. This framework will allow senior leadership to better understand our impact and more optimally allocate resources across different GCR program areas. In addition, the team partners directly with GCR program officers to address research questions that will increase strategic clarity within grantmaking portfolios and unlock new giving opportunities. The team also occasionally partners with Open Philanthropy’s long-running “worldview investigations” unit, which tackles speculative, high-level questions relevant to our overall macro-strategy.
8.1 Research Fellow and Strategy Fellow
About the roles
The responsibilities for the research fellow position and strategy fellow position largely overlap. The main difference is one of emphasis: while research fellows primarily focus on direct research, strategy fellows are sometimes tasked with managing non-research projects (such as running a contest or a request for proposals, or overseeing a hiring round).
Both positions are charged with four primary responsibilities:
- Searching for new program areas. We believe there are promising giving opportunities that don’t currently fall within the purview of our existing program areas. This line of work combines abstract modeling with concrete investigation into the tractability of new interventions to reduce catastrophic risk. A large chunk of this research is informed by conversations with relevant experts.
- Evaluating existing program areas. We pursue many different approaches to reducing catastrophic risk, and it’s not always clear how these strategies compare to one another. This line of work combines backward-looking evaluation of past grants with forward-looking vetting of programmatic theories of change. Because reductions in catastrophic risk are difficult to measure, we expect this line of work to develop and assess proxy indicators that are easier to analyze.
- Advancing research agendas within program areas. Our program areas are often faced with a backlog of research questions and a deficit of researcher hours to throw at them. The GCR cause prioritization team lends research capacity to program staff who need help working through internal research agendas. These projects often involve fine-grained questions that require programmatic context.
- Informing high-level strategy. Reducing catastrophic risk is a complicated business, and many of our most zoomed-out decisions depend on dynamics that are difficult to capture. For example, where we should set our grantmaking cost-effectiveness bar and how quickly we should spend down our assets depend on the timeline on which threats emerge and the evolution of the opportunity set to tackle those threats (among many other factors). This line of work investigates the considerations that are most crucial to our strategic decision-making.
In addition to the primary functions described above, we expect each member of the team to be able and willing to comment thoughtfully on the work that is produced by other members of the team, even when that work falls outside one’s area of expertise.
Who might be a good fit
You might be a good fit for this role if:
- [For the strategy fellow position] You have a background in consulting or equivalent strategic experience.
- You have a strong quantitative skill set, including the ability to incorporate uncertainty into your quantitative models.
- You are comfortable building “back-of-the-envelope calculations” in domains in which data are sparse (a hypothetical sketch of this kind of calculation appears after this list).
- You are familiar with at least one of our main GCR program areas (AI alignment, AI governance, biosecurity and pandemic preparedness, and global catastrophic risks capacity building).
- You have sufficient interpersonal skills to interview academics, policymakers, and practitioners across many domains and can maintain a diverse network of experts to inform your research.
- You are excited about working in a fast-paced research environment that covers a wide range of potential research topics.
- You exhibit good epistemic judgment, including a willingness to update on new information and a commitment to reasoning transparency.
- You write quickly, and your prose is simple and easy to read.
- You are independent, organized, and self-motivated.
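To give a sense of what we mean by back-of-the-envelope modeling under uncertainty, here is a minimal, hypothetical sketch: a cost-effectiveness estimate for an imaginary grant, with uncertainty in the inputs propagated by Monte Carlo sampling. Every input range and distributional assumption below is a placeholder invented for this example, not a real figure or an actual Open Philanthropy model.

```python
# Hypothetical back-of-the-envelope cost-effectiveness estimate with
# uncertainty propagated by Monte Carlo sampling. All inputs are made up.
import math
import random
import statistics

N = 10_000  # number of Monte Carlo samples


def lognormal_from_90ci(low: float, high: float):
    """Return a sampler whose 5th/95th percentiles roughly match the
    supplied 90% credible interval (1.645 is the z-score of the 95th percentile)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return lambda: random.lognormvariate(mu, sigma)


# Placeholder inputs for an imaginary field-building grant.
cost = 1_000_000                                   # USD, treated as fixed
people_reached = lognormal_from_90ci(50, 500)      # 90% CI on reach
p_career_change = lognormal_from_90ci(0.01, 0.10)  # 90% CI on conversion rate

cost_per_outcome = sorted(
    cost / (people_reached() * min(p_career_change(), 1.0)) for _ in range(N)
)

print(f"median cost per career change: ${statistics.median(cost_per_outcome):,.0f}")
print(f"90% interval: ${cost_per_outcome[int(0.05 * N)]:,.0f}"
      f" to ${cost_per_outcome[int(0.95 * N)]:,.0f}")
```

In practice, the hard part is choosing defensible input ranges and being transparent about the reasoning behind them; the arithmetic itself is simple.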
Other details
- Location: You can work from anywhere. We are happy to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Compensation: The baseline compensation for this role is $143,658.71, which would be distributed as a base salary of $124,920.62 and an unconditional 401(k) grant of $18,738.09 for U.S. hires.
- These compensation figures assume a remote location; there would be geographic adjustments upwards for candidates based in the San Francisco Bay Area or Washington, D.C.
- All compensation will be distributed in the form of take-home salary for internationally based hires.
- Start date: We’d like a candidate to start as soon as possible after receiving an offer.
9. Application Form
Footnotes
1. Formerly known as the Effective Altruism Community Growth (Longtermism) team.