Last year, we published a set of suggestions for individual donors looking for organizations to support. This year, we are repeating the practice and publishing updated suggestions from Open Philanthropy Project staff who chose to provide them.
The same caveats as last year apply:
- These are reasonably strong options in causes of interest, and shouldn’t be taken as outright recommendations (i.e., it isn’t necessarily the case that the recommender thinks they’re the best option available across all causes). Note that interested staff wrote separately about where they personally donated, as part of GiveWell’s post on staff members’ personal donations.
- In many cases, we find a funding gap we’d like to fill, and then we recommend filling the entire funding gap with a single grant. That doesn’t leave much scope for making a recommendation to individuals. The cases listed below, then, are the cases where, for one reason or another, we haven’t decided to recommend filling an organization’s full funding gap, and we believe it could make use of fairly arbitrary amounts of donations from individuals. A couple of situations in which this can apply:
- There are cases where we feel an organization would specifically benefit from a broad base of support, often because this would improve its credibility.
- There are cases where we make a grant to fill what we see as an organization’s most important funding needs, but feel the organization could still do productive work with more funding. In these cases, there is often a wide range of funding that could be justified, and we often determine our exact grant size non-systematically and somewhat arbitrarily. In the process of putting together this post, we’ve reflected on this fact, and we are likely to put more work into refining (and writing about) our principles for such situations in the future.
- Our explanations for why these are strong giving opportunities are very brief and informal, and we don’t expect individuals to be persuaded by them unless they put a lot of weight on the judgment of the person making the recommendation.
Suggestions are alphabetical by cause (with some assorted and “meta” suggestions last).
- Biosecurity and Pandemic Preparedness - recommendation by Jaime Yassif
- Criminal Justice Reform - recommendations by Chloe Cockburn
- Farm Animal Welfare - recommendations by Lewis Bollard
- Potential Risks from Artificial Intelligence - recommendations by Daniel Dewey
- Assorted recommendations by Nick Beckstead
Biosecurity and Pandemic Preparedness - recommendation by Jaime Yassif
Blue Ribbon Study Panel on Biodefense
What is it? The Blue Ribbon Study Panel on Biodefense is a bipartisan group of former high-level policymakers and government officials with experience and interest in public health preparedness, biosecurity, and biodefense. It is co-chaired by former Senator Joe Lieberman and Governor Tom Ridge, who was the first U.S. Secretary of Homeland Security.
In 2015, the Panel issued a report that comprehensively assessed U.S. biodefense efforts and provided 33 recommendations to improve them. They also began developing congressional and Executive Branch support for these proposals, which culminated in several congressional hearings discussing their recommendations. Going forward, the Panel plans to continue its efforts to improve U.S. biodefense policy, including implementation of these recommendations.
Why we recommend it: We think supporting this general type of work can make an impact because the U.S. government spends more money on biosecurity than any other organization in the world; accordingly, U.S. biodefense policy and programs have a large influence on global biosecurity.
We think the Panel’s work, if successful, has the potential to impact the U.S. government in two ways:
- Creating policy change via new legislation and/or Executive Branch action.
- Bringing the importance of biodefense to the attention of policymakers and creating champions for these issues in Congress.
While U.S. biodefense goals are not perfectly aligned with the Open Philanthropy Project’s goal to reduce global catastrophic risks from pandemic pathogens, there is substantial overlap between these two mission areas, and progress on this front seems feasible.
The Study Panel’s track record to date gives us some confidence that the next phase of its work will be effective. We believe that the 33 recommendations in its 2015 report are practical and largely uncontroversial, and have been generally well received within government. We generally agree with the majority of the Panel’s recommendations, though a handful of them are lower priority from our perspective. We also see some preliminary evidence that the Panel’s work has already had an impact on policy.
The Study Panel plans to identify three to five of its 33 recommendations as focus areas for the next phase of its advocacy work. We think this strategy makes sense and that the following planned activities for the coming year are particularly important:
- Outreach to members of Congress and the new Administration. The Study Panel has hired a government relations firm to support its staff, increase its presence on the Hill, build new relationships with policymakers, and strengthen existing relationships, with the goal of increasing the likelihood of policy impact.
- Hosting public meetings
- Producing reports that describe recommended policy changes and offer guidance on how to implement them
- Conducting public outreach through traditional and social media, with help from two communications firms the Study Panel has hired.
Why we haven’t fully funded it:
We provided a $300,000 grant in 2015 to support the Panel’s initial work, and we renewed our support with a $1.3M grant in 2016 for activities through the end of 2017. When we made our most recent grant, we expected that other organizations would help meet the Panel’s remaining funding needs for this time period, but that hasn’t happened. We are considering topping up our grant to the Panel, but even with increased funding from Open Philanthropy we think they can still absorb additional funds.
A grant top-up from us would provide the Panel with the amount of money they requested for a defined set of activities that are focused on improving U.S. biodefense policy. But there isn’t a hard cut-off for the amount of money they can absorb to continue doing useful work. Valuable activities, like public appearances by Panel members and communications & outreach, can continue to be scaled up with additional funding without placing an undue burden on the core staff running the organization.
How to donate: Make checks payable to: Potomac Institute for Policy Studies. Send to Potomac Institute for Policy Studies, ATTN: Robert Zambreny, 901 N. Stuart Street, Suite 1200, Arlington, VA 22203. Please include a note stating that your donation is for the purpose of supporting the Blue Ribbon Study Panel on Biodefense. Donors can also call Robert Zambreny at 703-525-0770 to donate using other methods of payment.
Criminal Justice Reform - recommendations by Chloe Cockburn
Alliance for Safety and Justice
This recommendation is substantially the same as last year’s.
What is it? The Alliance for Safety and Justice is a national organization that aims to reduce incarceration and racial disparities in incarceration in states across the country, and to replace mass incarceration with new safety priorities that prioritize prevention and protect low-income communities of color. ASJ aims to build on the successful strategies of Californians for Safety and Justice and its sister organization, Vote Safe, the 501(c)(4) that launched and ran the successful Proposition 47 (2014) and Proposition 57 (2016) campaigns. Californians for Safety and Justice’s leadership, Ms. Lenore Anderson and Mr. Robert Rooks, launched Alliance for Safety and Justice to take the best of what they’ve achieved in California and support other state advocates in winning substantial reductions in state incarceration. The Alliance for Safety and Justice aims to build durable capacity in partner states for sentencing reform; develop a national networking center of gravity to strengthen reform efforts in as many states as possible across the country; and popularize new safety priorities through crime survivor organizing and strategic communications.
Why I recommend it: We have unprecedented national attention to justice reform, yet we have seen only slight decreases in incarceration in the states (CA and NY aside), and racial disparities and spending remain extreme. The failure to convert attention into wins is due in part to criminal justice reformers’ failure to articulate a forward-thinking vision of safety without extreme incarceration, as well as the very limited capacity at the state level to get durable wins – most states don’t have an organization on the ground focused on reducing incarceration at all, let alone one with the capacity to successfully win and sustain reforms. There is almost no civic engagement capacity built on this issue, there are limited mainstream partnerships, and limited political influence (no organized candidate and campaign influencers). ASJ is an ambitious, large-scale effort to address exactly these problems, with the best possible leadership for the job. Lenore’s and Robert’s work on the successful California Proposition 47 and 57 campaigns was impressive.
Why we haven’t fully funded it: Given the amount we’re aiming to allocate to criminal justice reform as a whole, my portfolio has too many competing demands for us to offer more. In addition, I think having a diversified donor base would be good for ASJ, so at this point $X from an individual probably helps them more than an additional $X from us.
Writeup: see our general support grant from earlier this year.
How to donate: Click here and choose “Alliance for Safety and Justice” from the drop-down.
Cosecha
What is it? Cosecha is a group organizing undocumented immigrants in 50-60 cities around the country. Its goal is to build mass popular support for undocumented immigrants, in resistance to incarceration/detention, deportation, denigration of rights, and discrimination. The group has become especially active since the Presidential election, given the immediate threat of mass incarceration and deportation of millions of people. Their goal is to raise $500,000 in the next few months; they have raised $200,000 so far. Incremental amounts of money are put to good use, and the overall impact may be very large. The organization was built to escalate and absorb energy as trigger events in the world push more people to become active on issues like immigration.
Why I recommend it: I’m a big fan of organizing, but I admit that most organizers don’t have a precise explanation of how their methods work and what the impacts are. Carlos Saavedra, who leads Cosecha, stands out as an organizer who is devoted to testing and improving his methods, who has deeply studied the cycles of social movements in the United States and in other countries, and who has homed in on strategies and tactics that show evidence of impact. He is a rigorous, skeptical thinker, taking leadership in a space of low predictability and high energy. Based on his approach, and the fact that I think Cosecha can do a lot of good to prevent mass deportations and incarceration, I think his work is a good fit for likely readers of this post.
Why we haven’t fully funded it: Given the amount we’re aiming to allocate to criminal justice reform as a whole, my portfolio has too many competing demands for us to offer more.
Writeup: we will be publishing a grant page later, but probably no detailed writeup.
How to donate: Click here.
Farm Animal Welfare - recommendations by Lewis Bollard
Animal Charity Evaluators
What is it? Animal Charity Evaluators (ACE) seeks to find and promote the most effective ways to help animals via research and donor outreach. Its charity recommendations provide guidance to small donors looking to support effective animal groups, while its outreach seeks to build a community of effective altruists committed to animal issues.
Why I recommend it: ACE does a lot on a small budget (~$300K in projected expenses this year), and serves an important role within the animal movement by providing critical research, guiding small donors, and advocating for a focus on efficacy. Although I originally had major reservations about the quality of ACE’s research – especially its reliance on flawed studies – I’ve been impressed by ACE’s willingness to update based on criticisms and new information. And while I don’t agree with all of its research or recommendations, I think its top charities list provides good guidance for small donors. I’m also increasingly confident in its ability to put more funds to good use, especially in pursuit of its goals to attract more donors toward supporting effective animal advocacy.
Why we haven’t fully funded it: I will soon start working on an investigation and recommendation for an ACE grant, but even assuming that goes through, I see value in ACE having a broad support base to (a) signal to groups that donors care about its recommendations, (b) raise its profile and attract more donors, and (c) allow it to invest in longer-term development (e.g., higher salaries) without fear of expanding on a fragile support base.
Writeup forthcoming? Likely not.
How to donate: you can donate online here.
Compassion in World Farming USA
What is it? Compassion in World Farming USA is one of four groups responsible for the major recent US corporate wins for layer hens and broiler chickens. (The others are The Humane League, the Humane Society of the US Farm Animal Protection campaign, and Mercy for Animals.) It’s now focused almost exclusively on winning further corporate welfare reforms for broiler chickens.
Why I recommend it: I personally plan to support all four of the groups mentioned above, but think the case is especially strong for CIWF USA for small donors. First, we’re already funding roughly half its budget, so we’re restricted in supporting it significantly more (see below). Second, it’s small so donations can go further. Third, it’s exclusively focused on the corporate outreach strategy I’m most excited about, whereas the other groups pursue multiple strategies.
Why we haven’t fully funded it: In April, we made a two-year $550K grant to CIWF, which filled much of its room for more funding at the time. I think it’s now likely ready to absorb more funds, but we’re limited in our ability to provide all of them, both by the public support test and by a desire to avoid being the overwhelming funder of any group.
How to donate: you can donate online here.
Potential Risks from Artificial Intelligence - recommendations by Daniel Dewey
Machine Intelligence Research Institute
What is it? See our grant writeup for a description of the organization and their work.
Why I recommend it:
Highlights of factors in favor of supporting MIRI from our recent grant writeup, listed in order of the importance I would assign them:
- “MIRI constitutes a relatively ‘shovel-ready’ opportunity to support work on potential risks from advanced AI because it is specifically focused on that set of issues and has room for more funding… If we had decided to pursue maximal growth for MIRI, we would have awarded a grant of approximately $1.5 million per year, and would likely have committed to two years of support.” As far as we can tell, there are very few such opportunities in this area.
- “MIRI strikes us as assigning an unusually high probability to catastrophic accidents and as being pessimistic about the difficulty of implementing robust and general safety measures. We believe it is likely beneficial for some people in the field to be focused on understanding the ways standard approaches could go wrong, which may be something MIRI is especially well-suited to do. In general, it seems valuable to promote this kind of intellectual diversity in the field.”
- “Though we have strong reservations about MIRI’s past research, we see our evaluation as uncertain. If MIRI’s research is higher-potential than it currently seems to us, there could be great value in supporting MIRI, especially since it is likely to draw less funding from traditional sources than most other kinds of research we could support.”
- “We believe that MIRI has had positive effects (independent of its technical research) in the past that would have been hard for us to predict, and has a good chance of doing so again in the future.”
- “We see a possibility that MIRI’s research could improve in the near future, particularly because some research staff are now pursuing a more machine learning-focused research agenda.”
There are a few additional reasons that I think MIRI is a reasonably strong option for individual donors:
- In my view, there is no other organization so fully focused on global catastrophic risks from advanced AI.
- I believe that MIRI is unusually good at focusing on the interventions they think will be most effective; though I disagree with their judgment, I think this property is very valuable.
- Given high uncertainty about this area in general and MIRI in particular, I think it is even more important than usual to maintain a diverse funding culture and to ask individual donors to make their own judgments. I remain uncertain about what level of support for MIRI is best, and I wouldn’t recommend interpreting Open Phil’s funding as decisive evidence.
Why we haven’t fully funded it:
We have strong reservations about MIRI’s past research. Quoting our writeup:
“While we are not confident we fully understand MIRI’s research, we currently have the impression that (i) MIRI has made relatively limited progress on the Agent Foundations research agenda so far, and (ii) this research agenda has limited potential to decrease potential risks from advanced AI in comparison with other research directions that we would consider supporting. We view (ii) as particularly tentative, and some of our advisors thought that versions of MIRI’s research direction could have significant value if effectively pursued.”
We had trouble weighing the strengths against the reservations, and the ultimate size of our grant was fairly arbitrary and put high weight on accurate signaling about our views:
“we felt a case could be made for any figure between $0 and $1.5 million per year (the latter being enough that MIRI would no longer prioritize fundraising and would expand core staff as fast as possible, as discussed above). We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs.”
It’s worth noting that we are likely to fund MIRI at the same level next year, but two years from now it’s likely that our funding will either increase or decrease.
How to donate: Donate here.
Future of Humanity Institute
What is it? The Future of Humanity Institute is a small academic research group at the University of Oxford that studies “big-picture questions for human civilization”, including strategic and technical questions about how advanced AI could pose existential risks and how these risks could be mitigated. Nick Bostrom (FHI’s director) and other FHI staff were among the first to research potential risks from advanced AI. Bostrom and FHI also played a significant role in bringing further attention and funding to these problems, largely through the 2014 book Superintelligence: Paths, Dangers, Strategies.
Why I recommend it:
- FHI is one of few “shovel-ready” opportunities in potential risks from advanced AI.
- Bostrom’s Superintelligence, written with the help of FHI staff, is the best existing analysis of potential risk from loss of control of very powerful AI systems. It played a significant role in bringing further attention, funding, and legitimacy to this topic.
- FHI has been collaborating with Google DeepMind on potential risks from advanced AI (e.g. Stuart Armstrong’s work with Laurent Orseau).
- FHI has recently invested in machine learning expertise by hiring Owain Evans, and by hosting David Krueger (from Yoshua Bengio’s lab) and Jan Leike (formerly full-time at FHI, now moved to DeepMind). I think providing a good environment for good AI and machine learning researchers to work on potential risks from advanced AI is very promising, especially if these researchers move on to high-profile academic and industry labs.
- FHI has little unrestricted funding, and has trouble getting funding for things that academic grants typically underfund (e.g. personal assistants, external contractors, hires with fewer formal qualifications).
Why we haven’t fully funded it:
We’re in the process of recommending a grant to FHI, but I would prefer to maintain diversity in their sources of funding, and I think individual donors are well-placed to do so. We are trying to give FHI substantially more financial flexibility than it’s had in the past, but additional donations beyond our grant would provide further flexibility, and we think this has value.
How to donate: donate here.
Assorted recommendations by Nick Beckstead
My suggestions for individual donors are as follows (in descending order of preference, organized by category):
- Very meta suggestions:
- If you already know what to give to and you don’t think your decision would change if you thought about it more or let someone more informed decide on your behalf, give there.
- If you know someone who is likely to make a better decision than you would on your own, give them your money and let them decide what to do with it. If you think that person should be me, donate to the “EA Giving Group” DAF (as I am doing, as explained here). This might be a good fit for people who have some combination of the following properties: interest in effective altruism and/or global catastrophic risks, context needed to assess the DAF’s (still early) track record, trust in my judgment (I’m one of two decision-makers for the DAF), limited time/context available to make donation decisions themselves. If you want to make a contribution to this DAF, then fill out this form.
- If you are uncertain where to donate and unsure who to trust to donate on your behalf, participate in a donor lottery and then only think carefully about donations if you win. As explained in the linked post, there is currently an easy way to try this out.
- My first object-level tier of recommendation is “give to one of 80,000 Hours, CEA, FHI, or MIRI” (alphabetical order). For each of these organizations, Open Phil either has given them a grant or is in the process of deciding whether to give them a grant. I don’t have very strong opinions about which are better uses of additional funds, and room-for-more-funding considerations and uncertainty about the likely level of support from Open Phil play a significant role in my uncertainty about which to recommend. My recommendations are as follows (with the two categories below presented in order of preference):
- Potential risks from advanced AI: Donate to MIRI. (I currently see more of a funding gap there than at FHI. FHI is my second choice for this category. Note that I used to work at FHI.)
- Effective Altruism Community: Donate to 80,000 Hours. (The Centre for Effective Altruism (CEA) is my second choice in this category. Note that I am a board member of CEA, and 80,000 Hours is part of CEA.)
- Note: There’s an internal debate about how conservative vs. aggressive to be on grants supporting organizations like these, with, I think, legitimate arguments on both sides. I tend to favor larger grants to organizations in these categories than other decision-makers at Open Phil do. That is a large part of the reason I think there are, or are likely to be, any funding gaps in these areas. My inclusion of CEA and 80,000 Hours is also potentially sensitive to timing, because I have not yet made a recommendation to Open Phil. It’s plausible we’ll support them at a level where I don’t see additional funding as particularly urgent.
- Biosecurity and pandemic preparedness: Donate to whatever Jaime and Howie recommend.
- Nuclear weapons policy: Donate to Ploughshares Fund.
I’ve limited my suggestions to organizations that focus on effective altruism and global catastrophic risks (and not short-to-medium-term factory farming or global poverty) because those are a couple of the areas I’m most excited about and know most about.
Machine Intelligence Research Institute
My reasoning is largely the same as Daniel Dewey’s (above), but I would add a few points in favor of donating to MIRI:
- Paul Christiano and Carl Shulman, a couple of individuals I place great trust in (on this topic), have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)
- My understanding is that MIRI is meaningfully funding-constrained right now, with at least a couple of promising researchers they could be hiring on a trial basis but are not, due to lack of funding.
- In contrast, FHI seems relatively less funding constrained at the moment.
- My impressions about potential risks from advanced AI have grown closer to Eliezer Yudkowsky’s over time, and I don’t think it would be too surprising if that movement on my end continues. I see additional $ to MIRI as an appropriate response to potential/anticipated future updates.
You can donate here.
80,000 Hours
What is it? 80,000 Hours provides free career advice (primarily aimed at young people) on its website and through workshops. Its advice is offered from an effective altruist perspective.
Disclaimer: I’m a board member of the Centre for Effective Altruism, which 80,000 Hours is part of, so that’s a potential source of bias.
Why I recommend it:
- I think they have a good track record of causing people to change their career plans in response to their advice, and they have had a good rate of growth over the last couple of years (with their monthly rate of “impact-adjusted plan changes” tripling annually). A couple of years ago, I spent some time looking at some of the plan changes they were claiming, and I got the impression that they were meaningful and not empty statistics.
- I think it’s very important that their target audience have access to good advice and make good decisions about their careers, and I am aware of little other high-quality work on this problem.
- I feel that I have a good understanding of their work, and that their Executive Director is good at explaining their plans and progress.
- They are trying to expand a number of programs this year, and I expect that they’ll have more room for more funding than usual.
Why we haven’t fully funded it:
- We considered funding them several months ago, but decided to wait until they had completed some pilot projects and re-assess around this time.
- Over the next month or two, we’ll consider whether to make a grant.
- If we do make a grant, I think there’s a good chance they could productively use additional funds from other donors for a couple of reasons. First, my enthusiasm for supporting specific grants to support the effective altruism community has been higher than other decision-makers’ at Open Phil, and we’ve given less than I’ve been inclined to recommend in some other cases. Second, we would not want to be more than 50% of 80,000 Hours’ funding in any case (for coordination/dependence reasons).
Writeup: not available.
How to donate: Donate here.
Ploughshares Fund
My reasons for recommending them this year are the same as my reasons for recommending them last year. Open Phil is likely to consider a grant to the Ploughshares Fund this year, though.