The Open Philanthropy Blog

Although we have typically emphasized that effective philanthropy depends on long-term commitment to causes and on getting the right people in place, the most obvious day-to-day decision funders face is whether to support specific potential giving opportunities. As part of our internal guidance for program officers, we’ve collected a series of questions that we like to ask ourselves about potential funding opportunities, including:

Read More

Our thinking on prioritizing across different causes has evolved as we’ve made more grants. This post explores one aspect of that: the high bar set by the best global health and development interventions, and what we’re learning about the relative performance of some of our other grantmaking areas that seek to help people today.

Read More

Note: This is an experiment with a different style of blog post, aiming to more casually share thoughts from a broader set of staff. We’re interested in feedback on this format.

Earlier this year, the Open Philanthropy Project awarded a five-year grant and made an additional investment in Sherlock Biosciences to support the development of a diagnostic platform to quickly, easily, and inexpensively identify any human virus present in a patient sample.

Development of this technology would represent a significant advance in viral diagnosis, and could both reduce threats from viral pandemics and benefit health care broadly. In one implementation of the test, which might be suitable for use in field clinics or for home use, samples can be tested in less than an hour using just a strip of paper.

We believe that the broad potential of Sherlock’s technologies is matched by its co-founders and a team of deeply experienced scientists, entrepreneurs, and clinicians, all aligned with our goal of making a universal viral diagnostic system available worldwide. The new company, recently spun out of the Broad Institute of MIT and Harvard, is developing technologies licensed from the Broad Institute and Harvard University’s Wyss Institute.

Read More

This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year.

Read More

We have had a lot of new staff join Open Philanthropy over the last year. In this post, I’d like to introduce the new members of our team. We’re excited to have them!

More new staff are joining soon, and I will be introducing them in coming months.

Read More

We’re now supporting History of Philanthropy work via a grant to the Urban Institute. One output of this project is a literature review on the social impact of the Pugwash Conferences on Science and World Affairs (sometimes abbreviated as “Pugwash”), and on the role philanthropic funding played in them. Starting in 1957, Pugwash “brought together notable scientists from both sides of the iron curtain in order to discuss nuclear disarmament in an informal but serious atmosphere.” This case is particularly interesting from the perspective of global catastrophic risk reduction, as Pugwash and its founder won the 1995 Nobel Peace Prize “for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms.”

Read More

In February 2018, Open Philanthropy announced new openings for “generalist” Research Analyst (RA) roles, and we have since hired five applicants from that hiring round. This was one of our top priorities for 2018.

In this post I summarize our process and some lessons learned. Beyond our general audience, I hope this post will be useful to others looking to hire for a similar talent profile, and to potential future generalist RA applicants to Open Philanthropy.

Read More

Today, Georgetown University announced our support for the launch of a new think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies. The Center for Security and Emerging Technology (CSET) is led by Jason Matheny, former Assistant Director of National Intelligence and Director of Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community’s research organization.

Read our full grant page here.

Read More

Last year, the year before, and the year before that, we published a set of suggestions for individual donors looking for organizations to support. This year, we are repeating the practice and publishing updated suggestions from Open Philanthropy Project staff who chose to provide them.

The same caveats as in previous years apply:

  • These are reasonably strong options in causes of interest, and shouldn’t be taken as outright recommendations (i.e., it isn’t necessarily the case that the person making the suggestion thinks they’re the best option available across all causes).
  • In many cases, we find a funding gap we’d like to fill, and then we recommend filling the entire funding gap with a single grant. That doesn’t leave much scope for making a suggestion for individuals. The cases listed below, then, are the cases where, for one reason or another, we haven’t decided to recommend filling an organization’s full funding gap, and we believe the organization could make good use of individual donations of essentially any size.
  • Our explanations for why these are strong giving opportunities are very brief and informal, and we don’t expect individuals to be persuaded by them unless they put a lot of weight on the judgment of the person making the suggestion.
Read More

In October 2016, we wrote:

we are contracting [a developer] to build a simple online application for credence calibration training: training the user to accurately determine how confident they should be in an opinion, and to express this confidence in a consistent and quantified way.

That online application is now available:

Play “Calibrate Your Judgment”

Note that you must sign in with a GuidedTrack, Facebook, or Google account, so that the application can track your performance over time.

We expect many users will find this program to be the most useful free online calibration training currently available.

That said, we think there are several ways in which a calibration app could be more engaging and useful than ours, if someone were to invest substantially more development effort than we did. Some reflections on the challenges we encountered, and some lessons we learned, are available in this Google doc.

Spencer Greenberg, the lead developer of Calibrate Your Judgment, has released a paper describing the scoring rules used in the app, here.
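For readers curious what a calibration scoring rule looks like in practice, here is a minimal sketch of the Brier score, a standard “proper” scoring rule for this kind of exercise. This is offered purely as an illustration of the concept; the app’s actual scoring rules are the ones described in the linked paper, and the function name and example numbers below are our own.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better: a forecaster who is both well calibrated and
    well informed approaches 0.0, while always guessing 50% yields 0.25.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: a user who said "90% confident" three times and was right twice.
score = brier_score([0.9, 0.9, 0.9], [1, 1, 0])
print(round(score, 4))  # 0.2767
```

A “proper” scoring rule like this one rewards honest probability reports: the user minimizes their expected penalty by stating their true confidence, rather than hedging toward 50% or overclaiming certainty.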

Update April 23: In response to feedback, we have improved the set of questions used for the app’s confidence intervals module by removing hundreds of ill-formed or confusingly worded questions. We hope this leads to a more useful experience.

Read More
