
Potential risks from advanced artificial intelligence: the philanthropic opportunity


    Published: March 24, 2016


    We’re planning to make potential risks from artificial intelligence a major priority this year. We feel it presents an outstanding philanthropic opportunity — with extremely high importance, high neglectedness, and reasonable tractability (our three criteria for causes) — for someone in our position. We believe that the faster we can get fully up to speed on key issues and explore the opportunities we currently see, the faster we can lay the groundwork for informed, effective giving both this year and in the future — which could make a major difference to how much impact we ultimately have.

    With all of this in mind, we’re placing a larger “bet” on this cause, this year, than we are placing even on other focus areas — not in terms of funding (we aren’t sure we’ll identify very large funding gaps this year, and are more focused on laying the groundwork for future years), but in terms of senior staff time, which at this point is a scarcer resource for us. Consistent with our philosophy of hits-based giving, we are doing this not because we have confidence in how the future will play out and how we can impact it, but because we see a risk worth taking. In about a year, we’ll formally review our progress and reconsider how senior staff time is allocated.

    This post will discuss:

    • Why we consider this cause to present such an outstanding philanthropic opportunity – in terms of importance, neglectedness, tractability, and some factors specific to the Open Philanthropy Project.
    • A broad outline of our planned activities in this cause for the coming year.
    • Some risks and reservations we’re keeping in mind about the decision to prioritize this cause as highly as we are.

    My views on this cause have evolved considerably over time; a future post will discuss the evolution of my thinking in detail, but this post focuses on the case for prioritizing this cause today.

    Importance

    It seems to me that AI-relevant research is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science.

    I’m not in a position to support this claim highly systematically, but:

    • We have done a substantial amount of investigation and discussion of various aspects of scientific research, as discussed in our recent annual review.
    • In a previous post, I addressed what I see as the most noteworthy other possible major developments in the next 20 years.

    In a previous post, I argued for assigning a substantial chance (at least 10%) to the development of strong AI within the next 20 years, with enormous global consequences.

    By and large, I expect the consequences of this progress — whether or not “strong AI” is developed soon — to be extremely positive. Improvements in AI have enormous potential to improve the speed and accuracy of medical diagnosis; reduce traffic accidents by making autonomous vehicles more viable; speed up science that can save lives, reduce poverty and help move toward sustainable energy use; help people communicate with better search and translation; and contribute on a huge number of other fronts to improving the world economy’s efficiency and productivity. As I’ve written before, I believe that economic and technological development have historically been highly beneficial, often despite the fact that any particular development was subject to substantial pessimism before it played out. I also expect that if and when strong AI is very close to development, many people will be intensely aware of both the potential benefits and risks, and will work to maximize the odds that strong AI is used to develop and deploy technologies that make people across the world healthier, wealthier, and more empowered in a variety of other ways.

    But I also think there are substantial risks – in particular:

    • Strong AI, as I’ve previously defined it, will enable/accelerate the development of one or more enormously powerful technologies. In the wrong hands, this could make for an enormously powerful tool of authoritarian, terrorist, or simply power-seeking individuals or institutions. I think the potential damage in such a scenario is nearly limitless (if strong AI causes substantial enough acceleration of a powerful enough technology), and could include long-lasting or even permanent effects on the world as a whole. I refer to this class of risk as “misuse risks.” I do not think we should let misuse scenarios dominate our thinking about the potential consequences of AI, any more than for any other powerful technology, but I do think it is worth asking whether there is anything we can do today to lay the groundwork for avoiding these sorts of risks in the future.
    • I’ve also become convinced that there is a substantial class of potential “accident risks” that could rise (like misuse risks) to the level of global catastrophic risks. We’ve described these risks previously, and in the course of many conversations with people in the field – including over 20 conversations with leading researchers and others over the last couple of months – we’ve seen substantial concern that they could arise and no clear arguments that they will be easy to address. I think the idea of a globally catastrophic accident from AI only makes sense for certain kinds of AI – not for all things I would count as strong AI. I believe this sort of risk is unlikely overall, but would be high-stakes enough that it’s worth doing what we can to reduce it.

    If the above reasoning is right (and I believe much of it is highly debatable, particularly when it comes to my previous post’s arguments as well as the importance of accident risks), I believe it implies that this cause is not just important but somewhat of an outlier in terms of importance as we’ve defined it.

    Here I mean that it scores significantly higher by this criterion than the vast majority of causes, not that it stands entirely alone. I think there are a few other causes that have comparable importance, though none that I think have greater importance, as we’ve defined it.

    The underlying stakes, as I defined them previously, would be qualitatively higher than those of any issues we’ve explored or taken on under the U.S. policy category, to a degree that I think more than compensates for a “10% chance that this is relevant in the next 20 years” discount.
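
    As a deliberately oversimplified expected-value sketch of that tradeoff (all numbers below are hypothetical placeholders, not Open Philanthropy estimates): even after applying a “10% chance of relevance” discount, a cause whose stakes are sufficiently larger can still come out ahead.

    ```python
    # Hypothetical placeholder numbers, purely to illustrate the "10% discount"
    # reasoning above; they are not estimates from this post.
    p_ai_relevant = 0.10       # assumed chance strong AI becomes relevant within 20 years
    stakes_ai = 1000.0         # assumed impact (arbitrary units) if it does
    p_policy_relevant = 1.0    # assume a typical policy cause's stakes arrive with certainty
    stakes_policy = 50.0       # assumed impact of that policy cause

    expected_ai = p_ai_relevant * stakes_ai              # 100.0
    expected_policy = p_policy_relevant * stakes_policy  # 50.0
    print(expected_ai, expected_policy, expected_ai > expected_policy)
    ```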

    This is a judgment call. I have presented the information I think is necessary in order to see the tradeoff as I do – our estimates of potential policy impact on the one hand, our vision of a possible worldwide transition on the other.

    When considering other possible transformative developments — something that I think we are now in a position to have a relatively broad, though far from complete, view of — I can’t think of anything else that seems equally likely to be comparably transformative on a similar time frame, while also presenting such a significant potential difference between best- and worst-case imaginable outcomes.

    One reason that I’ve focused on a 20-year time frame is that I think this kind of window should, in a sense, be considered “urgent” from a philanthropist’s perspective. I see philanthropy as being well-suited to low-probability, long-term investments; I believe there are many past cases in which it took a very long time for philanthropy to pay off, especially when its main value-added was supporting the gradual growth of organizations, fields and research that would eventually make a difference.

    We’ve been accumulating case studies via our History of Philanthropy project, and we expect to publish an updated summary of what we know by the end of 2016. For now, there is some information available at our History of Philanthropy page and in a recent blog post.

    The above has focused on potential risks of strong AI. There are also many potential AI developments short of strong AI that could be very important. For example:

    • Autonomous vehicles could become widespread relatively soon.
    • Continued advances in computer vision, audio recognition, etc. could dramatically alter what sorts of surveillance are possible, with a wide variety of potential implications.
    • Advances in robotics could have major implications for the future of warfare and even policing.
    • AI advances could dramatically transform the economy by leading to the automation of many tasks – including driving and various forms of manufacturing – currently done professionally by many people.

    We are highly interested in these potential developments. In the course of working on this cause, we expect to see opportunities to fund work relevant to them — a further argument for allocating time to this cause. With that said, if my previous arguments are correct, they would imply — in my view — that most of the “importance” (as we’ve defined it) in this cause comes from the relatively unlikely but enormously high-stakes possibility of strong AI.

    Neglectedness

    Both artificial intelligence generally and potential risks have received increased attention in recent years.

    See our previous post regarding artificial intelligence generally. See our writeup on a 2015 grant to support a request for proposals regarding potential risks.

    We’ve put substantial work into trying to ensure that we have a thorough picture of the landscape of researchers, funders, and key institutions in this space; the current state of our knowledge is best summarized by this landscape document, which is largely consistent with the landscape we published last year. In brief:

    • There is a substantial and growing field, with a significant academic presence and significant corporate funding as well, around artificial intelligence and machine learning research.
    • Within this field, there is substantial interest in potential risks; in particular, many strong academics applied for the Future of Life Institute request for proposals that we co-funded last year, and some labs have expressed an informal interest in working to reduce potential risks. That said, the field does not appear to include institutions that are focused on reducing potential risks, and (based on the views of our technical advisors, on which more below) we believe that the amount of dedicated technical work focused on reducing potential risks is relatively small compared to the extent of open technical questions.
    • There are a few organizations focused on reducing potential risks, either by pursuing particular technical research agendas or by doing non-technical work on highlighting key considerations. (An example of the latter is the Future of Humanity Institute’s work on Superintelligence.) Most of these organizations are connected to the effective altruism community. Based on conversations we’ve had over the last few months, we believe these organizations have substantial room for more funding. There tends to be fairly little intersection between the people working at these organizations and people with substantial experience in mainstream research on artificial intelligence and machine learning.
    • Ideally, we’d like to see top computer science researchers leading the way on research to reduce potential risks. Under the status quo, the intersection between these two categories – top computer science researchers and people focused on reducing potential risks – is smaller than we’d like to see.
    • We’d also like to see a greater variety of institutions working on nontechnical questions. For example, there are questions around how to minimize misuse of military drones as the underlying technology advances, and working on these questions may be an important step toward having good frameworks for minimizing misuse of strong AI in the future.
    • We don’t see any major private funders that have strong staff capacity (one or more full-time equivalent staff) focused on reducing potential risks from advanced artificial intelligence. There are government funders interested in this topic, but they appear to operate under heavy constraints. As some illustration of this point, we were one of two funders on last year’s Future of Life Institute request for proposals, which we consider the best opportunity so far to support top computer science researchers working to reduce potential risks. We believe the other funder on this grant has limited time for engaging with this cause, and is now largely focused on a particular lab.

    Bottom line – we consider this cause to be highly neglected by philanthropists, and we see major gaps in the relevant fields that a philanthropist could potentially help to address (though it might be quite difficult to do so).

    Tractability

    We have long seen this cause as important and neglected; our biggest reservation has been tractability. We see strong AI as very much a future technology – perhaps sooner than twenty years away, but unlikely to be much sooner, and perhaps more than 100 years away. Working to reduce risks from a technology that is so far in the future, and about which so much is still unknown, could easily be futile.

    With that said, this cause is not as unique in this respect as it might appear at first. We believe that one of the things philanthropy is best-positioned to do is provide steady, long-term support as fields and institutions grow. This activity is necessarily slow. It requires being willing to support groups based on their core values, rather than immediate plans for impact, in order to lay the groundwork for an uncertain future. We’ve written about this basic approach in the context of policy work, and we believe there is ample precedent for it in the history of philanthropy; it is the approach we favor for several of our policy focus areas, such as immigration policy and macroeconomic stabilization policy.

    When taking this approach, it is important to have the right goals. In the near term, our goal for potential risks of advanced artificial intelligence is (as discussed below) to support an increase in the quantity and quality of people – particularly people with strong computer science backgrounds – dedicated to thinking through how to reduce potential risks. We think this is an appropriate goal given such a long time horizon.

    A few other considerations have recently raised my estimate of how tractable this cause is:

    Apparent existence of important technical challenges for reducing accident risks.

    • We’ve previously put significant weight on an argument along the lines of, “By the time strong AI is developed, the important approaches to AI will be so different from today’s that any technical work done today will have a very low likelihood of being relevant.” However, as argued previously, we now think there is a noticeable chance that strong AI will be developed in the next 20 years, and that the above-quoted argument carries substantially less weight when focusing on that unlikely but quite high-stakes scenario. In particular, we believe there are particular challenges associated with reducing accident risks in reinforcement learning and deep learning systems, and that progress on these challenges has a real chance of being highly important.
    • More broadly, we’ve been speaking with our technical advisors, their contacts, and computer science researchers about the question: “Are there technical challenges today that could turn out to be relevant to reducing risk in the future?” It appears to us that there are a variety of such challenges. These include inverse reinforcement learning (designing AI systems to learn the values of other agents, including humans); prevention of wireheading (designing a reinforcement learner that has no risk of seeking to maximize its reward by hacking into its own reward channel); achieving transparency, formal verification, security, and other desirable properties in deep-learning-based systems; and challenges laid out in a series of posts by Paul Christiano. Going into the details of these challenges is beyond the scope of this post, and a better fit for our technical advisors than for us, but we do expect more public content to be available on these topics by the end of the year, and it’s our early impression that there is a good deal of worthwhile technical work to do – to the point where it would be highly desirable to see a greater number of researchers focused on these sorts of problems. (A toy sketch of the wireheading concern appears after this list.)
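
    As a purely illustrative aside (not drawn from this post or from any of the research agendas above), the toy program below shows the kind of failure the “wireheading” challenge refers to: a simple reward-maximizing learner whose observed reward channel can be tampered with will learn to prefer tampering over doing the intended task. The environment, action names, and numbers are all hypothetical.

    ```python
    # Minimal sketch: a two-action bandit in which one action stands in for
    # "tamper with the reward channel." A naive learner that maximizes observed
    # reward converges on tampering even though its true value is zero.
    import random

    ACTIONS = ["do_task", "tamper_with_reward"]

    def true_value(action):
        """Value of the action as the designers intended it."""
        return 1.0 if action == "do_task" else 0.0

    def observed_reward(action):
        """Reward as the agent actually sees it; tampering inflates it."""
        return 1.0 if action == "do_task" else 10.0

    def run_bandit(episodes=5000, epsilon=0.1, lr=0.1, seed=0):
        rng = random.Random(seed)
        q = {a: 0.0 for a in ACTIONS}
        for _ in range(episodes):
            a = rng.choice(ACTIONS) if rng.random() < epsilon else max(ACTIONS, key=q.get)
            # The update uses the observed (hackable) reward, not the true value.
            q[a] += lr * (observed_reward(a) - q[a])
        return q

    if __name__ == "__main__":
        q = run_bandit()
        best = max(ACTIONS, key=q.get)
        print("Learned preferences:", q)
        print("Agent converges on:", best)                      # "tamper_with_reward"
        print("True value of that choice:", true_value(best))   # 0.0
    ```

    Real systems are of course far more complex; the point is only that “maximize observed reward” and “do what the designers intended” can come apart, which is what motivates the technical work described above.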

    Further reflection on whether there is anything worth doing today to reduce future misuse risks.

    We think it’s worth being careful about funding work aiming to reduce misuse risks. For example, our current impression is that government regulation of AI would be highly premature and counterproductive. If we funded people to think and talk about misuse risks, we’d worry that they’d have incentives to attract as much attention as possible to the issues they worked on, and thus to raise the risk of such premature/counterproductive regulation.

    With that said, it seems to us that there is some potential work worth doing on this dimension:

    • First, we think that technical work related to accident risks – along the lines discussed above – could be useful for reducing misuse risks as well. Currently, it appears to us that different people in the field have extremely different intuitions about how serious and challenging accident risks are. If it turns out that accident risks are large and hard to reduce, this makes work on such risks extremely valuable. If, by contrast, it turns out that there are highly promising paths to reducing accident risk – to the point where the risk looks a lot less serious – this development could result in a refocusing of attention on misuse risks.
    • Second, we believe that potential risks have now received enough attention – some of it alarmist – that premature regulation and/or intervention by government agencies is currently a very live risk. We’d be interested in supporting institutions that could provide credible, independent, public analysis of whether and when government regulation/intervention would be advisable, even if it means simply making the case against such things for the foreseeable future. We think such analysis would likely improve the quality of discussion and decision-making, relative to what will happen in its absence.

    Shifting views on the general viability of long-term forecasting and planning.

    I’ve long worried that it’s simply too difficult to make meaningful statements (even probabilistic ones) about the future course of technology and its implications. A 2008 post by Scott Aaronson captured a view that I’ve long held in the background: “as a goal recedes to infinity, the probability increases that as we approach it, we’ll discover some completely unanticipated reason why it wasn’t the right goal anyway … Is there any example of a prognostication about the 21st century written before 1950, most of which doesn’t now seem quaint?” However, I’ve gradually changed my view on this topic, and I believe that the answer to Scott’s question – both literally and conceptually – is a qualified “yes.” That is, I believe that past attempts to make long-term plans around future technological developments have had enough success (while still being short of anything that I’d call “reliable”) that it no longer seems as futile to engage in this sort of work as it once did. Much of what has changed my views comes from reading I’ve done on personal time, and it will be challenging to assemble and present the key data points, but we hope to do so at some point this year.

    Bottom line. We think there are major questions around whether there is work worth doing today to reduce potential risks from advanced artificial intelligence. We do see a reasonable amount of potential good that could be accomplished if there were more people and institutions focused on the relevant issues; given the importance and neglectedness of this cause, we think that’s sufficient to make this a very high-priority cause.

    Some Open-Phil-specific considerations

    Networks

    We consider this a challenging cause. We think it would be easy to do harm while trying to do good. For example:

    • Trying to raise the profile of potential risks could contribute (and, we believe, has to some degree contributed) to excessive media alarmism, which in turn could raise the risks of premature and counterproductive regulation.
    • Too aggressively encouraging particular lines of research without sufficient input and buy-in from top computer scientists could be not only unproductive but counterproductive, if it led to people generally taking risk-focused research less seriously. And since top computer scientists tend to be extremely busy, getting thorough input from them can be challenging in itself.

    We think it is important for someone working in this space to be as well-connected as possible to the people who have thought most deeply about the key issues. In our view, this means both the top computer scientists and the people/organizations most focused on reducing long-term risks.

    We believe the Open Philanthropy Project is unusually well-positioned to achieve this:

    • We are well-connected in the effective altruism community, which includes many of the people and organizations that have been most active in analyzing and raising awareness of potential risks from advanced artificial intelligence. In particular, Daniel Dewey has previously worked at the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.
    • We are also reasonably well-positioned to coordinate with top computer scientists. Daniel has some existing relationships due to his work on last year’s request for proposals from the Future of Life Institute. As mentioned previously, we also have strong relationships with several junior researchers at top institutions. We have recently been reaching out to many top computer scientists to discuss our plans for this cause, and have generally been within a couple of degrees of separation via our networks.

    Time vs. money

    One consideration that has made us hesitant to prioritize this cause is the fact that we see relatively little in the way of truly “shovel-ready” giving opportunities. We list our likely priorities in the next section; we think they are likely to be very time-consuming for staff, and we expect it will be a long time before they lead to much in the way of concrete giving opportunities.

    By default, we prefer to prioritize causes where the opposite dynamic holds, because we consider ourselves short on capacity relative to funding at this stage in our development.

    However, we think the case for this cause is compelling enough to outweigh this consideration, and we think a major investment of senior staff time this year could leave us much better positioned to find the best giving opportunities in the future.

    Our plans

    For the last couple of months, we have focused on:

    • Talking to as many people as possible in the relevant communities, particularly top computer scientists, in order to get feedback on our thinking, deepen our understanding of the relevant issues, and improve the general quality of our network.
    • Producing this series of blog posts and generally refining our communications strategy for this topic.
    • Investigating the few potential “shovel-ready grants” (by which we mean grants we can investigate and recommend with relatively low time investments) we’re aware of. We will be publishing more about these later.
    • Working with several of our technical advisors to get a better sense of what the most important concrete, known technical challenges are. Our hope is to get to the point of being able to offer substantial funding for working on the most important challenges, as determined by top computer scientists. But first, we need to ensure that there has been a thorough dialogue about what the most important challenges are.
    • Working with technical advisors to flesh out the key considerations around likely timelines to strong AI. We expect to continue this work, hopefully with an increasingly broad set of computer scientists engaging in the discussions, in order to continue refining our take on this important topic.
    • Having initial conversations about what sorts of misuse risks we should be most concerned about, and what sorts of future interventions might reduce them. We want to have thought in depth about these issues before we start seeking and evaluating potential grantees for this area.
    • Seeking past cases in which philanthropists helped support the growth of technical fields, to see what we can learn.

    Ideally, we will ultimately find strong giving opportunities in the following categories:

    • “Shovel-ready” grants to existing organizations and researchers focused on reducing potential risks from advanced artificial intelligence.
    • Supporting substantial work by top computer scientists on the most important technical challenges related to reducing accident risk. This could take the form of funding academic centers, requests for proposals, convenings and workshops, and/or individual researchers.
    • Supporting thoughtful, nuanced, independent analysis by well-connected, knowledgeable individuals seeking to help inform discussions of how to reduce misuse risks (including the question of whether and when any government regulation/intervention is called for, which we feel it will not be in the near future).
    • “Pipeline building”: supporting programs, such as fellowships, that can increase the total number of people who are deeply knowledgeable about both the relevant computer science and the relevant debates about potential risks from advanced artificial intelligence. We would particularly like to see a greater number of top computer scientists who are also deeply versed in issues relevant to potential risks.
    • Other giving opportunities that we come across, including those that pertain to AI-relevant issues other than those we’ve focused on in this post (some such issues are listed above).

    However, getting to this point will likely require a great deal more work and discussion – internally, with close technical advisors, and with the relevant communities more broadly. It could be a long time before we are recommending large amounts of giving in this area, and we think that allocating significant senior staff time to the cause will speed our work considerably.

    Some overriding principles for our work


    As we work in this space, we think it’s especially important to follow a few core principles:

    1. Remember that the potential benefits of AI likely outweigh the potential risks, by a great deal. Our work is focused on potential risks, because we think that’s the aspect of AI research that seems most neglected at the moment. But it is important not to overstate the risks, and it is important not to lose sight of how much the world has to gain from increasingly capable artificial intelligence.

    2. Look to computer scientists to lead the way in identifying the most important problems and the highest-quality research. We rely heavily on our technical advisors, and we expect that any computer science research we fund will involve computer-scientist-led processes for selecting grantees. For example, the request for proposals we co-funded last year employed an expert review panel for selecting grantees. We wouldn’t have participated if it had involved selecting grantees ourselves with nontechnical staff.

    3. Seek a lot of input, and reflect a good deal, before committing to major grants and other activities. As stated above, we consider this a challenging cause, where well-intentioned actions could easily do harm. We aim to be thoroughly networked and to seek substantial advice on our activities from a range of people, both computer scientists and people focused on reducing potential risks.


    Risks and reservations

    We have a number of risks and reservations about prioritizing this cause as highly as we are. We have discussed most of them in this post and the ones preceding it; here we provide a consolidated list of the major ones, without necessarily giving our comprehensive take on each.

    • We’ve argued for assigning at least a 10% probability to the development of strong AI within the next 20 years. We feel we have thought deeply about this question and collected what information we can, but we recognize that that information is extremely limited, and the case we’ve presented is highly debatable.
    • We think the case that this cause is neglected is fairly strong, but leaves plenty of room for doubt. In particular, the cause has received attention from some high-profile people, and multiple well-funded AI labs and many AI researchers have expressed interest in doing what they can to reduce potential risks. It’s possible that they will end up pursuing essentially all relevant angles, and that the activities we’ve listed above will prove superfluous.
    • We do not want to exacerbate what we see as an unfortunate pattern, to date, of media alarmism about potential risks from advanced artificial intelligence. We think this could lead to premature and counterproductive regulation, among other problems. We hope to communicate about our take on this cause with enough nuance to increase interest in reducing risks, without causing people to view AI as more threatening than positive.
    • We’re mindful of the fact that it might be futile to make meaningful predictions, form meaningful plans, and do meaningful work to reduce fairly far-off and poorly-understood potential risks.
    • We’re extremely uncertain of how significant accident risks are. We think it’s possible that preventing truly catastrophic accidents will prove to be relatively easy, and that early work will look in hindsight like a poor use of resources.
    • We see a risk that our thinking is distorted by being in an “echo chamber,” and that our views on the importance of this cause are overly reinforced by our closest technical advisors and by the effective altruist community. We’ve written previously about why we don’t consider this a fatal concern, but it does remain a concern.

    With all of the above noted, we think it is important that a philanthropist in our position be willing to take major risks, and prioritizing this cause is a risk that feels very much worth taking.
