
Global Catastrophic Risks


Table of contents

  • In a nutshell
    • What is the problem?
    • What are possible interventions?
    • Who else is working on it?
  • 1. What is the problem?
  • 2. What are possible interventions?
  • 3. Who else is working on this?
  • 4. Questions for further investigation
  • 5. Our process
  • 6. Sources

Published: February 01, 2014

This is a writeup of a shallow investigation, a brief look at an area that we use to decide how to prioritize further research.


In a nutshell

What is the problem?

Some relatively low-probability risks, such as a major pandemic or nuclear war, could carry disastrous consequences, making their overall expected harm substantial. Because catastrophes of this severity are unprecedented and relatively unlikely, they may not receive adequate attention from other actors. While we do not have credible estimates of the likelihood of these risks, some seem to be non-trivial.

What are possible interventions?

A philanthropist could focus on individual risks, such as nuclear or biological weapons, or on global catastrophic risks in general. We feel that we have a somewhat better understanding of the potential impact of philanthropy on individual risks, but do not have a sense of whether it would be better to focus on individual risks or on the general category of global catastrophic risks. This page focuses on the latter.

Who else is working on it?

A few small organizations, with budgets totaling a few million dollars a year, focus on the general topic of global catastrophic risks (as opposed to individual risks).


1. What is the problem?

We use the term “global catastrophic risk” on this page to refer to risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction).1 Such risks might include an asteroid striking earth, an extremely large volcanic eruption, extreme climate change, or, conceivably, a threat from a novel technology, such as intelligent machines, an engineered pathogen, or nanotechnology.2

We are not aware of any reliable estimates of the overall magnitude of these global catastrophic risks.3 Naive estimates suggest that the probabilities of “natural” global catastrophic risks such as those from extremely large asteroid impacts or volcanic eruptions are likely to be low.4 We would guess, though with very limited confidence, that risks of global catastrophe from novel human technology are significantly higher, and are likely to grow in the coming decades.5

Some prominent philosophers have argued that global catastrophic risks are especially worthy of attention, suggesting that cutting short the potentially extraordinarily long future of humanity would be worse than nearly any other outcome.6

Even if the potential impacts of a global risk are not large enough to significantly curtail human flourishing over centuries, it may still be a good fit for philanthropic attention, because other actors (governments, for-profits, etc.) may not have sufficient incentive to address highly uncertain, low-probability risks whose potential consequences would be widely shared.

We have discussed a number of potential global catastrophic risks in separate shallow investigations:

  • Anthropogenic climate change
  • Near-Earth asteroids
  • Large volcanic eruptions
  • Nuclear weapons
  • Antibiotic resistance
  • Biosecurity risks (e.g. pandemics, bioterrorism)
  • Risks from atomically precise manufacturing

On this page, we focus on groups and interventions that are explicitly aiming to address global catastrophic risks as a whole, rather than focusing on one particular type of risk. We do not have a strong view on whether work on particular risks or work across risks is likely to be more effective.


2. What are possible interventions?

We do not have a good sense of which interventions focused on the general category of global catastrophic risk (as opposed to a particular risk) might be most effective.

Some areas of focus might include, amongst others:7

  • Decreasing the likelihood of major global conflicts.
  • Improving resilience to unexpected shocks of all kinds, such as by increasing the amount of food and other supplies that are stockpiled globally or by strengthening support networks between countries.
  • Safeguarding people and knowledge to increase the chances that civilization could be rebuilt in the wake of a global catastrophe.
  • Regulating novel technology to avoid a potentially catastrophic deployment.
  • Supporting research to better understand the level and distribution of global catastrophic risks and the potential returns to specific or cross-cutting efforts to mitigate such risks.
  • Advocating for other actors to take greater action on global catastrophic risks.

We do not have a strong understanding of how additional funding in this area would translate into reductions in risk, or of the track record of existing organizations in this field.


3. Who else is working on this?

A few organizations are explicitly focused on reducing global catastrophic risks broadly. Such organizations include:8

  • Future of Humanity Institute (affiliated with the University of Oxford)
  • Cambridge Centre for the Study of Existential Risk (affiliated with the University of Cambridge)
  • Global Catastrophic Risk Institute
  • Machine Intelligence Research Institute
  • Institute for Ethics and Emerging Technologies
  • Lifeboat Foundation
  • Global Challenges Foundation

Our understanding is that these groups are quite small in terms of staff and budget, with the Future of Humanity Institute and Machine Intelligence Research Institute being the largest, each with an annual budget of about $1.1 million.9

The Skoll Global Threats Fund, which made grants worth roughly $10 million in 2011, primarily on climate change, has also supported work on other potential global catastrophic risks, including nuclear weapons and pandemics.10

We would guess that some government bodies are also tracking and devoting resources to addressing multiple risks, though we don’t have a sense of the magnitude of resources involved.


4. Questions for further investigation

Our research in this area has been relatively limited, and many important questions remain unanswered by our investigation.

Amongst other topics, further research on this cause might address:

  • Which interventions focused on the general category of global catastrophic risk might be most effective in reducing the total amount of global catastrophic risk?
  • Is it possible to generate more credible estimates of the overall likelihood of particular global catastrophic risks, or of the sum thereof?
  • Should a philanthropist concerned about global catastrophic risks focus on one or more particular risks, or on cross-cutting global catastrophic risk research, advocacy, or preparation?
  • What degree of ethical weight does the far future warrant? How should we understand the value of preserving the possibility of a very long future?

5. Our process

Our investigation to date has been rather cursory, mainly consisting of conversations with two individuals with knowledge of the field:

  • Carl Shulman, Research Associate, Future of Humanity Institute
  • Seth Baum, Executive Director, Global Catastrophic Risk Institute

In addition to these conversations, we reviewed documents that were shared with us.


6. Sources

  • Notes from a conversation with Carl Shulman on September 25, 2013
  • Notes from a conversation with Seth Baum on October 2, 2013
  • Barnosky et al. 2011
  • Bostrom 2013
  • Skoll Global Threats Fund 2011 Form 990
  • MIRI 2012 Form 990

1. “There are two commonly used definitions of “existential risk.” The first definition is a threat that could cause the extinction of all humans. The second definition is broader: an event that could greatly diminish the accomplishments of humanity and humanity’s descendants. The second definition is the one used by Nick Bostrom of the Future of Humanity Institute at Oxford.
There is also a range of definitions for the term “global catastrophic risk” (GCR). Many people use the term in a similar way to the second definition of existential risk. Some definitions of GCR include events, such as World War II, that cause a lot of damage but do not have a significant long-term effect on humanity’s development when considered from a very high level, or “astronomical” perspective.” Notes from a conversation with Seth Baum on October 2, 2013

“The term global catastrophic risk (GCR) has been given multiple definitions. Some use the term very broadly to refer to problems such as financial crises and disasters that kill many people but only a small percentage of humanity. More stringent definitions focus on threats that could kill a large portion of humans or disrupt industrial civilization. Existential risks are catastrophes that end humanity’s existence or have a drastic permanent disruptive effect on the future potential of human-derived civilization.” Notes from a conversation with Carl Shulman on September 25, 2013

2. “Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred. This conclusion is buttressed when we analyse specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma-ray bursts, and so forth: Empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so.
In contrast, our species is introducing entirely new kinds of existential risk—threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks—that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences—intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible.” Bostrom 2013 pgs 15-16.

3. “Dr. Baum is not aware of credible estimates of the total probability of a GCR occurring within the next decade or two. There have been a number of texts with qualitative sweeps of potential risks without careful probabilistic analysis, such as those by Martin Rees, Nick Bostrom, and Richard Posner. Martin Rees predicted a 50% chance of humanity surviving the next century, but that estimate is sensitive to many poorly characterized details.” Notes from a conversation with Seth Baum on October 2, 2013

“The value of existential risk reduction depends on the trajectory of risk. If the level of risk per period is constant over time, then the chance of civilization surviving will decay exponentially with time. The Stern Review (published by the British Government in 2006) of the economics of climate change used an assumption that every year humanity has a 1 in 1000 chance of going extinct. This amounts to a 0.1% discount rate, or a mean lifetime for civilization of about 1000 years. While the Stern Review still found that future generations total welfare would be many times greater than that of the present generation, this pessimism results in a drastically lower estimate of future welfare than one which allows for rates of annual risk to fall.
There are a number of reasons to expect the risk of extinction could fall over time. First, many such risks are associated with technological transitions, and as time passes and humanity approaches the limit of potential technologies there will be fewer technological surprises. Second, as Steven Pinker has argued, violence has declined and large-scale cooperation has increased over human history, and that trend may continue, which would reduce existential risk. Third, space colonization, which would separate humans by vast distances, would make it more difficult for a stochastic process to destroy the entire civilization. Finally, technological advances could create new ways to prevent disruptions. For example, surveillance or lie detection could improve, making it more difficult for rogue actors or mutual distrust to produce catastrophes.
Mr. Shulman expects Earth-derived civilization to still exist in a million years.
….
Note that there is a selection effect on many published predictions of existential risk: those who believe the risks are larger are more likely to write about them.
Many estimates of GCRs do not include an estimate of the risk of extinction, given that a catastrophe occurred. For example, there are estimates of the risk of an asteroid impact, but rarely is the risk of extinction from that asteroid impact estimated.” Notes from a conversation with Carl Shulman on September 25, 2013
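The constant-risk assumption attributed to the Stern Review in the notes above can be made concrete with a short calculation. The sketch below is purely illustrative: the 1-in-1000 annual figure comes from the quoted notes, and everything else follows arithmetically, since a constant annual extinction probability implies exponentially decaying survival odds and a mean civilizational lifetime equal to the reciprocal of the annual risk.

```python
# Illustrative sketch of the constant-risk assumption described above:
# a 1-in-1000 annual chance of extinction (the Stern Review figure quoted
# in the notes) implies exponentially decaying survival odds and a mean
# civilizational lifetime of roughly 1,000 years.
annual_risk = 0.001

def survival_probability(years: int, p: float = annual_risk) -> float:
    """Probability of surviving `years` under a constant annual risk p."""
    return (1.0 - p) ** years

mean_lifetime_years = 1 / annual_risk          # ~1,000 years under the constant-risk model
print(f"Survive next century:  {survival_probability(100):.3f}")   # ~0.905
print(f"Survive 1,000 years:   {survival_probability(1000):.3f}")  # ~0.368, about e^-1
print(f"Mean lifetime (years): {mean_lifetime_years:.0f}")
```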

4. “Of the four billion species estimated to have evolved on the Earth over the last 3.5 billion years, some 99% are gone. That shows how very common extinction is, but normally it is balanced by speciation. The balance wavers such that at several times in life’s history extinction rates appear somewhat elevated, but only five times qualify for ‘mass extinction’ status: near the end of the Ordovician, Devonian, Permian, Triassic and Cretaceous Periods. These are the ‘Big Five’ mass extinctions (two are technically ‘mass depletions’). Different causes are thought to have precipitated the extinctions (Table 1), and the extent of each extinction above the background level varies depending on analytical technique, but they all stand out in having extinction rates spiking higher than in any other geological interval of the last ~540 million years and exhibiting a loss of over 75% of estimated species.” Barnosky et al. 2011

If such mass extinctions (which seem to have typically played out over geological times scales) constitute global catastrophic risks from a human perspective, then a naïve estimate might be that such a global catastrophe occurs roughly every 100 million years, suggesting a baseline risk of around one in a million over the next century. We could readily believe estimates of “natural” rates of global catastrophic risks for humans that are an order of magnitude (or two) higher or lower than this estimate.
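To make the arithmetic behind this naive estimate explicit, the sketch below reproduces it under the assumptions stated above (five mass extinctions over roughly 540 million years, treated as a constant background rate). It is an illustration of the reasoning in the preceding paragraph, not an independent estimate.

```python
# Naive "natural" catastrophe rate implied by the figures above: five mass
# extinctions over ~540 million years, treated as a constant background rate.
mass_extinctions = 5
record_length_myr = 540                        # ~540 million years of Phanerozoic record

years_per_event = record_length_myr * 1e6 / mass_extinctions   # ~108 million years per event
prob_per_century = 100 / years_per_event                       # ~9e-7, i.e. roughly 1 in a million

print(f"One event per ~{years_per_event:,.0f} years")
print(f"Probability in the next century: ~{prob_per_century:.1e}")
```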

5. “In contrast, our species is introducing entirely new kinds of existential risk—threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks—that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences—intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible.” Bostrom 2013 pgs 15-16.

“Dr. Baum believes that on balance the risk from GCRs will increase over time as technology develops, but holds that belief with low confidence. There are many unknown details, and it is possible that the risks will increase or decrease. Biomedical research might lead to deadly viruses and/or it might produce incredibly effective vaccines or bio-surveillance systems that very effectively prevent the spread of disease. Looking specifically at emerging technologies, the risks seem to be growing on balance.
Another complicating factor when predicting the direction of risk is that smaller catastrophes could draw attention to an issue and help prevent future events of that type.” Notes from a conversation with Seth Baum on October 2, 2013

6. “A number of philosophers, including Peter Singer and Derek Parfit, have argued that a premature end to the human future would be very bad, because it is potentially extremely populous and is likely to have a higher standard of living than the present and past.
There are others who argue that preventing future people from coming into existence is not negative.
There are some cases where benefits to future generations are invoked in support of incurring current costs. Examples include concern for long-term pension stability, supporting research, development, and investment, or policies involving immediate sacrifices to prevent future climate change.
However, it is rare for actors to systematically apply the view espoused by Singer and Parfit, the overwhelming importance of future generations, across policy issues. In debates about climate change some economists such as William Nordhaus have argued against placing high value on future generations, since this would seem to require that current people should also make large sacrifices in other areas, such as high saving and investment rates, and that these sacrifices are unacceptable to current people.” Notes from a conversation with Carl Shulman on September 25, 2013

“But even this reflection fails to bring out the seriousness of existential risk. What makes existential catastrophes especially bad is not that they would show up robustly on a plot like the one in Figure 3, causing a precipitous drop in world population or average quality of life. Instead, their significance lies primarily in the fact that they would destroy the future. The philosopher Derek Parfit made a similar point with the following thought experiment:
‘I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
1. Peace.
2. A nuclear war that kills 99 per cent of the world’s existing population.
3. A nuclear war that kills 100 per cent.
2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater. The Earth will remain habitable for at least another billion years. Civilisation began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilised human history. The difference between 2 and 3 may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second’ (Parfit, 1984, pp. 453–454). To calculate the loss associated with an existential catastrophe, we must consider how much value would come to exist in its absence. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical.” Bostrom 2013 pgs 17-18, quoting Derek Parfit.

7. “There are a number of precautions that humanity could take now or in the future to protect against a wide range of GCRs. Such precautions include:

  • Increasing grain stores
  • Building bunkers
  • Space colonization
  • Encouraging political transparency
  • Promoting altruism and global humanitarianism (including not privileging the well-being of people of one’s own country over others)
  • Promoting goodwill between nations
  • Energy conservation (both to reduce climate change and competition among nations)
  • Increasing local self-sufficiency

The general consensus is that grain stores or bunkers on Earth are likely to be more cost-effective than increasing the number of people in space, at least over time scales of decades and possibly up to time scales of hundreds of millions of years. Beyond that, space colonization will become necessary. Preparations will likely be more politically feasible if they can be justified as measures for more common, smaller-scale disasters such as hurricanes and droughts.” Notes from a conversation with Seth Baum on October 2, 2013

8. “The community of people and organizations that explicitly identify as working on existential risk includes:

  • Nick Bostrom and the organization that he runs, the Future of Humanity Institute (FHI).
  • The Global Catastrophic Risk Institute (GCRI), founded by Seth Baum and colleagues. [The GCRI’s research is not strictly limited to existential risk, but places strong weight on it.]
  • The Cambridge Centre for the Study of Existential Risk, which is currently in development. It has a board of directors and one employee, and is currently applying for grants from academic funding bodies to hire full-time staff. Huw Price and Martin Rees are two of the co-founders.
  • The Machine Intelligence Research Institute.
  • The Institute for Ethics and Emerging Technology has an existential risk/emerging risk category.

Within the Effective Altruist movement, there are a number of people who put priority on the future but may not be doing direct work in this field.” Notes from a conversation with Carl Shulman on September 25, 2013

“Some organizations explicitly working on crosscutting GCRs issues include:

  • The Future of Humanity Institute.
  • The Cambridge Centre for the Study of Existential Risk. The founding members of the organization have been using their influence and the Cambridge brand to pull together important people and generate attention for their project.
  • The Global Catastrophic Risk Institute.
  • The Lifeboat Foundation.
  • The Skoll Global Threats Fund.
  • The Global Challenges Foundation, a recently founded organization.”

Notes from a conversation with Seth Baum on October 2, 2013

9. “The Future of Humanity Institute’s budget, stemming primarily from grants, the James Martin School, and some private donations, is currently $1.1 MM, including all overhead charged by Oxford University, conferences, and other programs. There are 12 full time employee equivalents, plus research associates. FHI receives academic grants reliably enough to continue to exist, but the grants often require that work be moved in the direction of funders’ interests relative to its core research agenda. Such grants allow FHI to work on useful projects, but not as useful as they might otherwise do with more unrestricted funding, and the work of the Institute is also impacted by the time and human cost of repeatedly applying for grants. The Cambridge Centre for the Study of Existential Risk has 1 employee shared with FHI and other possible employees ready for when it gets funding.” Notes from a conversation with Carl Shulman on September 25, 2013

MIRI 2012 Form 990

10. Skoll Global Threats Fund 2011 Form 990 pg 23.
