
Open Philanthropy AI Worldviews Contest

Table of contents

Prize Conditions and Amounts

Eligibility

Submission

Judging Process and Criteria

Questions?

Update: We’ve now chosen the winning entries; you can read them here.

We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.

The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.

The contest deadline of May 31, 2023 has now passed. All winners have been selected and notified; we will publicly announce them by the end of September.

 

Prize Conditions and Amounts

Essays should address one of these two questions:

Question 1: What is the probability that AGI is developed by January 1, 2043?[1]

Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?

Essays should be clearly targeted at one of the questions, not both.


Winning essays will be those that most substantively inform the thinking of a panel of Open Phil employees. An essay could do this in several ways:

  • An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.
  • An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate.
  • An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).
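As a hypothetical numeric illustration of the second point (the figures below are invented for this sketch, not drawn from the contest materials), two probability distributions can agree on the headline answer to Question 1 while allocating that probability very differently within the window:

```python
# Hypothetical illustration: two distributions over AGI arrival timing,
# expressed in integer percentage points to avoid floating-point noise.
# Both give the same headline P(AGI by 2043), but they place that
# probability very differently within the window.

dist_a = {"by_2030": 5, "2031_2043": 25, "after_2043": 70}
dist_b = {"by_2030": 20, "2031_2043": 10, "after_2043": 70}

p_by_2043_a = dist_a["by_2030"] + dist_a["2031_2043"]  # 30
p_by_2043_b = dist_b["by_2030"] + dist_b["2031_2043"]  # 30
assert p_by_2043_a == p_by_2043_b == 30  # identical central estimates

# Yet the near-term probability differs fourfold, which could change
# priorities for work that must be in place before 2030.
ratio = dist_b["by_2030"] / dist_a["by_2030"]
print(ratio)  # 4.0
```

An essay that shifted a panelist from something like `dist_a` to `dist_b` would leave the central estimate untouched while still carrying real strategic implications.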

We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI[2] broadly represents the views of the panel.

Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.


We will award a total of six prizes across three tiers:

  • First prize (two awards): $50,000
  • Second prize (two awards): $37,500
  • Third prize (two awards): $25,000

 

Eligibility

  • Submissions must be original work, published for the first time on or after September 23, 2022, and before 11:59 pm EDT on May 31, 2023.
  • All authors must be 18 years or older.
  • Submissions must be written in English.
  • No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).
  • Open Phil employees and their immediate family members are ineligible.
  • The following groups are also ineligible:
    • People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by law
    • People who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)
  • You can submit as many entries as you want, but you can only win one prize.
  • Co-authorship is fine.
  • See here for additional details and fine print. 

 

Submission

Use this form to submit your entries. We strongly encourage (but do not require) that you post your entry on the EA Forum and/or LessWrong. However, if your essay contains infohazardous material, please do not post the essay publicly. 

Note that submissions will be hosted on a Google server and viewable by Open Phil staff. We don’t think that (m)any submissions will warrant more security than this. However, if you believe that your submission merits a more secure procedure, reach out to AIWorldviewsContest@openphilanthropy.org, and we will make appropriate arrangements.

 

Judging Process and Criteria

There will be three rounds of judging.

Round 1: An initial screening panel will evaluate all submitted essays by blind grading to determine whether each essay is a good-faith entry. All good-faith entries will advance to Round 2.

Round 2: Out of the good-faith entries advancing from Round 1, a panel of judges will select at least twenty-four finalists.

Round 3: Out of the finalists advancing from Round 2, the judges will select two first-place entries, two second-place entries, and two third-place entries.

In Rounds 2 and 3, the judges will make their decision using the criteria described below:

  • The extent to which an essay uncovers considerations that change a judge’s beliefs about the probability of AGI arriving by 2043 or the threat that AGI systems might pose. (67%)
  • The extent to which an essay clarifies the underlying concepts that ought to inform one’s views about the probability of AGI arriving by 2043 or the threat that AGI systems might pose. (33%)

 

Questions?

Please email AIWorldviewsContest@openphilanthropy.org with any questions, comments, or concerns.


Footnotes
1 By “AGI” we mean something like “AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less.” AGI is a notoriously thorny concept to define precisely. What we’re actually interested in is the potential existential threat posed by advanced AI systems. To that end, we welcome submissions that are oriented around related concepts, such as transformative AI, human-level AI, or PASTA.
2 This includes the research published on our website, as well as material from Ajeya Cotra, Holden Karnofsky, Joe Carlsmith, and Tom Davidson.
