This post aims to give blog readers and followers of the Open Philanthropy Project an opportunity to publicly raise comments or questions about the Open Philanthropy Project or related topics (in the comments section below). As always, you’re also welcome to email us at info@openphilanthropy.org if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can see our previous open thread here.

Comments

Are there ways for those outside the organization to contribute to the Open Philanthropy Project’s work?

I am a recent college graduate currently working as a security analyst. While my focus right now is “building career capital,” I’d really like to stay close to this type of work in my free time, if for no other reason than to learn and keep myself motivated.

Kelvin, thanks for your interest! The best ways to follow along with our work are listed here. (We generally don’t take volunteers.)

“We’ve also updated in favor of the “expert” model of philanthropy described in this blog post.”

What were 1-2 of the driving factors behind updating to the expert model, rather than a broad model?

Arik, we’ve now experimented a fair amount with each model - “expert” philanthropy via Program Officers who specialize in one area, and “broad” philanthropy via the policy work mentioned above and a variety of one-off grants. We’ve subjectively felt, so far, that the former model has generated more exciting grants, and that the writeups for these grants have indicated a level of understanding and deep context that seems important and hard to replicate under the “broad” model.

What are a few constraints on Open Phil, as you see them? i.e. “if we had X, we’d be able to make more progress”

Arik, I think we’d make more progress in our mission if there were more potential grantees - and other people in the fields we work in - who were highly value-aligned (as well as good at communicating with us). Another answer would be that I feel we’re expanding our staff and giving at a prudent rate, so that even though we aren’t yet where we eventually want to be, I don’t have a particular desire to be moving faster on either dimension than we currently are.

How often do you explore and consider completely new focus areas, and is there any process for doing so?

Arik, see answer to your next question.

Arik, those documents are not actively updated; they’re a record of our past decision-making. We don’t have a particular date at which we’re hoping to revisit our cause selection overall, though at some point we will (and in the meantime, we can always change focus areas using an informal process, and have done so to some degree).

Thanks for all the answers, Holden. I appreciate it!

(x-post Facebook)

Great to hear about Open Phil’s first investment. It also raises a number of questions for Open Phil staff:

1. Would you consider this to be philanthropic? (i.e. Did you take worse terms than other investors? Or are there other reasons it’s philanthropic?)
2. Was this part of a round of funding? Who advised you on terms? What was your role in the round? (i.e. Can you share if you constituted less than 10%, less than 50%, or more than 50%?)
3. Did you take into account the possible financial return? To what extent did it play a role in your decision?
4. What was your interaction with the other investors? Are they at all troubled by your reasons for getting involved, and the limits that might put on the company?
5. Are other investment opportunities on your radar?

The blog post makes clear that many of these may not be answerable, but I thought I’d throw them out there anyway.

Josh, as noted in the writeup, there are some points we’ve been asked not to disclose. Here are some answers we can give:

#1, #3 - our main reason for recommending the investment was to improve farm animal welfare. We aren’t discussing the terms of the investment.

#5 - not at this time.

This post is missing the “open threads” tag, hence it fails to show up at http://www.openphilanthropy.org/blog/open-threads

Thanks Vipul! Fixed.

Do you have publicly available, or would you be willing to make publicly available, the criteria that need to be met for a grant to be eligible to be given as a discretionary (formerly “no-process”) grant, as well as an explanation of how discretionary grantmaking differs from your usual grantmaking process?

For instance, I got the impression that discretionary grants cannot be used in cases of conflict of interest, and that there is also some money cap on individual discretionary grants and total discretionary grants by program officer, but I wasn’t able to locate any place where you’d said these things.

Hi Vipul,

We haven’t published detailed information about the discretionary grantmaking process, though pages about discretionary grants describe the aims of the process.

The way it works is that some staff members have a discretionary “budget” and can use the discretionary process for grants (within their focus area) adding up to that figure or less. Discretionary grants are subject to the same approval requirements as other grants (Cari and I need to approve), but the investigator submits only a brief (1-paragraph or so) explanation of their reasoning along with any risks/downsides, rather than completing a full internal writeup, and we have a strong default to approving the grant in the absence of serious risks/downsides. The overall aim is for us to be able to move forward on relatively small and low-risk grants, based purely on the judgment of a single staff member and with minimal delay.

Thank you, Holden. Is it also the case that grants where the program officer might have a conflict of interest cannot be made through the discretionary grantmaking process? That’s the impression I got when looking through the grants database, but I don’t have a reference offhand.

Hi Vipul,

There is no such restriction. It would need to be disclosed, and the usual decision-makers (Cari and myself) could stop a grant from moving forward on that basis or any other basis.

Your “Farm Animal Welfare” grantmaking area is listed as being under U.S. Policy, though for the past year it has included a lot of non-U.S. activity. Do you plan to restructure the grantmaking areas to put it outside of “U.S. Policy”?

http://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare

Hi Vipul, we don’t plan to do that at this time. The “U.S. Policy” designation doesn’t have much practical importance here except to help people understand how we came to the focus area in the first place, and to some extent to assist with site navigation.

I’m curious about Open Phil’s take on a question raised by Ben Hoffman here: http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/.

“Why not buy people like Eliezer Yudkowsky, Nick Bostrom, or Stuart Russell a seat on OpenAI’s board?”

More broadly, it seems there exist people who (i) care a lot about AI safety, (ii) have enough of a relationship with Open Phil staff to communicate effectively with them, and (iii) have greater background in AI safety than Holden does.

Were these sorts of people considered as serious candidates for an OpenAI board seat? If so, why weren’t they selected?

Hi Milan, we did discuss different possibilities for who would take the seat, and considered the possibility of someone outside Open Phil, but I’m going to decline to go into detail on how we ultimately made the call. I will note that I’ve been looping in other AI safety folks (such as those you mention) pretty heavily as I’ve thought through my goals for this partnership, and I recognize that there are often arguments for deferring to their judgment on particular questions.

Can you comment on how much of the $30 million grant to OpenAI will be used for long-term AI safety/alignment research (the kind that Paul Christiano is doing), vs. short-term safety research (things like adversarial examples), vs. general AI capability research? I’ve noticed that Paul seems to be the first and only person OpenAI has hired to focus on long-term alignment research, and that happened in January 2017, before the Open Phil grant was made. Is OpenAI actively trying to recruit more people for long-term alignment research?

Hi Wei, a few comments on this:

  • The grant is general support, so there is no set allocation among the categories you list.
  • I don’t understand the distinction you are drawing between “long-term” and “short-term” research. If you mean research that is vs. isn’t relevant to the biggest possible risks from AI, I think both types of research you just mentioned are relevant.
  • Your statement about Paul is inaccurate. Dario Amodei focuses on safety research and co-authored the recent paper with Paul. OpenAI recently hired a third researcher to focus on safety (not announced yet).
  • OpenAI is currently recruiting more people for safety-related research.
  • While emphasizing and encouraging safety-related research is definitely a goal of our partnership, (a) that doesn’t mean additional hires from this point should be attributed to our influence; (b) this isn’t the only goal of our partnership. More on the latter here.

Thanks for your answers. I understand there is no set allocation, so I was asking about your expectation (subjective belief) of how the money will be used.

The distinction between “short-term” and “long-term” I was drawing is between solving problems that already exist in current AI systems, such as adversarial examples, vs trying to predict and work on future problems that are not easily visible today, especially ones that may only occur once we achieve superintelligence. Examples of the latter would be laying out conceptual and theoretical foundations for highly reliable agents and scalable AI control (scalable in the sense of working regardless of how powerful the AI becomes). I think we can expect AI research groups to work on the former regardless of outside influence, but may need a push to do more of the latter, hence my question.

I don’t have much sense of how much of OpenAI’s research will be safety-focused. I think OpenAI wants to do safety research to the extent that it can find great people who fit with the organization to work on it, but over the next few years that could be consistent with a big ramp-up or no ramp-up. (I do expect OpenAI to put more than 50% of its resources into capabilities research, and I think this is sensible and important to the case for our grant.)

Re: short- vs. long-term, thanks for clarifying. I have substantial disagreements with these premises.

First, I don’t agree with the spirit of “we can expect AI research groups to work on the former regardless of outside influence.” There are a huge number of potential research directions in the field of AI, and I think that encouraging safety-relevant work is likely to be important and effective on anything but the very hottest topics.

Second, I think that there are many problems that affect today’s AI systems in small, far-fetched, and/or “toy” ways, but that could be illustrations or starting points for work on long-term problems. Simultaneously, I think that work that addresses literally _no_ problems observable in _any_ systems today is quite unlikely to be useful even in the long run. Because of these two points, I think the distinction between “optimized for today’s systems” and “optimized for the long run” is quite fuzzy, and for many research paths (including both that you name), it’s very hard to predict the degree to which long-run importance will exceed short-run importance. As an illustration of this, you cite Paul as working on “long-run” concerns, but his recent focus has been (with Dario and others) on learning from human feedback, which led to a paper applying the method to today’s systems.

I think there is some research that clearly has almost no relevance to today’s systems, and some that has such pressing relevance that it seems it would get done regardless of any implications for long-run considerations. But I don’t think either of the examples you gave fits in either category, and I think the line you are trying to draw is quite fuzzy overall.

I agree that long-term vs. short-term is a drastic simplification, but it still seems a good way to organize one’s thinking on some of these issues. Work that directly addresses issues in currently deployed systems or near-future systems is more likely to be adequately funded relative to the social impact of the work, through standard market mechanisms/incentives such as liability and reputation, than work that is motivated by longer-term concerns (for which market incentives are more likely to be insufficient). The fact that various complications and fuzziness exist doesn’t seem to change this basic premise. But I appreciate that the fuzziness makes it hard to draw a line and say which work counts as short-term or long-term, and hence how much money is spent on each.

If OpenAI wants to do more safety research, and the main constraint is suitable talent, I’d like to suggest that it make clearer what kind of safety-focused researchers it’s willing to hire. Currently the OpenAI jobs page emphasizes wanting to hire deep learning experts and people willing to switch to deep learning. It doesn’t appear very welcoming to someone who, for example, lacks a strong deep learning background but wants to help work on Paul’s meta-execution idea (which is not directly related to deep learning and doesn’t require much deep learning as background knowledge). Would OpenAI want to hire someone like that (assuming they have other relevant skills for the safety work they want to do)?

Hi Wei,

I think OpenAI is generally most interested in deep learning research, but for anyone looking for more specifics, I’d suggest just going ahead and applying, or reaching out to Paul directly.

I agree that there is a meaningful spectrum from less to more “likely to be adequately funded … through standard market mechanisms/incentives such as liability and reputation.” But I don’t really think the adversarial work scores very high on this. I do think it is more likely to get done without a “special push” than Paul’s work, but that has more to do with its generally better fit with the culture of deep learning research.

I don’t actually know anyone who wants to apply to OpenAI in order to do research into meta-execution, or more generally into gaining a better understanding of what Paul calls “deliberation”, which will be needed to ensure AI alignment (https://agentfoundations.org/item?id=1534). And that’s part of the problem that I see. As Victoria Krakovna says, there’s a chicken-and-egg problem in AI safety research (http://slatestarcodex.com/2017/07/08/two-kinds-of-caution/#comment-520373), with lack of funding and lack of talent feeding into each other. I think certain parts of AI safety have even more of this problem than others, for reasons of market incentives and distance from existing research culture (including academic programs that train graduate students). This seems to me like one of the main problems that could cause a bad outcome from AI, if left uncorrected. My thinking here is that, since OpenAI has already hired Paul, it seems to be in a unique position to make a significant dent in this problem by making clear that it wants to hire more people to work on the parts of Paul’s approach (or AI safety in general) that are longer-term and/or further away from existing research culture, so as to encourage more people to go into those areas.
