RobertM

LessWrong dev & admin as of July 5th, 2022.


Comments

Among any big company's investors there's those who want it to stay lean, and those who want it to expand. When Meta posts poor financial results (even if they have more to do with the state of the economy than whether Meta's business decisions were any good), that shifts more firepower towards those who want to slim things down, and so Meta is forced to make cuts.

 

Mark Zuckerberg has majority voting rights as a Meta shareholder, so unless "forced" is meant in some way other than literally, this can't be the explanation here.

Moderator comment:

Ban evasion is against the rules.  Also, Reversed Stupidity Is Not Intelligence.

Either way such agents have no more reason to cooperate with each other than with us (assuming we have any relevant power).

Conflict is expensive.  If you have an alternative (e.g. performing a values handshake) which is cheaper, you'd probably take it?  (Humans can't do that, for reasons outlined in Decision theory does not imply that we get to have nice things.)
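(A toy numerical sketch of the "conflict is expensive" point; the numbers R, c, and p below are illustrative assumptions, not anything from the linked post.)

```python
# Toy illustration (made-up numbers): why a "values handshake" can beat conflict.
# Two agents contest resources worth R. Conflict destroys a fraction c of the
# resources and each side wins with probability p; a handshake just splits R
# in proportion to the same win probabilities, without the destruction.

R = 100.0   # total contested resources (arbitrary units)
c = 0.3     # fraction of resources destroyed by fighting (assumed)
p = 0.5     # each agent's probability of winning a conflict (assumed symmetric)

expected_conflict_payoff = p * (1 - c) * R   # fight: 0.5 * 0.7 * 100 = 35 each
handshake_payoff = p * R                     # bargain in proportion to p: 50 each

print(expected_conflict_payoff, handshake_payoff)
# Whenever c > 0, both agents do strictly better under the handshake,
# so agents who *can* make such deals have a reason to prefer them to conflict.
```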

Ping us on intercom so that we don't forget :)

I'd be very interested in a write-up, especially if you have receipts for pessimism which seems to be poorly calibrated, e.g. based on evidence contrary to prior predictions.

Something I find interesting is the relationship between believing that the marginal researcher's impact, if they take a capabilities role, is likely to be negligible, and holding a position anywhere on the spectrum other than "this is obviously a terrible idea".

On one hand, that seems obvious, maybe even logically necessary.  On the other hand, I think that the impact of the marginal researcher on capabilities research has a much higher mean than median (impact is heavy-tailed), and this may be even more true for those listening to this advice.  I also think the arguments for working on capabilities seem quite weak:

  • "up-skilling"
    • My first objection is that it's not clear why anybody needs to up-skill in a capabilities role before switching to work on alignment.  Most alignment organizations don't have bureaucratic requirements like "[x] years of experience in a similar role", and being an independent researcher obviously has no requirements whatsoever.  The actual skills that might make one more successful at either option... well, that leads to my second objection.
    • My second objection is that "capabilities" is a poorly-defined term.  If one wants to up-skill in ML engineering by e.g. working at an organization which only uses existing techniques to build consumer features, I expect this to have approximately no first-order[1] risk of advancing the capabilities frontier.  However, this kind of role by definition doesn't help you up-skill in areas like "conduct research on unsolved (or worse, unspecified) problems".  To the extent that a role does exercise that kind of skill, that role becomes correspondingly riskier.  tl;dr: you can up-skill in "python + ML libraries" pretty safely, as long as the systems you're working on don't themselves target inputs to AI (e.g. cheaper chips, better algorithms, etc), but not in "conduct novel research".
  • "influence within capabilities organization"
    • I think the median outcome of an early-career alignment researcher joining a capabilities org and attempting to exert influence to steer the organization in a more alignment-friendly direction is net-negative (though I'm pretty uncertain).  I suspect that for this to be a good idea, it needs to be the primary focus of the person going into the organization, and that person needs to have a strong model of what exactly they're trying to accomplish and how they're going to accomplish it, given the structure and political landscape of the organization that they'll be joining.  If you don't have experience successfully doing this in at least one prior organization, it's difficult to imagine a justified inside-view expectation of success.
  • "connections"
    • See "influence" - what is the plan here?  Admittedly connections can at least preserve some optionality when you leave, but I don't think I've actually seen anyone argue the case for how valuable they expect connections to be, and what their model is for deriving that.

 

In general, I think the balance of considerations quite strongly favors not working on capabilities (in the narrower sense, rather than the "any ML application that isn't explicitly alignment" sense).  The experts themselves seem to be largely split between "obviously bad" and "unclear, balance of trade-offs", and the second group's views seem mostly to be conditional on beliefs like:

  • "I don’t think it’s obvious that capabilities work is net negative"
  • "I don’t think on the margin AI risk motivated individuals working in these spaces would boost capabilities much"
  • other confusions or disagreements around the category of "capabilities work"
  • what I think are very optimistic beliefs about the ability of junior researchers to exert influence over large organizations

I recognize that "We think this is a hard question!" is not necessarily a summary of the surveyed experts' opinions, but I would be curious to hear the "positive" case for taking a capabilities role implied by it, assuming there's ground not covered by the opinions above.

  1. ^

    And I think the second-order effects, like whatever marginal impact your decision has on the market for ML engineers, are pretty trivial in this case.

Hm, yeah, that sure does seem like it ought to behave like that.

Interesting analysis.  Some questions and notes:

How are you looking at "researchers" vs. "engineers"?  At some organizations, e.g. Redwood, the boundary is very fuzzy - there isn't a sharp delineation between people whose job is primarily to "think of ideas" and people who "implement ideas that researchers come up with" - so it seems reasonable to count most of their technical staff as researchers.

FAR, on the other hand, does have separate job titles for "Research Scientist" (4 people) vs. "Research Engineer" (5 people), though they do also say they "expect everyone on the project to help shape the research direction".

Some of the other numbers seem like overestimates.

CHAI has 2 researchers and 6 research fellows, and only 2 (maybe 3) of the research fellows are doing anything recognizable as alignment research.  (Not extremely confident; didn't spend a lot of time digging for details for those that didn't have websites.  But generally not optimistic.)  One of the researchers is Andrew Critch, who is one of the two people at Encultured.  If you throw in Stuart Russell that's maybe 6 people, not 30.

FHI has 2 people in their AI Safety Research Group.  There are also a couple of people in their macrostrategy research group whom it wouldn't be crazy to count.  Everybody else listed on the page either isn't working on technical alignment research or is doing so under another org also listed here.  So maybe 4 people, rather than 10?

I don't have very up-to-date information, but I would be pretty surprised if MIRI had 15 full-time research staff right now.

Also, I think that every single person I counted above has at least a LessWrong account, and most also have Alignment Forum accounts, so a good chunk are probably double-counted.

On the other hand, there are a number of people going through SERI MATS who probably weren't counted; most of them will have LessWrong accounts but probably not Alignment Forum accounts (yet).

I'd be very happy to learn that there were 5 people at Meta doing something recognizable as alignment research; the same for Google Brain.  Do you have any more info on those?

Putting aside the concerns about potential backfire effects of unilateral action[1], calling the release of gene drive mosquitoes "illegal" is unsubstantiated.  What that claim actually cashes out to is "every single country where Anopheles gambiae are a substantial vector for the spread of malaria has laws that narrowly prohibit the release of such mosquitoes".  The alternative interpretation, that "every single country will stretch obviously unrelated laws as far as necessary to throw the book at you if you do this", may be true, but isn't very interesting, since that can be used as a fully general argument against doing anything ever.[2]

  1. ^

    Which I'm inclined to agree with, though notably I haven't actually seen a cost/benefit analysis from any of those sources.

  2. ^

    Though you're more likely to have the book thrown at you for some things than for others, and it'd be silly to deny that we have non-zero information about what those things are in advance.  I still think the distinction is substantial.
