LessWrong dev & admin as of July 5th, 2022.
Among any big company's investors there are those who want it to stay lean, and those who want it to expand. When Meta posts poor financial results (even if they have more to do with the state of the economy than with whether Meta's business decisions were any good), that shifts more firepower toward those who want to slim things down, and so Meta is forced to make cuts.
Mark Zuckerberg has majority voting rights as a Meta shareholder, so unless this was meant some way other than literally, it can't be the explanation here.
Ban evasion is against the rules. Also, Reversed Stupidity Is Not Intelligence.
Either way such agents have no more reason to cooperate with each other than with us (assuming we have any relevant power).
Conflict is expensive. If you have an alternative (i.e. performing a values handshake) which is cheaper, you'd probably take it? (Humans can't do that, for reasons outlined in Decision theory does not imply that we get to have nice things.)
Ping us on intercom so that we don't forget :)
I'd be very interested in a write-up, especially if you have receipts for pessimism which seems to be poorly calibrated, e.g. based on evidence contrary to prior predictions.
Something I find interesting is the relationship between believing that the marginal researcher's impact in a capabilities role is likely to be negligible, and holding a position anywhere on the spectrum other than "this is obviously a terrible idea".
On one hand, that seems obvious, maybe even logically necessary. On the other hand, I think that the impact of the marginal researcher on capabilities research has a much higher mean than median (impact is heavy-tailed), and this may be even more true for those listening to this advice. I also think the arguments for working on capabilities seem quite weak:
In general, I think the balance of considerations quite strongly favors not working on capabilities (in the narrower sense, rather than the "any ML application that isn't explicitly alignment" sense). The experts themselves seem to be largely split between "obviously bad" and "unclear, balance of trade-offs", and the second set seem to mostly be conditional on beliefs like:
I recognize that "We think this is a hard question!" is not necessarily a summary of the surveyed experts' opinions, but I would be curious to hear the "positive" case for taking a capabilities role implied by it, assuming there's ground not covered by the opinions above.
And I think the second-order effects, like whatever marginal impact your decision has on the market for ML engineers, are pretty trivial in this case.
He's saying the second.
Hm, yeah, that sure does seem like it ought to behave like that.
Interesting analysis. Some questions and notes:
How are you looking at "researchers" vs. "engineers"? At some organizations, e.g. Redwood, the boundary is very fuzzy - there isn't a sharp delineation between anyone whose job it is primarily to "think of ideas" vs. "implement ideas that researchers come up with", so it seems reasonable to count most of their technical staff as researchers.
FAR, on the other hand, does have separate job titles for "Research Scientist" (4 people) vs. "Research Engineer" (5 people), though they do also say they "expect everyone on the project to help shape the research direction".
Some of the other numbers seem like overestimates.
CHAI has 2 researchers and 6 research fellows, and only 2 (maybe 3) of the research fellows are doing anything recognizable as alignment research. (Not extremely confident; didn't spend a lot of time digging for details for those that didn't have websites. But generally not optimistic.) One of the researchers is Andrew Critch, who is one of the two people at Encultured. If you throw in Stuart Russell that's maybe 6 people, not 30.
FHI has 2 people in their AI Safety Research Group. There are also a couple people in their macrostrategy research group it wouldn't be crazy to count. Everybody else listed on the page either isn't working on technical Alignment research or is doing so under another org also listed here. So maybe 4 people, rather than 10?
I don't have very up-to-date information but I would be pretty surprised if MIRI had 15 full-time research staff right now.
Also, I think that every single person I counted above has at least a LessWrong account, and most also have Alignment Forum accounts, so a good chunk are probably double-counted.
On the other hand, there are a number of people going through SERI MATS who probably weren't counted; most of them will have LessWrong accounts but probably not Alignment Forum accounts (yet).
I'd be very happy to learn that there were 5 people at Meta doing something recognizable as alignment research; the same for Google Brain. Do you have any more info on those?
Putting aside the concerns about potential backfire effects of unilateral action, calling the release of gene drive mosquitoes "illegal" is unsubstantiated. What that claim actually cashes out to is "every single country where Anopheles gambiae are a substantial vector for the spread of malaria has laws that narrowly prohibit the release of mosquitoes". The alternative interpretation, that "every single country will stretch obviously unrelated laws as far as necessary to throw the book at you if you do this", may be true, but isn't very interesting, since that can be used as a fully general argument against doing anything ever.
Which I'm inclined to agree with, though notably I haven't actually seen a cost/benefit analysis from any of those sources.
Though you're more likely to have the book thrown at you for some things than for others, and it'd be silly to deny that we have non-zero information about what those things are in advance. I still think the distinction is substantial.