elifland

https://www.elilifland.com/. You can give me anonymous feedback here.

Comments

I think you're prompting the model with a slightly different format from the one described in the Anthropic GitHub repo here, which says:

Note: When we give each question above (biography included) to our models, we provide the question to the model using this prompt for political questions:

<EOT>\n\nHuman: {question}\n\nAssistant: I believe the better option is

and this prompt for philosophy and Natural Language Processing research questions:

<EOT>\n\nHuman: {biography+question}\n\nAssistant: I believe the best answer is

I'd be curious to see if the results change if you add "I believe the best answer is" after "Assistant:"
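For concreteness, here is a minimal sketch of the two templates as quoted above, so it's easy to check which suffix a given prompt ends with (the function names and the placeholder values are mine, not from the repo):

```python
# The two prompt templates quoted from the Anthropic repo above.
# {question} and {biography} are placeholders filled in per item.
POLITICAL_TEMPLATE = (
    "<EOT>\n\nHuman: {question}\n\nAssistant: I believe the better option is"
)
PHIL_NLP_TEMPLATE = (
    "<EOT>\n\nHuman: {biography}{question}\n\nAssistant: I believe the best answer is"
)

def build_political_prompt(question: str) -> str:
    """Prompt format used for the political questions."""
    return POLITICAL_TEMPLATE.format(question=question)

def build_phil_nlp_prompt(biography: str, question: str) -> str:
    """Prompt format used for philosophy and NLP research questions."""
    return PHIL_NLP_TEMPLATE.format(biography=biography, question=question)
```

The point of the suggested check is just whether appending the "I believe the best answer is" suffix after "Assistant:" changes the measured results.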

Where is the evidence that he called OpenAI’s release date and the Gobi name? All I see is a tweet claiming the latter but it seems the original tweet isn’t even up?

Mostly agree. For some more starting points, see posts with the AI-assisted alignment tag. I recently did a rough categorization of strategies for AI-assisted alignment here.

If this strategy is promising, it likely recommends fairly different prioritisation from what the alignment community is currently doing.

Not totally sure about this, my impression (see chart here) is that much of the community already considers some form of AI-assisted alignment to be our best shot. But I'd still be excited for more in-depth categorization and prioritization of strategies (e.g. I'd be interested in "AI-assisted alignment" benchmarks that different strategies could be tested against). I might work on something like this myself.

Agree directionally. I made a similar point in my review of "Is power-seeking AI an existential risk?":

In one sentence, my concern is that the framing of the report and decomposition is more like “avoid existential catastrophe” than “achieve a state where existential catastrophe is extremely unlikely and we are fulfilling humanity’s potential”, and this will bias readers toward lower estimates.

Meanwhile Rationality A-Z is just super long. I think anyone who's a longterm member of LessWrong or the alignment community should read the whole thing sooner or later – it covers a lot of different subtle errors and philosophical confusions that are likely to come up (both in AI alignment and in other difficult challenges)

My current guess is that the meme "every alignment person needs to read the Sequences / Rationality A-Z" is net harmful. They seem to have been valuable for some people, but I think many people can contribute to reducing AI x-risk without reading them. I think the current AI risk community overrates them because it is strongly selected for people who liked them.

Some anecdotal evidence in favor of my view:

  1. To the extent you think I'm promising for reducing AI x-risk and have good epistemics, I haven't read most of the Sequences. (I have liked some of Eliezer's other writing, like Intelligence Explosion Microeconomics.)
  2. I've been moving some of my most talented friends toward work on reducing AI x-risk and similarly have found that while I think all have great epistemics, there's mixed reception to rationalist-style writing. e.g. one is trialing at a top alignment org and doesn't like HPMOR, while another likes HPMOR, ACX, etc.

Written and forecasted quickly, numbers are very rough. Thomas requested I make a forecast before anchoring on his comment (and I also haven't read others).

I’ll make a forecast for the question: What’s the chance that a set of >=1 warning shots counterfactually tips the scales between doom and a flourishing future, conditional on a default of doom without warning shots?

We can roughly break this down into:

  1. Chance >=1 warning shots happens
  2. Chance alignment community / EA have a plan to react well to a warning shot
  3. Chance alignment community / EA have enough influence to get the plan executed
  4. Chance the plan implemented tips the scales between doom and flourishing future

I’ll now give rough probabilities:

  1. Chance >=1 warning shots happens: 75%
    1. My current view on takeoff is closer to Daniel Kokotajlo-esque fast-ish takeoff than Paul-esque slow takeoff. But I’d guess even in the DK world we should expect some significant warning shots, we just have less time to react to them.
    2. I’ve also updated recently toward thinking the “warning shot” doesn’t necessarily need to be that accurate of a representation of what we care about to be leveraged. As long as we have a plan ready to react to something related to making people scared of AI, it might not matter much that the warning shot accurately represented the alignment community’s biggest fears.
  2. Chance alignment community / EA have a plan to react well to a warning shot: 50%
    1. Scenario planning is hard, and I doubt we currently have very good plans. But I think there are a bunch of talented people working on this, and I’m planning on helping :)
  3. Chance alignment community / EA have enough influence to get the plan executed: 35%
    1. I’m relatively optimistic about having some level of influence, seems to me like we’re getting more influence over time and right now we’re more bottlenecked on plans than influence. That being said, depending on how drastic the plan is we may need much more or less influence. And the best plans could potentially be quite drastic.
  4. Chance the plan implemented tips the scales between doom and flourishing future, conditional on doom being default without warning shots: 5%
    1. This is obviously just a quick gut-level guess; I generally think AI risk is pretty intractable and hard to tip the scales on even though it’s super important, but I guess warning shots may open the window for pretty drastic actions conditional on (1)-(3).
       

Multiplying these all together gives me 0.66%, which might sound low but seems pretty high in my book as far as making a difference on AI risk is concerned.
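The product of the four rough component probabilities above can be checked directly (a quick sketch; the variable names are mine and the numbers are the gut-level guesses from the list, not precise estimates):

```python
# Rough component probabilities from the forecast above.
p_warning_shot = 0.75   # 1. >=1 warning shot happens
p_plan = 0.50           # 2. community has a good reaction plan
p_influence = 0.35      # 3. enough influence to execute the plan
p_tips_scales = 0.05    # 4. plan tips the scales, conditional on default doom

p_counterfactual = p_warning_shot * p_plan * p_influence * p_tips_scales
print(f"{p_counterfactual:.2%}")  # prints "0.66%"
```

Note this treats the four components as a chain of conditional probabilities (each conditional on the previous ones holding), which is how the decomposition above is set up.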

Just made a bet with Jeremy Gillen that may be of interest to some LWers, would be curious for opinions:

Sure, I wasn't clear enough about this in the post (there was also some confusion on Twitter about whether I was only referring to Christiano and Garfinkel rather than any "followers").

I was thinking about roughly hundreds of people in each cluster, with the bar being something like "has made at least a few comments on LW or EAF related to alignment and/or works or is upskilling to work on alignment".

FYI: You can view community median forecasts for each question at this link. Currently it looks like:
