Noosphere89

This. I think a lot of the problems with emergency mobilization systems relate to that feeling of immediacy, even when the situation isn't actually immediate.

I think a lot of emergencies are way too long-term for us, so we end up applying emergency mobilization systems even when the short-term conditions they were built for aren't there.

I'm very conflicted about this post. On the one hand, many of its parts are things LWers need to hear, and I'm getting concerned about the doom loop that seems to be forming a cult-like mentality around AI.

On the other hand, it also has serious issues in its framing, and I'm worried that the post itself comes out of a mentality that isn't great either.

This story does hinge on "sweeping under the rug" being easier than actually solving alignment properly, but if deceptive alignment is real and even moderately hard to solve properly, then this seems very likely to be the case.

This, plus the failure mode talked about in

https://www.lesswrong.com/posts/xFotXGEotcKouifky/worlds-where-iterative-design-fails

With RLHF, this could plausibly allow companies to easily fake outer alignment.

Gain-of-lambda-function research: yes, this is among the worser things you could be researching, up there with the Codex code evolution & Adept Transformer agents. There are... uh, not many realistic, beneficial applications for this work. No one really needs a Diplomacy AI, and applications to things like ad auctions are tenuous. (Note the amusing wriggling of FB PR when they talk about "a strategy game which requires building trust, negotiating and cooperating with multiple players" - you left out some relevant verbs there...) And as we've seen with biological research, no matter how many times bugs escape research laboratories and literally kill people, the déformation professionnelle will cover it up and justify it. Researchers who scoff at the idea that a website should be able to set a cookie without a bunch of laws regulating it suddenly turn into don't-tread-on-me anarchocapitalists as soon as it comes to any suggestion that their research maybe shouldn't be done.

But this is far from the most blatantly harmful research (hey, they haven't killed anyone yet, so the bar is high), so we shouldn't be too hard on them or personalize it. Let's just treat this as a good example for those who think that researchers collectively have any fire alarm and will self-regulate. (No, they won't, and someone out there is already calculating how many megawatts the Torment Nexus will generate and going "Sweet!" and putting in a proposal for a prototype 'Suffering Swirlie' research programme culminating in creating a Torment Nexus by 2029.)

The really dangerous part of the research is that Diplomacy is the type of game that incentivizes deceptive alignment by default, which is extremely dangerous. Deceptive alignment is among the worst failure modes in AI alignment, and this is one of the top 5 riskiest directions to pursue.

I don't understand; this seems clearly the case to me. Higher IQ seems to result in substantially higher performance in approximately all domains of life, and I strongly expect the population of successful CEOs to have IQs many standard deviations above average.

This can't actually happen, but only because the normal distribution of human intelligence places a hard cap on how much variance exists among humans.

I agree there isn't a phase transition in the technical sense, but the relevant transition is the breaking of the IID assumption: as long as the data distribution stays the same, you can essentially interpolate arbitrarily well, and that stops working once the distribution shifts.
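A toy sketch of what I mean (my own illustration with a made-up regression setup, not anything from the post): a model fit on IID samples can interpolate very well inside its training range, but its error blows up once the test points fall outside that range.

```python
# Toy illustration: interpolation works in-distribution, breaks off-distribution.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)                 # IID training data on [0, 1]
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 200)

coeffs = P.polyfit(x_train, y_train, deg=9)          # flexible polynomial fit

def true_fn(x):
    return np.sin(2 * np.pi * x)

x_in = np.linspace(0.1, 0.9, 5)                      # inside the training range
x_out = np.linspace(1.5, 3.0, 5)                     # outside the training range

for name, xs in [("in-distribution", x_in), ("off-distribution", x_out)]:
    err = np.abs(P.polyval(xs, coeffs) - true_fn(xs)).mean()
    print(f"{name:16s} mean abs error: {err:.3f}")
```

The in-distribution error stays tiny while the off-distribution error explodes, which is the sense in which "interpolation" stops being a useful guarantee once the IID assumption breaks.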

Uh, there is? IQ matters for a lot of complicated jobs, so much so that I tend to assume that whenever something complicated is at play, there will be a selection effect towards greater intelligence. The results are obviously limited, but they matter in real life.

Here's a link to why I think IQ is important:

https://www.gwern.net/docs/iq/ses/index

My view is that the reason individual humans don't dominate is that an IID distribution, namely the normal distribution, holds really well for human intelligence.

68% of the population falls within a 0.85x-1.15x smartness level, 95% within 0.70x-1.30x, and 99.7% within 0.55x-1.45x.

Even 2x is off the scale of that normal distribution, and an order of magnitude more compute is so far beyond it that the IID assumption breaks hard.
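Here's a minimal sketch of that arithmetic (assuming my own toy parameterization: "smartness" normally distributed with mean 1.0 and SD 0.15, chosen so that 1, 2, and 3 standard deviations give the 0.85x-1.15x, 0.70x-1.30x, and 0.55x-1.45x bands above):

```python
# Toy model (my parameterization): smartness ~ Normal(mean=1.0, sd=0.15).
from scipy.stats import norm

MEAN, SD = 1.0, 0.15
toy = norm(loc=MEAN, scale=SD)

# Reproduce the 68/95/99.7 bands quoted above.
for k in (1, 2, 3):
    lo, hi = MEAN - k * SD, MEAN + k * SD
    share = toy.cdf(hi) - toy.cdf(lo)
    print(f"within {lo:.2f}x-{hi:.2f}x: {share:.1%} of the population")

# How far out in the tail would a hypothetical "2x smart" human sit?
z = (2.0 - MEAN) / SD
print(f"2x is {z:.1f} standard deviations above the mean; "
      f"upper-tail probability ~{toy.sf(2.0):.1e}")
```

Under that toy model, a 2x human would sit roughly 6.7 standard deviations out, with an upper-tail probability on the order of 10^-11, which is what I mean by "off the scale."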

And even with roughly 3x differences, like the gap between humans and the rest of the animals, things are already really bad in our own world. Extrapolate that to 10x or 100x and you have something humanity is way off-distribution for.

Seconding. I'd really like a clear explanation of why he tends to view nanotech as such a game changer. Admittedly, Drexler is far on the side of nanotechnology being possible, and wrote a series of books about it: Engines of Creation, Nanosystems, and Radical Abundance.