Wiki Contributions


CDT gives in to blackmail (such as the basilisk), whereas timeless decision theories do not.

My personal suspicion is that an AI being indifferent between a large class of outcomes matters little; it's still going to absolutely ensure that it hits the Pareto frontier of its competing preferences.

Have you read / are you interested in reading Project Lawful? It eventually explores this topic in some depth—though mostly after a million words of other stuff.

I think "existential risk" is a bad name for a category of things that isn't "risks of our existence ending."

I mostly think the phrase "psychologically addictive" is far less clear than it needs to be to communicate anything to me.

I think I would write the paragraph as something vaguely like:

"The physiological withdrawal symptoms of benzodiazepines can be avoided, but people often have a bad time coming off benzodiazepines because they start relying on them over other coping mechanisms. So doctors try to avoid prescribing them."

It seems possible to come up with something that is both succinct and actually communicates the gears.

The front page going down doesn't actually make people who want to check the latest posts unable to do so; it's so easy to circumvent that I think the front page going down is nearly costless.

That said, I do think the symbolic meaning is neat.

This is really just a "what is your utility function and what is your prior on the bonus" question, I guess? There is no clearly correct answer with just the information given.
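For what it's worth, here is a minimal sketch of that dependence, with entirely made-up numbers since the original question isn't quoted here: the same gamble on a bonus can be worth taking under one utility function and not under another, even holding the prior fixed.

```python
import math

# Assumed prior over the bonus (hypothetical): 0 with probability 0.5,
# 100 with probability 0.5. The certain alternative is also made up.
prior = [(0.5, 0.0), (0.5, 100.0)]
sure_thing = 40.0

def expected_utility(u, outcomes):
    # Sum of probability-weighted utilities over the prior's outcomes.
    return sum(p * u(x) for p, x in outcomes)

linear = lambda x: x               # risk-neutral utility
log_u = lambda x: math.log(1 + x)  # risk-averse (concave) utility

# A risk-neutral agent prefers the gamble: EV is 50 > 40.
print(expected_utility(linear, prior) > linear(sure_thing))  # True
# A log-utility agent prefers the sure thing: the same gamble loses.
print(expected_utility(log_u, prior) > log_u(sure_thing))    # False
```

So with no stated utility function and no stated prior, either choice can be correct, which is the point of the comment above.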

General relativity seems like slightly too strong a premise to me.

Can this be partially fixed by using uBlock Origin or whatever to hide certain elements of the page? I'd expect it to help at least imperfectly, not sure if you've tried it.

I don't think the point of the detailed stories is that they strongly expect that particular thing to happen? It's just useful to have a concrete possibility in mind.
