shminux


Are you saying that outside experts were better at understanding potential consequences in these cases? I have trouble believing it.

Other than the printing press, do you have other members of the reference class you are constructing, where outside holistic experts are better at predicting the consequences of a new invention than the inventors themselves?

the question of what is actually moral, beyond what you have been told is moral

That is what a moral realist would say.

Like most things, it is sometimes helpful, sometimes harmful, and sometimes completely benign, depending on the person, the type, the amount, and the day of the week. There is no "consensus" because the topic is so heterogeneous. What is your motivation for asking?

Note that if you have a little bit extra to spend, you can outsource some of the dimensions to experts. For example, those with a sense of style can offer you options you wouldn't have thought of yourself. The same applies to functionality and comfort (different experts, though).

Exterminating humans can be done without acting on humans directly. We are fragile meatbags, easily destroyed by an inhospitable environment. For example:

  • Raise CO2 levels to cause a runaway greenhouse effect (hard to do quickly though).
  • Use up enough oxygen in the atmosphere to make breathing impossible, through some runaway chemical or physical process.

There have been plenty of discussions on "igniting the atmosphere" as well.

I am not confidently claiming anything; I'm not really an expert... But yeah, I guess I like the way you phrased it: the more disparity there is in intelligence, the less extra noise matters. I don't have a good model of it, though. It just feels like more and more disparate dangerous paths appear as the disparity grows, overwhelming the noise.

If a plan is adversarial to humans, the plan's executor will face adverse optimization pressure from humans and adverse optimization pressure complicates error correction.

I can see that working when the entity is at the human level of intelligence or less. Maybe I misunderstand the setup, and this is indeed the case. I can't imagine that it would work on a superintelligence...

I thought it was a sort of mundane statement that morality is a set of evolved heuristics that make cooperation rather than defection possible, even when it is ostensibly against the person's interests in the moment. 

Basically, one resolution of Parfit's hitchhiker problem is introducing morality into the setup: it is immoral not to pick up a dying hitchhiker, and it is dishonorable to renege on the promise to pay. If you dig into the decision-theoretic logic, you can figure out that in a repeated Parfit's hitchhiker setup you are better off picking up/paying up; but humans are not great at that kind of reasoning, so evolutionarily we ended up with morality as a crutch.
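
To make the repeated-game logic concrete, here is a minimal sketch in Python; the payoffs, round count, and the simple reputation mechanism are all invented for illustration and are not part of the original problem statement.

```python
# Toy model of the repeated Parfit's hitchhiker. All payoffs, the round
# count, and the reputation mechanism are illustrative assumptions.

ROUNDS = 1000        # number of stranding incidents over a lifetime
PAYMENT = 100        # cost of paying the driver after a rescue
STRANDED = 10_000    # cost of being left in the desert once

def lifetime_cost(always_pays: bool) -> int:
    """Total cost of a strategy, with drivers refusing known renegers."""
    reputation_ruined = False
    total = 0
    for _ in range(ROUNDS):
        if not reputation_ruined:          # driver predicts payment, picks you up
            if always_pays:
                total += PAYMENT
            else:
                reputation_ruined = True   # reneging becomes common knowledge
        else:                              # no driver will stop for you
            total += STRANDED
    return total

print("always pays: ", lifetime_cost(True))   # 1000 * 100    = 100,000
print("reneges once:", lifetime_cost(False))  # 999 * 10,000  = 9,990,000
```

Under any payoffs where being stranded is much worse than paying, the one-time gain from reneging is swamped once drivers can condition on reputation.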

Brittleness: Since treacherous plans may require higher precision than benign plans, treacherous plans should be more vulnerable to noise.

I wonder where this statement is coming from. I'd assume the opposite: most paths lead to bad outcomes by default, so making a plan work as intended is what requires higher precision.
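
For what it's worth, the two claims can be separated: the mechanical claim, that a plan needing tighter tolerances fails more often under the same noise, is easy to grant; the disputed claim is which plans need the tighter tolerances. A minimal sketch of the mechanical part, with the noise level, tolerances, and trial count all invented for illustration:

```python
import random

# Toy model: a plan "succeeds" if noisy execution lands within a tolerance
# of its target. Tighter tolerance = higher required precision. The noise
# level, tolerances, and trial count are all illustrative assumptions.

NOISE_SIGMA = 1.0
TRIALS = 100_000

def success_rate(tolerance: float) -> float:
    """Fraction of noisy executions landing within `tolerance` of the target."""
    hits = sum(abs(random.gauss(0.0, NOISE_SIGMA)) <= tolerance
               for _ in range(TRIALS))
    return hits / TRIALS

for tol in (2.0, 1.0, 0.5, 0.1):   # tighter tolerance = higher required precision
    print(f"tolerance {tol:4.1f}: success rate ~{success_rate(tol):.3f}")
```

Under these assumptions the success rate falls monotonically as the required precision rises; the open question is whether treacherous plans are in fact the ones at the tight-tolerance end.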
