There is an informal reasoning fallacy where you promote something that we should "surely do", something that seems obviously important, but where you haven't thought through all of the steps involved, and so you aren't actually justified in assuming that the impact will flow through.

Here are some examples:

  • The Dean of a computer science department thinks that "surely, it's important to produce not only technically proficient graduates, but graduates who use their skills for good". So they mandate that all students take a "technology and ethics" class. The only problem is that none of their professors is an expert in this area, or even interested in it, so the class ends up being poorly taught and run, and students put in the absolute minimum effort and forget everything a week after the exam.
  • The prime minister of a country wants to reduce crime. He notices that the police department is severely underfunded, so he significantly increases its funding. Unfortunately, the department is so corrupt and nepotistic that it is unable to spend the funds effectively.
  • A government wants to increase recycling, so they create a "national recycling day", reasoning that "surely this will increase recycling". Unfortunately, most people end up ignoring it, and of those who actually are enthusiastic, most make an extra effort to recycle for a few days or even a week, then basically forget about it for the rest of the year. So it ends up having some effect, but a basically negligible one.

In each of these cases, the decision maker may not have chosen the same option if they'd taken the time to think it through and ask themselves if the benefits were likely to actually accrue.

Domain experience can help you know that these kinds of issues are likely to crop up, but so can the habit of Murphyjitsu'ing your plans.

To be clear, I'm not intending to refer to the following ways in which things could go wrong:

  • Side-effects: The government introduces snakes to reduce the rodent population, but this has the side-effect that more people suffer snake bites.
  • Reactions: Amy donates $10 million to the Democrats. Bob hears about this and decides to donate $20 million to the Republicans in response.
  • Value confusion: James spends years as a kid acquiring an amazing Pokemon card collection, but regrets all the time and money he spent on it once he becomes an adult.

(Please let me know if this fallacy already has a name or if you think you've thought of a better one.)

There's the old syllogism,

  • Something must be done
  • This is something
  • Therefore: this must be done

Not sure if there's a snappy name for it.

"Politician's logic"


Snappy British sitcom clip:

Related effects are referred to under the headings of lost purposes and principal-agent problems.

I think lots of people would say that all three examples you gave are more about signalling than about genuinely attempting to accomplish a goal.

I wouldn’t say that. Signalling, the way you seem to have used it, implies deception on their part, but each of these instances could just be a skill issue on their end: an inability to construct the right causal graph with sufficient resolution.

For what it’s worth, whatever this pattern is pointing at also applies to how wrong most of us were about the AI box problem, i.e., that some humans by default would just let the damn thing out without needing to be persuaded.

How would one even distinguish between those who don't actually care about solving the problem and only want to signal that they care, and those who care but are too stupid to realize that intent is not magic? I believe that both do exist in the real world.

I would probably start by charitably assuming stupidity, and try to explain. If the explanations keep failing mysteriously, I would gradually update towards them not actually wanting to achieve the declared goal.