Sequences

Re-reading Rationality From AI To Zombies
Reflections on Premium Poker Tools

Comments

I see them as distinct because what I'm saying is that lying tends to lead to bad outcomes (for both the liar and society at large), whereas mistrust specifically is just one component of those bad outcomes.

Other components that come to my mind:

  • People don't end up with accurate information.
  • Expectations that people will cooperate (different from "tell you the truth") go down.
  • Expectations that people will do things because they are virtuous go down.

But a big thing here is that it's difficult to know exactly why it leads to bad outcomes. The gears are hard to model. However, I think there's solid evidence that it does.

Very cool! I'm curious: what are some concrete use cases here?

Rationalist(-adjacent) stuff is somewhat obvious, but I wonder how far it extends. Like, do you envision a local Python meetup maybe wanting to use the space? Random individual rationalists wanting to sleep over for a few days for fun? Google paying to have their Search team go on a retreat and mingle with smart people?

I did a deep dive a while ago, if that's helpful to you.

Something frustrating happened to me a week or two ago.

  • I was at the vet for my dog.
  • The vet assistant (I'm not sure if that's the proper term) asked if I wanted to put my dog on these two pills, one to protect against heartworm and another to protect against fleas.
  • I asked what heartworm is, what fleas are, and what the pros and cons are. (It became clear later in the conversation that she was expecting a yes or no answer from me and perhaps had never been asked before about pros and cons, because she seemed surprised when I asked for them.)
  • IIRC, she said something about there not really being any cons (I'm suspicious). For heartworm, dogs can die of it, so the pros are strong. For fleas, it's just an annoyance to deal with, not really dangerous.
  • I asked how likely it is for my dog to be exposed to fleas given that we're in a city and not, e.g., a forest.
  • The assistant responded with something along the lines of "Ok, so we'll just do the heartworm pill then."
  • I clarified something along the lines of "No, that wasn't a rhetorical question. I was actually interested in hearing about the likelihood. I have no clue what it is; I didn't mean to imply that it is low."

I wish that we had a culture of words being used more literally.

and the sentence “broccoli is good for you” is making a bid to circumvent that machinery entirely and just write a result into my value-cache.

I think this has some truth to it, but that it's missing important nuance.

When I imagine, e.g., my mom telling me that broccoli is good for you, I imagine her having read it on some unreliable magazine's cover. Or maybe she heard it from some unreliable friend of hers.

But when I imagine a smart friend of mine telling me that broccoli is good for you, I start making some educated guesses about the gears. Maybe it is because broccoli has a lot of fiber. Or because of some micronutrients.

In the latter scenario, I think a relevant follow-up question is about the extent to which it bypasses the gears-level machinery. And I think the answer is an unfortunate "it depends". In the broccoli example, I have enough knowledge about the domain that I can make some pretty good educated guesses, so it actually doesn't bypass the gears too much. Maybe we can say it bypasses them a "moderate amount". In other contexts, though, where I don't have much domain knowledge, I think it'd frequently bypass the gears "a lot".

(All of that said, I agree with the broad gist of this post. In particular, with things like "value judgements usually pull on the wrong levers.")

In the followup, I admit you don't have to choose as long as you don't give up on untangling the question.

Ah, I kinda overlooked this. My bad.

In general, my position is now that:

  • I'm a little confused.
  • I think what you wrote is probably fine.
  • I think you probably could have been clearer in what you initially wrote.
  • I think it's totally fine to not be perfect in what you originally wrote.
  • I feel pretty charitable. I'm sure that what you truly meant is something pretty reasonable.
  • I think downvoters were probably triggered and were being uncharitable.
  • I'm not interested in spending much more time on this.

Is this just acknowledging some sort of monkey brain thing, or endorsing it as well? (If part of it is acknowledging it, then kudos. I appreciate the honesty and bravery. I also think the data point is relevant to what is discussed in the post.)

I ask because it strikes me as a Reversed Stupidity Is Not Intelligence sort of thing. If Hitler thinks the sky is green, well, he's wrong, but it isn't really relevant to the question of what color the sky actually is.

Yeah, it still doesn't seem true even given the followup clarification.

Well, it depends on what you actually mean. In the original excerpt, you're saying that the question is whether you want to be in epistemic environment A or epistemic environment B. But in your followup clarification, you talk about the need to decide on something. I agree that you do need to decide on something (~carnist or vegan). I don't think that means you necessarily have to be in one of those two epistemic environments you mention. But I also charitably suspect that you don't actually think you have to be in one of those two specific epistemic environments, and that you just misspoke.

I don't agree with the downvoting. The first paragraph sounds to me like not only a fair point, but a good one. The first sentence in the second paragraph doesn't really seem true to me, though.
