Very cool! I'm curious: what are some concrete use cases here?
Rationalist(-adjacent) stuff is somewhat obvious, but I wonder how far it extends. Like, do you envision a local Python meetup maybe wanting to use the space? Random individual rationalists wanting to sleep over for a few days for fun? Google paying to have their Search team go on a retreat and mingle with smart people?
Something frustrating happened to me a week or two ago.
I wish we had a culture of using words more literally.
and the sentence “broccoli is good for you” is making a bid to circumvent that machinery entirely and just write a result into my value-cache.
I think this has some truth to it, but that it misses an important nuance.
When I imagine, e.g., my mom telling me that broccoli is good for you, I imagine her having read it on the cover of some unreliable magazine. Or maybe she heard it from some unreliable friend of hers.
But when I imagine a smart friend of mine telling me that broccoli is good for you, I start making some educated guesses about the gears. Maybe it is because broccoli has a lot of fiber. Or because of some micronutrients.
In the latter scenario, I think a relevant follow-up question is about the extent to which it bypasses the gear-level machinery. And I think the answer is an unfortunate "it depends". In the broccoli example, I have enough knowledge about the domain that I can make some pretty good educated guesses, so it actually doesn't bypass the gears too much. Maybe we can say it bypasses them a "moderate amount". In other contexts, though, where I don't have much domain knowledge, I think it'd frequently bypass the gears "a lot".
(All of that said, I agree with the broad gist of this post. In particular, with things like "value judgements usually pull on the wrong levers.")
In the followup, I admit you don't have to choose as long as you don't give up on untangling the question.
Ah, I kinda overlooked this. My bad.
In general my position is now that:
Is this just acknowledging some sort of monkey brain thing, or endorsing it as well? (If part of it is acknowledging it, then kudos. I appreciate the honesty and bravery. I also think the data point is relevant to what is discussed in the post.)
I ask because it strikes me as a Reversed Stupidity Is Not Intelligence sort of thing. If Hitler thinks the sky is green, well, he's wrong, but it isn't really relevant to the question of what color the sky actually is.
Yeah, it still doesn't seem true even given the followup clarification.
Well, it depends on what you actually mean. In the original excerpt, you're saying that the question is whether you want to be in epistemic environment A or epistemic environment B. But in your followup clarification, you talk about the need to decide on something. I agree that you do need to decide on something (~carnist or vegan). I don't think that means you necessarily have to be in one of the two epistemic environments you mention. But I also charitably suspect that you don't actually think you have to be in one of those two specific epistemic environments, and that you just misspoke.
I don't agree with the downvoting. The first paragraph sounds to me like not only a fair point, but a good one. The first sentence in the second paragraph doesn't really seem true to me, though.
I see them as distinct because what I'm saying is that lying generally tends to lead to bad outcomes (for both the liar and society at large), whereas mistrust specifically is just one component of those bad outcomes.
Other components that come to my mind:
But a big thing here is that it's difficult to know why exactly it will lead to bad outcomes. The gears are hard to model. However, I think there's solid evidence that it leads to bad outcomes.