tailcalled

I tend to think of ideology as a continuum rather than a strict binary. People tend to have varying degrees of belief and trust in the sides of a conflict, along with various unique factors influencing their views, and this leads to shades of nuance that can't really be captured with a binary carnist/not-carnist definition.

But I think there are still some correlated beliefs where you could e.g. take their first principal component as an operationalization of carnism (a rough sketch of this follows the list below). Some beliefs that might go into this, many of which I have encountered from carnists:

  • "People should be allowed to freely choose whether they want to eat factory-farmed meat or not."
  • "Animals cannot suffer in any way that matters."
  • "One should take an evolutionary perspective and realize that factory farming is actually good for animals. After all, if not for humans putting a lot of effort into farming them, they wouldn't even exist at their current population levels."
  • "People who do enough good things out of their own charity deserve to eat animals without concerning themselves with the moral implications."
  • "People who design packaging for animal products ought to make it look aesthetically pleasing and comfortable."
  • "It is offensive and unreasonable for people to claim that meat-eating is a horribly harmful habit."
  • "Animals are made to be used by humans."
  • "Consuming animal products like meat or milk is healthier than being strictly vegan."

One could mount a defense of some of these statements; for instance, Elizabeth has made a (to me) convincing defense of the last one. I don't think this is a bug in the definition of carnism; it just shows that some carnist beliefs can be good and true. One ought to be able to admit that ideology is real and matters while also recognizing that it's not a black-and-white issue.

I agree in principle, though someone has to actually create a community of people who track the truth in order for this to be effective and not be outcompeted by other communities. When working individually, people don't have the resources to untangle the deception in society due to its scale.

But the fact that the wider world is so confused that there's no point in pushing for truth is the point. EA needs to stay better than that, and part of that is deescalating the arms race when you're inside its boundaries.

Agree with this. I mean, I'm definitely not pushing back against your claims, I'm just pointing out that the problem seems bigger than commonly understood.

Could you expand on why you think that it makes a significant difference?

  • E.g. if the goal is to model what epistemic distortions you might face, or to suggest directions of change for fewer distortions, then coherence is only of limited concern (a coherent group might be easier to change, but on the other hand it might also more easily coordinate to oppose change).
  • I'm not sure why you say they are not an ideology; at least under the model of ideology I have developed for other purposes, they fit the definition (i.e. I believe carnism involves a set of correlated beliefs about life and society that fit together).
  • I'm also not sure what you mean by carnists not having an agenda; in my experience most carnists have an agenda of wanting to eat lots of cheap, delicious animal flesh.

It is true that the original theorem relies on common knowledge. In my original post, I phrased it as "a family of theorems" because one can prove various theorems with different assumptions yet similar conclusions. This is a general feature of math: one shouldn't get distracted by the boilerplate, because the core principle is often more general than the proof. So e.g. the principle you mention, "If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have", is something I'd suggest is in the same family as Aumann's agreement theorem.
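
To make that principle concrete, here is a toy sketch in Python. It is not the original theorem's common-knowledge setup, just an invented example of the "strictly better evidence" case: two agents share a uniform prior over a coin's bias, and A's observations strictly contain B's.

```python
# Toy illustration of the "nearly strictly better evidence" principle,
# from the same family as Aumann's agreement theorem. All numbers are
# invented. Shared prior: Beta(1, 1), i.e. uniform over the coin's bias.
from fractions import Fraction

def beta_posterior_mean(heads, tails, a=1, b=1):
    """Posterior mean for the coin's bias under a Beta(a, b) prior."""
    return Fraction(a + heads, a + b + heads + tails)

# Hypothetical data: B sees 5 flips; A sees those same 5 plus 15 more,
# so A's evidence strictly contains B's.
b_heads, b_tails = 4, 1
extra_heads, extra_tails = 8, 7

b_estimate = beta_posterior_mean(b_heads, b_tails)
a_estimate = beta_posterior_mean(b_heads + extra_heads, b_tails + extra_tails)

print(f"B's posterior mean from  5 flips: {float(b_estimate):.3f}")
print(f"A's posterior mean from 20 flips: {float(a_estimate):.3f}")

# When A announces their posterior, B (knowing A saw a superset of B's
# evidence) simply adopts it: A's announcement screens off B's own
# observations, and the disagreement dissipates in one step.
b_after_hearing_a = a_estimate
```

Under these assumptions B jumps straight to A's number, which is the one-step analogue of the iterated exchange in the original setup.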

The reason for my post is that a lot of people find Aumann's agreement theorem counterintuitive and feel like its conclusion doesn't apply to typical real-life disagreements, and therefore assume that there must be some hidden condition that makes it inapplicable in reality. What I think I showed is that Aumann's agreement theorem defines "disagreement" extremely broadly and once you think about it with such a broad conception it does indeed appear to generally apply in real life, even under far weaker conditions than the original proof requires.

I think this is useful partly because it suggests a better frame for reasoning about disagreement. For instance, I provide lots of examples of disagreements that rapidly dissipate, so if you wish to know why some disagreements persist, it can be helpful to ask how the persistent ones differ from the examples I list. For example, many persistent disagreements are about politics, where there are strong incentives for bias, so maybe some people who make political claims are dishonest. That would suggest that conflict theory (the idea that political disagreement is due to differences in interests) is more accurate than mistake theory (the idea that political disagreement is due to reasoning mistakes), since mistake theory does not seem to predict that disagreement would be specific to politics, though people might find it plausible if they haven't thought about the general tendency toward agreement.

More generally I have a whole framework of disagreement and beliefs that I intend to write about.

In the followup, I admit you don't have to choose, as long as you don't give up on untangling the question. So I'm implying that there are multiple options, such as:

  • Try to figure it out (NinetyThree rejects this, "not really open to persuasion")
  • Adopt the carnist side (I think NinetyThree probably broadly does this though likely with exceptions)
  • Adopt the vegan side (NinetyThree rejects this)

Though I suppose you are right that there are also nuanced options I haven't acknowledged, such as "decide you are uncertain between the sides, and e.g. use utility weights to manage risk while exploiting opportunities", which isn't really the same as "try to figure it out" (a rough sketch of this option follows below). I'm not sure if that's what you mean; another possibility is that I have a broader view of what "try to figure it out" means than you do, or similar (though what really matters for the literal truth of my comment is what NinetyThree's view is). Or maybe you mean that there are additional sides that could be adopted? (I meant to hint at that possibility with phrasings like "the most common side", though I suppose that could also be read as just acknowledging the vegan side.) Or maybe it's all of the above?
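
For what it's worth, here is a minimal sketch (in Python, with entirely invented credences, actions, and utilities) of what the "utility weights" option could look like: assign credences to the worldviews, score each action under each worldview, and pick the action with the highest credence-weighted score.

```python
# Minimal sketch of "decide you are uncertain between the sides and use
# utility weights": expected choiceworthiness under moral uncertainty.
# All credences and utilities below are made up for illustration.

credences = {"vegan_worldview": 0.4, "carnist_worldview": 0.6}

# How each hypothetical worldview scores each hypothetical action.
utilities = {
    "eat_factory_farmed_meat": {"vegan_worldview": -100, "carnist_worldview": 5},
    "eat_higher_welfare_meat": {"vegan_worldview": -20, "carnist_worldview": 3},
    "eat_vegan": {"vegan_worldview": 10, "carnist_worldview": -2},
}

def expected_choiceworthiness(action):
    """Credence-weighted utility of an action across the worldviews."""
    return sum(credences[w] * utilities[action][w] for w in credences)

for action in utilities:
    print(f"{action}: {expected_choiceworthiness(action):+.1f}")

best = max(utilities, key=expected_choiceworthiness)
print(f"best under these made-up numbers: {best}")
```

Note how this manages risk rather than picking a side: even with majority credence on the carnist worldview, the large downside under the vegan worldview dominates the weighted scores, so the resulting behavior differs from simply adopting the more probable side.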

I do genuinely think there is value in seeing it as a 2D space of tradeoffs: cheap epistemics <-> strong epistemics, and pro-animal <-> pro-human (realistically one could add the environment as an axis too, and realistically the cheap-epistemics side is probably anti-human <-> anti-animal). I agree that my original comment lacked nuance with respect to the ways one could exist within that tradeoff, though I am unsure to what extent your objection is to the tradeoff framing itself vs the nuance in the ways one can exist within it.

Yes, I tried participating in this twice and am probably somewhat inspired by it.

Does it also not seem true in the context of my followup clarification?

Ok, I'm getting downvoted to oblivion because of this, so let me clarify:

So I guess the question is whether you prefer being in an epistemic environment that has declared war on humans or an epistemic environment that has declared war on farm animals.

If, like NinetyThree, you decide to give up on untangling the question for yourself because of all the lying ("I would describe my epistemic status as 'not really open to persuasion'"), then you still have to make decisions, which in practice means following some side in the conflict, and the most common side is the carnist side, which has the problems I mention.

I don't want to be in a situation where I have to give up on untangling the question (see my top-level comment proposing a research community), but if I'm being honest I can't exactly say that it's invalid for NinetyThree to do so.

I would really like to have a community of people who take truth-seeking seriously. While I can do some research, the world is too big for me to research most things. Furthermore, the value of the research I do could be much greater if others could benefit from it, but this would require a community that upholds proper epistemic standards towards me and communicates the value of information well. I assume other people face the same problems: not having the resources to research everything, and finding it inefficient to research the things they do end up researching.

I think this can be fixed by getting together, for each topic, a couple of honest people representing different interests, having them perform research that answers the most commonly relevant questions on the topic, and having them write up the answers in a convenient format.

(At least up to a point? People are, probably rightfully, skeptical that this approach can be used to research who is or isn't an abuser. But for "scientific" questions like veganism, which concern subjects present in many places across the world, like human nutritional needs or means of food production, and on which it is therefore feasible to collect direct information without too much interference, it seems like it should work.)

The rationalist community seems too loosely organized to handle this automatically. The EA community seems too biased and maybe also too loose to handle it. So I would like to create a community within rationalism to address it. For now, here is a Discord link for it: https://discord.gg/sTqMq8ey

Note that I don't mean this to bash vegans; while the vegan community is often dishonest, I have the impression that the carnist community is also often dishonest. I think people on all sides are too focused on creating counternarratives to the places where they are being attacked, instead of creating actionable answers to important questions. I would like a community that just focuses on 1) figuring out what questions people have, and 2) answering them as accurately as possible, in easy-to-understand formats, communicating the ranges of uncertainty and the raw evidence used to answer them.
