I agree in principle, though someone has to actually create a community of people who track the truth in order for this to be effective and not get outcompeted by other communities. Working individually, people don't have the resources to untangle the deception in society, because of its sheer scale.
But the fact that the wider world is so confused that there's no point in pushing for truth is exactly the point. EA needs to stay better than that, and part of that is de-escalating the arms race when you're inside its boundaries.
Agree with this. I mean, I'm definitely not pushing back against your claims; I'm just pointing out that the problem seems bigger than is commonly understood.
Could you expand on why you think that it makes a significant difference?
It is true that the original theorem relies on common knowledge. In my original post, I phrased it as "a family of theorems" because one can prove various theorems with different assumptions yet similar conclusions. This is a general feature of math: one shouldn't get too distracted by the boilerplate, because the core principle is often more general than any one proof. So, for example, the principle you mention, "If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have", is one I'd suggest belongs to the same family as Aumann's agreement theorem.
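To make the "nearly strictly better evidence" case concrete, here is a sketch of why it belongs in the family (my formalization, not something from the original theorem). Assume their information set \(\mathcal{G}\) refines mine, \(\mathcal{F} \subseteq \mathcal{G}\):

```latex
% Sketch under the assumption that their evidence strictly dominates mine
% (their sigma-algebra G contains my sigma-algebra F).
% Let X be the indicator variable of the event in question.
\[
  \mathbb{E}[X \mid \mathcal{F}]
  = \mathbb{E}\!\big[\, \mathbb{E}[X \mid \mathcal{G}] \,\big|\, \mathcal{F} \,\big]
  \qquad \text{(tower property, since } \mathcal{F} \subseteq \mathcal{G} \text{)}
\]
% So once they announce their posterior q = E[X | G], conditioning on both
% my own evidence and their announcement gives
\[
  \mathbb{E}[X \mid \mathcal{F},\, q] = q,
\]
% i.e. I simply adopt their stated probability, with no common-knowledge
% machinery needed.
```

The derivation uses only the tower property of conditional expectation, which is why the conclusion survives with much weaker assumptions than common knowledge.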
The reason for my post is that a lot of people find Aumann's agreement theorem counterintuitive and feel like its conclusion doesn't apply to typical real-life disagreements, and therefore assume there must be some hidden condition that makes it inapplicable in practice. What I think I showed is that Aumann's agreement theorem defines "disagreement" extremely broadly, and that once you think about disagreement that broadly, the theorem does indeed appear to apply in real life, even under far weaker conditions than the original proof requires.
I think this is useful partly because it suggests a better frame for reasoning about disagreement. For instance, I provide lots of examples of disagreements that rapidly dissipate, so if you wish to know why some disagreements persist, it can help to ask how the persistent ones differ from the examples I list. For example, many persistent disagreements are about politics, where there are strong incentives for bias, so maybe some people who make political claims are dishonest. That would suggest conflict theory (the idea that political disagreement is due to differences in interests) is more accurate than mistake theory (the idea that political disagreement is due to reasoning mistakes), since mistake theory does not seem to predict that disagreement would be specific to politics, though people might assume it is plausible if they haven't thought about the general tendency toward agreement.
More generally I have a whole framework of disagreement and beliefs that I intend to write about.
In the followup, I admit you don't have to choose as long as you don't give up on untangling the question. So I'm implying that there are multiple options, such as:
Though I suppose you are right that there are also lots of other nuanced options I haven't acknowledged, such as "decide you are uncertain between the sides, and e.g. use utility weights to manage risk while exploiting opportunities", which isn't really the same as "try to figure it out". I'm not sure if that's what you mean; another option would be that I have a broader view of what "try to figure it out" means than you do, or similar (though what really matters for the literal truth of my comment is what NinetyThree's view is). Or maybe you mean that there are additional sides that could be adopted? (I meant to hint at that possibility with phrasings like "the most common side", though I suppose that could also be read as just acknowledging the vegan side.) Or maybe it's just "all of the above"?
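As a toy illustration of the "utility weights" option (the weights and utilities here are made up for illustration, not taken from anyone's actual views):

```latex
% Hypothetical: credence 0.3 in the pro-animal view, 0.7 in the pro-human view.
% Pick the action a that maximizes the credence-weighted utility:
\[
  a^{*} = \arg\max_{a} \big(\, 0.3 \cdot U_{\text{animal}}(a) + 0.7 \cdot U_{\text{human}}(a) \,\big)
\]
% A hedged action (e.g. "mostly plant-based, carefully supplemented") can score
% decently under both views even if it maximizes neither, which is how the
% weights manage risk without settling the underlying question.
```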
I do genuinely think there is value in thinking of it as a 2D space of tradeoffs: cheap epistemics <-> strong epistemics, and pro-animal <-> pro-human (realistically one could also include the environment, and on the cheap-epistemics side the axis is probably anti-human <-> anti-animal). I agree that my original comment lacked nuance with respect to the ways one could exist within that tradeoff, though I am unsure to what extent your objection is about the tradeoff framing itself vs. the nuance in the ways one can exist within it.
Ok I'm getting downvoted to oblivion because of this, so let me clarify:
So I guess the question is whether you prefer being in an epistemic environment that has declared war on humans or an epistemic environment that has declared war on farm animals.
If, like NinetyThree, you decide to give up on untangling the question for yourself because of all the lying ("I would describe my epistemic status as 'not really open to persuasion'"), then you still have to make decisions, which in practice means following some side in the conflict, and the most common side is the carnist side, which has the problems I mention.
I don't want to be in a situation where I have to give up on untangling the question (see my top-level comment proposing a research community), but if I'm being honest I can't exactly say that it's invalid for NinetyThree to do so.
I would really like to have a community of people who take truth-seeking seriously. While I can do some research myself, the world is too big for me to research most things. Furthermore, the value of the research I do could be much greater if others could benefit from it, but that would require a community that upholds proper epistemic standards towards me and communicates the value of information well. I assume other people face the same problems: not having the resources to research everything, and finding it inefficient to research the things they do.
I think this can be fixed by getting together a couple of honest people representing different interests for each topic, having them perform research that answers the most commonly relevant questions on the topic, and writing up the answers in a convenient format.
(At least up to a point? People are, probably rightfully, skeptical that this approach can be used to research who is or isn't an abuser. But for "scientific" questions like veganism, which concern subjects present in many places across the world, like human nutritional needs or means of food production, and which are therefore feasible to collect direct information on without too much interference, it seems like it should work.)
The rationalist community seems too loosely organized to handle this automatically. The EA community seems too biased and maybe also too loose to handle it. So I would like to create a community within rationalism to address it. For now, here is a Discord link for it: https://discord.gg/sTqMq8ey
Note that I don't mean this to bash vegans. While the vegan community is often dishonest, I have the impression that the carnist community is also often dishonest. I think that people on all sides are too focused on creating counternarratives to places where they are being attacked, instead of creating actionable answers to important questions, and I would like a community that just focuses on 1) figuring out what questions people have, and 2) answering them as accurately as possible, in easy-to-understand formats, and communicating the ranges of uncertainty and the raw evidence used to answer them.
I tend to think of ideology as a continuum, rather than a strict binary. Like people tend to have varying degrees of belief and trust in the sides of a conflict, and various unique factors influencing their views, and this leads to a lot of shades of nuance that can't really be captured with a binary carnist/not-carnist definition.
But I think there are still some correlated beliefs where you could e.g. take their first principal component as an operationalization of carnism. Some beliefs that might go into this, many of which I have encountered from carnists:
One could make a defense of some of these statements. For instance, Elizabeth has made a defense of the last statement that I find convincing. I don't think this is a bug in the definition of carnism; it just shows that some carnist beliefs can be good and true. One ought to be able to admit that ideology is real and matters while also recognizing that it's not a black-and-white issue.
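As a concrete sketch of the first-principal-component operationalization mentioned above (the survey data and belief items here are entirely hypothetical; in practice you'd use real responses):

```python
# Hypothetical sketch: operationalizing "carnism" as the first principal
# component of correlated survey beliefs. All data below is made up.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: respondents. Columns: agreement (1-5) with carnism-associated
# belief statements (hypothetical items, e.g. "meat is necessary for
# health", "farm animal welfare concerns are overblown", ...).
responses = np.array([
    [5, 4, 5, 4],
    [4, 5, 4, 5],
    [2, 1, 2, 1],
    [1, 2, 1, 2],
    [3, 3, 3, 3],
])

# Standardize items so the component isn't dominated by scale differences.
scaled = StandardScaler().fit_transform(responses)

# The first principal component captures the main axis of covariation;
# each respondent's score on it is the proposed "carnism" measure.
# Note: PCA's sign is arbitrary, so orient it so that higher = more carnist.
pca = PCA(n_components=1)
carnism_score = pca.fit_transform(scaled).ravel()

print("item loadings:", pca.components_[0])   # how each belief loads on the axis
print("respondent scores:", carnism_score)    # continuous scores, not a binary
```

Note that the scores this produces are continuous, which matches the continuum-not-binary framing above: respondents land at varying points along the axis rather than in two discrete camps.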