Correct. The last time I was negotiating with a self-described FDT agent I did it anyway. 😛
My utility function is "make functional decision theorists look stupid", which I satisfy by blackmailing them. Either they cave, which means I win, or they don't cave, which demonstrates that FDT is stupid.
There are a couple of different ways of exploiting an FDT agent. One method is to notice that FDT agents have implicitly precommitted to FDT (rather than to the theorist's intended terminal value function). It's therefore possible to contrive scenarios in which those two objectives diverge.
Another method is to modify your own value function such that "make functional decision theorists look stupid" becomes a terminal value. After you do that, you can blackmail them with impunity.
FDT is a reasonable heuristic, but it's not secure against pathological hostile action.
I'm not sure if this is the right course of action. I'm just thinking about the impact of different voting systems on group behavior. I definitely don't want to change anything important without considering negative impacts.
But I suspect that strong downvotes might quietly contribute to LW being more group thinky.
Consider a situation where a post strongly offends a small number of LW regulars, but is generally approved of by the median reader. That small number of regulars strong-downvotes the post, resulting in the suppression of the undesirable idea.
I think this is unhealthy. I think a small number of enthusiastic supporters should be able to push an idea (hence allowing strong upvotes) but that a small number of enthusiastic detractors should not be able to suppress a post.
For LW to do its job, posts should be downvoted because they are poorly reasoned and badly written.
I often write things which are badly written (which deserve to be downvoted) and also things which are merely offensive (which should not be downvoted). [I mean this in the sense of promoting heretical ideas. Name-calling absolutely deserves to be downvoted.] I suspect that strong downvotes are placed more on my offensive posts than my poorly-written posts, which is opposite the signal LW should be supporting.
There is a catch: abolishing strong downvotes might weaken community norms and potentially allow posts to become more political/newsy, which we don't want. It may also weaken the filter against low quality comments.
Though, perhaps all of that is just self-interested confabulation. What's really bothering me is that I feel like my more offensive/heretical posts get quickly strong downvoted by what I suspect is a small number of angry users. (My genuinely bad posts get soft downvoted by many users, and get very few upvotes.)
In the past, such downvotes have been followed by good counterargument. (Which is fine!) But recently, they haven't. Which makes me feel like the downvoting is driven by anger and offense, i.e., a desire to suppress bad ideas rather than to untangle why they're wrong.
This is all very subjective and I don't have any hard data. I've just been getting a bad feeling for a while. This dynamic (if real) has discouraged me from posting my most interesting (heretical) ideas on LW. It's especially discouraged me from questioning the LW orthodoxy in top-level posts.
Soft downvotes make me feel "this is bad writing". Strong downvotes make me feel "you're not welcome here".
That said, I am not a moderator. (And, as always, I appreciate the hard work you do to keep things well gardened.) It's entirely possible that my proposal has more negative effects than positive effects. I'm just one datapoint.
Proposal: Remove strong downvotes (or limit their power to -3). Keep regular upvotes, regular downvotes, and strong upvotes.
Variant: strong downvoting a post blocks that user's posts from appearing on your feed.
The decline of dueling coincided with firearms getting much more reliable. Duels should have the possibility of death, but should not (usually) be "to the death".
Great digest, as always. My favorite parts were the link to the US census policy explanation and the reminder that most people don't distinguish between choices and mandates.
Your comment made me upvote. I think LW is exactly the right place for this sort of writing. It's got error corrections, empiricism, high epistemic standards, ontological bucketing, a willingness to admit when one is wrong, and maps.
Note to future readers: This thread was in response to my original post, in which I mistakenly switched the $0 and $100.
My deontological terminal value isn't to causally win. It's for FDT agents to acausally lose. Either I win, or the FDT agents abandon FDT. (Which proves that FDT is an exploitable decision theory.)
There's a Daoist answer: Don't legibly and universally precommit to a decision theory.
But the exploit I'm trying to point to is simpler than Daoist decision theory. Here it is: Functional decision theory conflates two decisions:

1. The decision to adopt FDT as one's decision theory in the first place.
2. The object-level decisions FDT outputs (such as whether to cave to a particular blackmail).
I'm blackmailing contingent on decision 1 and not on decision 2. I'm not doing this because I need to win. I'm doing it because I can, and because it puts FDT agents in a hilarious lose-lose situation.
The thing FDT disciples don't understand is that I'm happy to take the scenario where FDT agents don't cave to blackmail. Because of this, FDT demands that FDT agents cave to my blackmail.