Said Achmiz

Wiki Contributions


You aren’t meant to be able to anti-react to a reaction that no one else has reacted

But this seems bad, then, given the current stable of reactions!

I understand it from the standpoint of interaction design, of course—but then it seems like you should add opposite-valence reactions for those reactions which currently make sense as standalone anti-reacts (see my other comments in this thread for some examples).

It seems like an awkward bit of information-architecture design, though, doesn’t it?

I mean, for some of the reactions, it does, actually, make sense to anti-react to them directly, “from scratch”, as it were. Anti-“Insightful” clearly means “not insightful”, anti-“Virtue of Scholarship” can mean “this should exhibit the virtue of scholarship but fails to do so”, anti-“Clear” and anti-“Hits the Mark” and anti-“Exciting” also all have fairly clear meanings even when not reacting to their regular (non-reversed) versions.

Now, for one thing, that this is the case for some of the reacts but not others seems like it’s bound to lead to confusion and weirdness.

For another thing, it seems like directly anti-reacting with the reactions I list above should be easy to do via the UI, given that it’s clearly meaningful to do so. But this would also (as currently designed) make it easier to directly anti-react with “Wrong” or “Shrug” or whatever, which seems less than ideal.

This seems to me to suggest that the conceptual design of the feature might need some work.

What does it mean to anti-react “Wrong” when no one has reacted “Wrong” (for example)? (Or “Shrug”, or “Additional questions”.)

Also, the hover tooltip for a reaction covers up the one(s) below it, which is very annoying and makes it hard to browse them.

(Note: this comment delayed by rate limit. Next comment on this topic, if any, won’t be for a week, for the same reason.)

Very ironic! I had all three of those in mind as counterexamples to your claim. (Well, not DeepMind specifically, but Google in general; but the other two for sure.)

Bell Labs was indeed “one of history’s most intellectually generative places”. But the striking thing about Bell Labs (and similarly Xerox PARC, and IBM Research) is the extent to which the people working there were isolated from ordinary corporate politics, corporate pressures, and day-to-day business concerns. In other words, these corporate research labs are notable precisely for being enclaves within which corporate/company culture essentially does not operate.

As far as Google and/or DeepMind goes, well… I don’t know enough about DeepMind in particular to comment on it. But Google, in general, is famous for being a place where fixing/improving things is low-prestige, and the way to get ahead is to be seen as developing shiny new features/products/etc. This has predictable consequences for, e.g., usability (Google’s products are infamous for having absolutely horrific interaction and UX design—Google Plus being one egregious example). Everything I’ve heard about Google indicates that the stereotypical “moral maze” dynamics of corporate culture are in full swing there.

Re: Bridgewater, you remember correctly, although “some concerns” is rather an understatement; it’s more like “the place is a real-life Orwellian panopticon, with all the crushing stress and social/psychological dysfunction that implies”. Even more damning is that they never even bother to verify that all of this helps their investing performance in any way. This seems to me to be very obviously the opposite of a healthy epistemic environment—something to avoid as assiduously as we possibly can.

Excellent post! I agree with Daniel—this is a post that, I feel, should’ve been made long ago (which is about as high a level of praise as I can think of).

The discussion under this post is an excellent example of the way that a 3-per-week per-post comment limit makes any kind of useful discussion effectively impossible.

When you say that “world’s best teams and cultures are located in for-profit companies”, what companies do you have in mind? SpaceX? Google? Jane Street…?

the standard that we hold companies to

Company/corporate cultures are hardly a good model to emulate if we want to optimize for truth-seeking, as such cultures famously select for distortions of truth, often lack any incentives for truth-seeking and truth-telling, and generally reward sociopathy (instrumentally) and bullshit (epistemically) to an appalling degree. Is that really the standard to aim for, here?

Human psychology and cognitive science

You mean all that stuff that famously fails to replicate on a regular basis and huge swaths of which have turned out to be basically nonsense…?

the general study of minds-in-general

I don’t think I know what this is. Are you talking about animal psychology, or formal logic (and similarly mathematical fields like probability theory), or what…?

There is some fact-of-the-matter about what sort of human cultures find out the most interesting and important things most quickly.

No doubt there is, but I would like to see something more than just a casual assumption that we have any useful amount of “scientific” or otherwise rigorous knowledge (as opposed to, e.g., “narrative” knowledge, or knowledge that consists of heuristics derived from experience) about this.

I’d recommend that you instead frame it as a recommendation for a specific action, not a question about attitude. “You, dear reader, should do Y next week to reduce expected {average, total, median, whatever} future suffering” would go a lot further than asking why they’re not obsessing over the topic.

This would seem to be at odds with “aim to inform, not persuade”. (Is that still a rule? I seem to recall it being a rule, but now I can’t easily find it anywhere…)
