LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.


The Coordination Frontier
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong


It looks like we were accidentally limiting the size of the modal gallery. I just changed the styling, so images should appear larger (it may take a couple of minutes for the site to update).

It’s a bit tricksy because we’re adding more over time.

We let users edit their name once, but not multiple times, to keep people from pulling shenanigany impersonation stunts. I’ll change it for you.

We have a bunch of different bedrooms of various sizes/shapes/prices. We'll probably get more info up about them over time. (The current website is basically the minimal version we felt good about launching with after a couple days of work)

My guess is that the ultimate hope here is for:

  • Rationalist, x-risk, and EA-type events of all sizes that seem connected to the mission (which probably get some kind of discount)
  • Mid-to-large events from various non-rationalist people or organizations (which probably get charged roughly the typical market rate)

Mostly because all events have some fixed overhead, it's not quite worth it for smaller meetups or groups. The reason we bought the place was to do various on-mission stuff; the reason we're marketing it beyond that is to make sure we have enough funding for the rest of our mission-aligned work.

But, this is all my off-the-cuff thoughts, not a considered company policy or anything.

I generally tag chapters with "fiction" plus whatever the actual topic is, if applicable (some fiction is more AI-focused, some is more rationality-focused, etc.).

My current moral code says "it's okay to do things like this as part of games, psychology-study-ish things, jokes, etc.", provided you tell people the truth shortly afterwards (which we did).

[EDIT: Actually, there is a correction to be made here, and it stems from my misreading of the message after clicking the link. The lesson: if I make a split-second decision, I need to carefully reexamine it after the fact to understand its true consequences, and beware of anchoring on my split-second reasoning — that anchoring is probably motivated by wanting to justify myself later.]

I really like this takeaway, and generally like how the "rationality test self-assessment" process went here.
