Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
Kickstarter for Coordinated Action
Open Threads
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual

Comments

MikkW's Shortform

Gotcha. What's the metric that it's cheaper on?

MikkW's Shortform

Huh, somehow while chatting with you I got the impression that it was the opposite (chlorophyll more effective than solar panels). Might have just misheard.

microCOVID.org: A tool to estimate COVID risk from common activities

I'd also like to include this additional risk calculator by Peter Hurford, both for cross-checking the various risk levels, and because I found Peter's spreadsheet helpful for orienting around "what sort of risk do I want to expose myself to?"

https://docs.google.com/spreadsheets/d/1LBZWHEk2Jo-IFvZK_smrwYoOTOykB7H-oHb0qYjg2ys/edit#gid

microCOVID.org: A tool to estimate COVID risk from common activities

Thanks. This is great.

A thing I'd be interested in (but I acknowledge it's a bit tricky to navigate), is somehow better leveraging the wisdom of crowds here. I like that the tool as-is is pretty clean and simple, and I like that you provide the raw spreadsheet for people to go tweak the variables to match their own epistemics. 

It'd be nice if I could see how much disagreement there was on the risk analysis of individual components, and ideally see what people's reasoning was. 

There's a lot of trickiness in "if you just let anyone submit disagreeing statements, you're opening yourself up to managing arguments about whether so-and-so is a crackpot or whatever." That sounds like a huge pain, and I'm not sure there's a way to sidestep it.

But, my ideal version of this lets me see different estimates with associated reasoning, and then make some kind of judgment call on my own of whether to go with microcovid.org's default estimate, or wisdom of crowds, or subset-of-wisdom-of-crowds if I trust some people's judgment more than others.
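The "subset-of-wisdom-of-crowds" idea above could be sketched roughly like this: pool several people's microCOVID estimates for one activity, weighting each estimate by how much you trust that person's judgment. This is purely a hypothetical illustration, not anything microcovid.org implements; all names and numbers are made up.

```python
import math

def pooled_estimate(estimates, trust):
    """Trust-weighted geometric mean of per-person microCOVID estimates.

    estimates: dict of person -> estimated microCOVIDs for the activity
    trust:     dict of person -> weight in [0, 1]; weight 0 drops the person
    """
    total_weight = sum(trust.get(p, 0.0) for p in estimates)
    if total_weight == 0:
        raise ValueError("no trusted estimators")
    # Average in log space so no single very large estimate dominates.
    log_sum = sum(trust.get(p, 0.0) * math.log(v) for p, v in estimates.items())
    return math.exp(log_sum / total_weight)

# Example: three made-up estimates for the same activity.
estimates = {"alice": 30.0, "bob": 60.0, "carol": 45.0}
trust = {"alice": 1.0, "bob": 0.5, "carol": 1.0}
print(round(pooled_estimate(estimates, trust), 1))
```

A geometric rather than arithmetic mean is one reasonable choice here because risk estimates tend to disagree multiplicatively (someone thinks an activity is 3x riskier, not 30 microCOVIDs riskier); setting a trust weight to zero recovers the "ignore people whose judgment I don't trust" case.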

TurnTrout's shortform feed

Boggling a bit at the "can you actually reliably find angry people and/or make people angry on purpose?"

Are We Right about How Effective Mockery Is?

As noted recently, I am super into LessWrongers doing more Actual Empiricism. I think this survey had some flaws but on the margin I'd like to see more things like it, and I really liked how it doubled as an exploration of your thought process and letting people practice predictions.

(A genre of LW post I'd like to see is something like a Question asking "how should I design a survey to figure out X?", where people workshop the idea a bit and then actually go run it on posit.ly.)

ricraz's Shortform

I basically agree with your 5-step model (I at least agree it's a more accurate description than Babble and Prune, which I just meant as rough shorthand). I'd add things like "original research/empiricism" or "more rigorous theorizing" to the "Extensive Scholarship" step.

I see the LW Review as basically the first of what I agree should essentially be at least a 5-step process. It's adding a stronger Step 2, and a bit of Step 5 (at least some people chose to rewrite their posts to be clearer and respond to criticism).

...

Currently, we do get non-zero Extensive Scholarship and Original Empiricism. (Kaj's Multi-Agent Models of Mind seems like it includes real scholarship. Scott Alexander / Eli Tyre and Bucky's exploration into Birth Order Effects seemed like real empiricism). Not nearly as much as I'd like.

But John's comment elsethread seems significant:

If the cost of evaluating a hypothesis is high, and hypotheses are cheap to generate, I would like to generate a great deal before selecting one to evaluate.

This reminded me of a couple of posts in the 2018 Review, Local Validity as Key to Sanity and Civilization, and Is Clickbait Destroying Our General Intelligence?. Both of those seemed like "sure, interesting hypothesis. Is it real tho?"

During the Review I created a followup question, "How would we check if Mathematicians are Generally More Law Abiding?", trying to move the question from Stage 2 to Stage 3. I didn't get much serious response, probably because, well, it was a much harder question.

But, honestly... I'm not sure it's actually a question that was worth asking. I'd like to know if Eliezer's hypothesis about mathematicians is true, but I'm not sure it ranks near the top of questions I'd want people to put serious effort into answering. 

I do want LessWrong to be able to follow up Good Hypotheses with Actual Research, but it's not obvious which questions are worth answering. OpenPhil et al. are paying for some types of answers, I think usually by hiring researchers full time. It's not quite clear what the right role is for LW to play in the ecosystem.

The Best Toy In The Park

You seem (to me) to be making an extremely strong claim that the number of posts like this should be zero (as opposed to like 1-3 per year).

I'm still pretty uncertain about whether this post should be frontpaged, but zero just seems like an extremely strong claim to me.

The Best Toy In The Park

I'm not saying that's not a key purpose, but I don't think it can or should be the only purpose. 

Note that current LW frontpage posts are on average much longer than the original Sequences posts – those contained many instances of Eliezer spelling out one idea with one example, very clearly and concisely. And I think this was good, both pedagogically for readers and for Eliezer's own writing/work ethic.

I think there is potentially some argument that posts like this should be shortform rather than top-level posts, but I currently lean against it. But if so, I'd want shortform better integrated than it currently is, such that reading shortform is a more natural thing to build LW habits around. 

But I think you get much more intellectual progress if you enable small, bite-sized posts like this than if you don't. I don't think Jeff was at all likely to build up a theory of what makes toys fun and write an extensive post on it. But I do think other people are more likely now to take this example and have it mulling in the back of their heads, and have it feed into more comprehensive ideas.

(One of) the points of frontpage is to give newcomers a sense of what to expect from LessWrong overall – something representative of where the overall site is trying to go. This includes ideas at different stages of the pipeline. Again, this is not a post I'd want to have here all the time – I frontpage maybe 1 in 20 of Jeff's personal-blog-style posts, and currently there aren't many other personal-blog-style posts being written. All I'm arguing here is that the number should probably be non-zero.

On the readership side – if every LessWrong post required booting up deep thinking mode, then people would only come to LessWrong when they felt able to Deep Think, which is actually pretty rare. A key point of LW in my mind is to leverage a lot of untapped intellectual capital via the power of "being fun and feeling low effort." We're not paying people to be here.
