Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

The Coordination Frontier
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Nod. There is still, like, a lot more than there historically has been.

I think this is probably right, but I also think the sequences generally made a solid case for that from a few different angles, so it was already pretty established on LW by the time this post came out.

FYI, I'm totally fine calling things religion. It's the actual structure of religion (i.e. a community centered around an intergenerational memetic ideology) that I think no longer works.

My point isn't that atomization is good, but that it makes different sets of things practical/impractical for creating community. (i.e. I think most people would rather get their community from dancing or CrossFit groups or whatever than from a humanist religion. Ideological conformity is actually a key ingredient for religion working, and it's incompatible with the kinds of humanism you probably want.)

(I have a bunch more thoughts/models here, and may not have time to articulate them. But a lot of people have attempted the thing you're describing here, and my current bet is that it doesn't work.)

To be clear, I think there's a huge problem humanity is facing about how to do community and various social structure in this era, I just don't think the solution is to try to do religion-structured-things.

Answer by Raemon · Nov 15, 2022

Adding a detail to the other comments saying "robustness is hard" – I heard from someone working at a self-driving car company that right now, you basically need training data to cover every edge case individually. I.e. the car can drive, and it knows when to brake normally, but it doesn't know about school buses and how those are subtly different. Or it doesn't know about snow, etc.

So you end up with specific whitelisted streets that autonomous cars are allowed to drive on, and need to slowly expand the whitelisted area with exhaustive data and testing.
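Purely as a toy illustration of that deployment model (all names here are hypothetical, and a real autonomy stack is vastly more involved), the whitelist logic amounts to something like:

```python
# Toy sketch of whitelist-based deployment: streets are individually
# approved, and approval requires dedicated coverage of each known edge case.
# Everything here is illustrative, not any company's actual system.

approved_streets: set[str] = {"market_st", "valencia_st"}

REQUIRED_EDGE_CASES = {"school_bus", "snow", "construction"}

def route_is_allowed(route: list[str]) -> bool:
    """A route is drivable only if every street on it has been validated."""
    return all(street in approved_streets for street in route)

def approve_street(street: str, edge_cases_covered: set[str]) -> None:
    """Expand the whitelist only once every known edge case has
    dedicated training/test data for this street."""
    missing = REQUIRED_EDGE_CASES - edge_cases_covered
    if missing:
        raise ValueError(f"can't approve {street}: no coverage for {sorted(missing)}")
    approved_streets.add(street)

approve_street("folsom_st", {"school_bus", "snow", "construction"})
print(route_is_allowed(["market_st", "folsom_st"]))      # True
print(route_is_allowed(["market_st", "unmapped_alley"]))  # False
```

The point of the sketch is just that coverage is per-street and per-edge-case, which is why expanding the drivable area is so slow.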

(But, there are totally specific neighborhoods in SF where self-driving cars are legal and in-use)

As the guy who created the Secular Solstice, helped run Sunday Assembly NYC, and has thought a bunch about humanist religion, I feel fairly doomy about this path. I think the Way of Religion is fading – organized religion was a response to one set of societal pressures, and we now live under a different set.

My impression is that hardcore conservative religions are still doing okay, but that more scientifically minded or liberal religions are mostly losing membership. Atomic individualism, the rise of the internet, and other modern forces make religion a lot less compelling than it used to be. There are lessons to be learned from religion, but simply copying its forms doesn't work. Religion-as-we-know-it is built around society being a lot more stable/persistent than it now is.

(also, double-checking whether you've read the sequences on the Craft and the Community, which are the relevant background reading here)

Raemon · 12d · Moderator Comment

Hey Flagland, I feel a bit bad about how this played out, but after thinking more and reading this, the mod team has decided to fully restrict your commenting permissions. I don't really expect your posting about your interests here on shortform to be productive for you or for LW.

We're also experimenting more with moderating in public so it's clearer to everyone where our boundaries are. (I expect this to feel a bit more intense as the person being moderated, but to probably be better overall for transparency.)

My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are. [...] The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I'm not allowed to link either).

There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.

To be clear, I think your topics have been totally fine things to think about and discuss on LessWrong. The problem is that, well, ranting and hate-filled screeds just aren't very productive most of the time. If it seemed like you were here to think clearly and figure out solutions, that'd be a pretty different situation.

I think the investment a serious hobby requires is basically similar to a career change, so I don't really draw a distinction. It's enough investment, and has enough of a track record of burnout, that I think it's totally worth strategizing about based on your own aptitudes.

(To be clear, I think "try it out for a month and see if it feels good to you" is a fine thing to do; my comments here are mostly targeted at people who are pushing themselves to do it out of consequentialist reasoning/obligation.)

My beliefs here are based on hearing from various researchers over the years what timescale good research takes. I've specifically heard that it's hard to evaluate research output for less than 6 months of work, and that 1-2 years is honestly more realistic. 

John Wentworth claims, after a fair amount of attempting to train researchers and seeing how various research careers have gone, that people have about 5 years' worth of bad ideas they need to get through before they start producing actually possibly-good ideas. I've heard secondhand from another leading researcher that a wave of concentrated effort they oversaw from the community didn't produce any actually novel results. My understanding is that Eliezer thinks there's basically been no progress on the important problems.

My own epistemic status here is secondhand, and there may be other people who disagree with the above take. But my sense is that there's been a lot of "trying various ways of recruiting and training researchers" over the years, and that it's at least nontrivial to get meaningful work done.

I don’t think the kind of work we’re talking about here is really possible without something close to ‘making a career of it’ – at least being a sustained, serious hobby for years.

One distinction I want to make here is between people who are really excited to work on AI Alignment (or any particular high-impact career) and motivated to stick with it for years (but who don't seem sufficiently competent), vs. people who are doing it out of a vague sense of obligation and don't feel excited (and also don't seem sufficiently competent).

For the first group, a) I can imagine them improving over time, and b) if they're excited about it and find it fulfilling, like, great! It's the second group I feel most worried about – vague existential angst driving people to throw themselves into careers they aren't actually well suited for. (And for creative research I suspect you do need a degree of enthusiasm in order to make it work.)
