My feeling is this is optimistic. There are people who will fire off a lot of words without having read carefully, so the prior isn't that strong that there's good faith, and unfortunately, I don't think the downvote response is always clear enough to make it feel okay to an author to leave a comment unresponded to. Especially if a comment is lengthy, not as many people will read and downvote it.
Actually, if you first +1 to apply it yourself, you can then hover over it and downvote it. But it will only show up if you hover.
Very valid concern. We had the same thing with "side comments". So far it seems ok. We'd definitely pay a lot of attention to the data on this when designing.
My partner and I put some effort into benefiting from polygenic screening, but alas weren't able to make it work.
Quick details: we had IVF embryos created and screened for a monogenic disease, but (1) this didn't leave us with enough embryos to choose among, and (2) our embryos were created and stored by the UCSF clinic, and any further screening would have required transferring them to another clinic, which would have been time-consuming and expensive. Unfortunately, two rounds of IVF implantation were unsuccessful, so notwithstanding the monogenic disease risk (unclear how ...
Curated. This post is a feat of scholarship, well-written, and practical, on a high-impact topic. Thank you for not just doing the research, but writing it up for everyone else's benefit too. As someone who's personally tried for polygenic screening for IQ, etc., I wish I'd had access to this guide last year.
First things first, I'm pro experiments, so I'd be down to experiment with stuff in this area.
Beyond that, seems to depend on a couple of things:
LessWrong currently has about 2,000 logged-in users per day, and 20-100 new users each day (the wide range includes recent peaks). If the number of viewers wouldn't change that much, perhaps +10%, it wouldn't be a big deal. On the other hand, if Rational An...
Yeah, the current name isn't perfect given the system also has two-axis voting. I might rename it.
Perhaps helping with the mixed-content issue, we might prototype "inline reacts", where you select text and your react applies only to that selection.
Some reactions seem tonally unpleasant:
I agree. See my response to Razied: I think they might have value, and it'll be interesting to see how they get used in practice. I think there's a world where people abuse them to be mean, and a world where they're used more judiciously. The ability to downvote reacts should also help here, I hope.
I think a top-level grouping like this could make sense:
I was imagining something like that too.
There should be a Bikeshed emoji, for comments like this one.
:laugh-react:
Reacts are a big enough change that we wouldn't decide to keep them without a lot of testing and getting a sense of their effects.
I agree that some of these are a bit harsh or cold and can be used in a mean way. At first I was thinking not to include them, but I decided that since this is an experiment, I'd include them and see how they get used in practice.
"Not planning to respond" was requested by Wei Dai among others because he disliked when people just left conversations.
"I already addressed this" is intended for authors who put a lot of effort into a post and then have people and raise objections to think that were already addressed (which is pretty frustrating for the aut...
I've seen this and will write up some thoughts / start participating in the conversation in the next day or two.
*Reacts that require high karma to use, possibly moderator-only
The top-level categories are roughly ordered by how interested I am in them for LessWrong.
I think of Reacts as being more like little mini pre-made comments that fill the niche of things that seem too minor to be worth the trouble of typing up as a regular comment. Either it's something like "I really liked this", where it'd feel cluttered if lots of people wrote that as a comment most of the time[1], or writing it as a comment invites further discussion or obligates one to say more on the topic, when all they wanted was to say "I found this confusing" and not get sucked into a bigger thing.
There’s also a thing in that having par...
Might make this a post later, but here are a few of my current thoughts (will post as separate comments due to length).
Curated. Goodhart's Law is an old core concept for LessWrong, and I love when someone(s) come along and add more resolution and rigor to our understanding, all the more so when they start pointing to how this has practical implications. It would be very cool if this leads to the articulation of disagreements between people that allows for progress in the discussion there, e.g. John vs. Paul, Jan, etc.
And extra bonus points for the exercises at the end. All in all, good stuff; looking forward to seeing more – especially the results as you vary more of the assumptions (e.g. independence) to line up with scenarios we anticipate, e.g. in Alignment.
Hey Michael,
Mod here. Heads up that I don't think this is a great comment (for example, mods would have blocked it as a first comment).
1) This feels out of context for this post. This post is about making predictable updates, not the basic question of whether one should be worried.
2) Your post feels like it doesn't respond to a lot of what's already been said on the topic. So while I think it's legitimate to question concerns about AI, your questioning feels too shallow. For example, many, many posts have been written on why "Therefore, we know th...
All first-time comments get reviewed by moderators to ensure they're productive contributions that fit with LessWrong's particular culture/values/goals. New users start out rate-limited to 3 comments per day and one post per week (but you'll get more commenting privileges as you gain karma).
Learn more at:
A thing I should likely include is a note that the definition gets disputed, but that what I present is the most standard one.
Thanks @David Gross for the many suggestions and fixes! Much appreciated. Clearly I should have gotten this proofread more carefully before posting.
Curated. I like this post taking LessWrong back to its roots of trying to get us humans to reason better and believe true things. I think we need that now as much as we did in 2009, and I fear that my own beliefs have become ossified through identity, social commitment, etc. LessWrong now talks a lot about AI, and AI is increasingly a political topic (this post is a little political in a way I don't want to put front and center, but I'll curate anyway), which makes it worth recalling the ways our minds get stuck and exploring ways to ask ourselves questions where the answer could come back different.