All of dkirmani's Comments + Replies

Advertisements on Lesswrong (like lsusr's now-deleted "Want To Hire Me?" post) are good, because they let the users of this site conduct mutually-beneficial trade.

I disagree with Ben Pace in the sibling comment; advertisements should be top-level posts, because any other kind of post won't get many eyeballs on it. If users don't find the advertised proposition useful, or if the post is deceptive or annoying, then they should simply downvote the ad.

you can get these principles in other ways

I got them via cultural immersion. I just lurked here for several months while my brain adapted to how the people here think. Lurk moar!

I noticed this happening with goose.ai's API as well, using the gpt-neox model, which suggests that the cause of the nondeterminism isn't unique to OpenAI's setup.

The SAT switched from a 2400-point scale back to a 1600-point scale in 2016.

2 · Screwtape · 4mo
That I feel a bit embarrassed for missing. Thank you for pointing it out; since the question asks which range the respondent's score fell in, hopefully everything got answered correctly. I updated the description.

Lesswrong is a garden of memes, and the upvote button is a watering can.

This post is unlisted and is still awaiting moderation. Users' first posts need to go through moderation.

Is it a bug that I can see this post? I got alerted because it was tagged "GPT".

4 · Raemon · 5mo
Ah, yeah. New user posts are supposed to be hidden for a while, but there are some edge cases that are hard to notice and track down. Thanks.

Timelines. USG could unilaterally slow AI progress. (Use your imagination.)

Even if only a single person's values are extrapolated, I think things would still be basically fine. While power corrupts, it takes time to do so. Value lock-in at the moment of the AI's creation prevents it from tracking (what would be) the power-warped values of its creator.

[This comment is no longer endorsed by its author]
3 · quetzal_rainbow · 5mo
I'm frankly not sure how many of the respectable-looking members of our societies are people who would like to be mind-controlling dictators if they had the chance.

My best guess is that there are useful things for 500 MLEs to work on, but publicly specifying these things is a bad move.

1 · joseph_c · 5mo
Could you please elaborate? Why is it bad to publicly specify these things?

Agree, but LLM + RL is still preferable to MuZero-style AGI.

2 · Jozdien · 6mo
I agree, but this is a question of timelines too. Within the LLM + RL paradigm, we may not need AGI-level RL, or LLMs that can accessibly simulate AGI-level simulacra from self-supervised learning alone. Both of those would take longer to reach than many intermediate points requiring only moderate levels of LLM and RL capability, especially since people are still working on RL now.

I'm not so sure! Some of my best work was done from the ages of 15-16. (I am currently 19.)

2 · Slider · 6mo
I am all for stimulating stuff to do. That sounds like a case where personal lack of money is not a significant factor. To me it would seem that doing that stuff as a hobbyist would be largely similar (i.e., money is a nice bonus, but the tinkering would happen anyway out of intrinsic interest / general development). Not being able to mess with computers because your parents needed hands to pull potatoes from the fields would probably also have made it hard to be a relevant blip when that employer was searching for talent.

I am also more worried about when it systematically affects a lot of people: when "so where do you work?" gets the eyebrow-raising answer "I in fact do not work, but my mother insisted that I should go to school" from a 10-year-old. In practice it would probably instead be working at a fast food joint to pay the interest on the family car loan. If we could make work so enriching that it lifted people up all their lives, then maybe it would be a developmentally desirable environment. But as long as there are unemployed adults, I consider the job of children to be playing, and any employed minor to be a person who is inappropriately not playing. Then of course, if a framework where education is preparation to be a cog in a factory leads to schools being even more stifling than actual factories, having an artificially, stably bad environment is worse than an unstably bad one.

In a certain sense this "preparatory phase" lasts until the end of tertiary education. I am of the impression that "mid stage" people do not push off their work to pick up new skills. By doing the acquisitions early in life we have them "installed", and they pay dividends over most of the length of life. But the environment where you develop the capabilities and the one where you make use of them are different, and the transition costs between them are not always trivial.

Here's an idea for a decision procedure:

  • Narrow it down to a shortlist of 2-20 video ideas that you like
  • For each video, create a conditional prediction market on Manifold with the resolution criterion "if made, would this video get over X views/likes/hours of watch-time", for some constant threshold X
  • Make the video the market likes the most
  • Resolve the appropriate market
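The selection step above can be sketched in a few lines of Python. The idea names and probabilities here are hypothetical; a real version would pull live probabilities from Manifold's API.

```python
# Sketch of the "make the video the market likes the most" step.
# market_probs maps each shortlisted idea to its conditional market's
# current probability of clearing the threshold X (made-up numbers).

def pick_video(market_probs: dict[str, float]) -> str:
    """Return the idea whose conditional market is most optimistic."""
    return max(market_probs, key=market_probs.get)

market_probs = {
    "idea A": 0.42,
    "idea B": 0.71,
    "idea C": 0.55,
}

print(pick_video(market_probs))  # → idea B
```

The unmade videos' markets resolve N/A, which is what makes the markets conditional.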

[copying the reply here because I don't like looking at the facebook popup]

(I usually do agree with Scott Alexander on almost everything, so it's only when he says something I particularly disagree with that I ever bother to broadcast it. Don't let that selection bias give you a misleading picture of our degree of general agreement. #long)

I think Scott Alexander is wrong that we should regret our collective failure to invest early in cryptocurrency. This is very low on my list of things to kick ourselves about. I do not consider it one of my life's regrets... (read more)

Yes. From the same comment:

Spend a lot of money on ad campaigns and lobbying, and get {New Hampshire/Nevada/Wyoming/Florida} to nullify whatever federal anti-gambling laws exist, and carve out a safe haven for a serious prediction market (which does not currently exist).

And:

You could alternatively just fund the development of a serious prediction market on the Ethereum blockchain, but I'm not as sure about this path, as the gains one could get might be considered "illegal". Also, a fully legalized prediction market could rely on courts to arbitrate m

... (read more)

Agreed. To quote myself like some kind of asshole:

In order for a prediction market to be "serious", it has to allow epistemically rational people to get very very rich (in fiat currency) without going to jail, and it has to allow anyone to create and arbitrate a binary prediction market for a small fee. Such a platform does not currently exist.

2 · eigen · 7mo
Thank you for being a temporary asshole; that is a great comment. Do you have a sense of how it could be done? Incidentally, the first prediction market about this went up on Polymarket on November 8, and it surprisingly gave a 94 percent probability of them not halting withdrawals.

TikTok isn't doing any work here, I compile the text to mp4 using a script I wrote.

Thanks!

might be able to find a better voice synthesizer that can be a bit more engaging (not sure if TikTok supplies this)

Don't think I can do this that easily. I'm currently calling Amazon Polly, AWS's TTS service, from a Python script I wrote to render these videos. TikTok does supply an (imo) annoying-sounding female TTS voice, but that's off the table since I would have to enter all the text manually on my phone.

experimentation is king.

I could use Amazon's Mechanical Turk to run low-cost focus groups.

2 · ersatz · 7mo
You should probably use Google Neural2 [https://cloud.google.com/text-to-speech/docs/wavenet] voices which are far better.

True, but then I have to time the text-transitions manually.

The "anti-zoomer" sentiment is partially "anti-my-younger-self" sentiment. I, personally, had to expend a good deal of effort to improve my attention span, wean myself off of social media, and reclaim whole hours of my day. I'm frustrated because I know that more is possible.

I fail to see how this is an indictment of your friend's character, or an indication that he is incapable of reading.

That friend did, in fact, try multiple times to read books. He got distracted every time. He wanted to be the kind of guy that could finish books, but he couldn't. I... (read more)

(creating a separate thread for this, because I think it's separate from my other reply)

That friend did, in fact, try multiple times to read books. He got distracted every time. He wanted to be the kind of guy that could finish books, but he couldn’t.

You've described the problem exactly. Your friend didn't have a clear reason to read books. He just had this vague notion that reading books was "good". That "smart people" read lots of books. Why? Who knows, they just do.

I read a lot. But I have never read just for the sake of reading. All of my reading h... (read more)

Further evidence that I should write a factpost investigating whether attention spans have been declining.

1 · [comment deleted] · 7mo
3 · quanticle · 7mo
When I went to college, I knew a guy who spent >7 hours per day playing World of Warcraft. He ended up dropping out. My dad knows multiple people who failed out of IIT because they spent too much time playing bridge (the card game). Every generation has its anecdotes about smart people who got nothing done because they were too interested in trivialities.

Thank you for the feedback! I didn't consider the inherent jumpiness/grabbiness of Subway Surfers, but you're right, something more continuous is preferable. (edit: but isn't the point of the audio to allow your eyes to stray? hmm)

I will probably also take your advice wrt using the Highlights and CFAR handbook excerpts in lieu of the entire remainder of R:AZ.

9 · Esben Kran · 7mo
Thank you for making this, I think it's really great!

The idea of the attention-grabbing video footage is that you're not just competing between items on the screen; you're also competing with the videos that come before and after your video. Therefore, yours has to be visually engaging just for that zoomer (et al. [https://www.lesswrong.com/posts/Bfq6ncLfYdtCb6sat/i-converted-book-i-of-the-sequences-into-a-zoomer-readable?commentId=vuFP3HwuerTgBNEhC]) dopamine rush. Subway Surfers is inherently pretty active and, as you mention, the audio will help you here, though you might be able to find a better voice synthesizer that can be a bit more engaging (not sure if TikTok supplies this).

So my counterpoint to Trevor1's is that we probably want to keep something like Subway Surfers in the background, but that can of course be many things, such as animated AI-generated images or NASA videos of the space race. Who really knows; experimentation is king.

guessing this wouldn't work without causal attention masking

2 · Neel Nanda · 7mo
Yeah, I think that's purely symmetric.

distillation of Taleb's core idea:

  • expected value estimates are dominated by tail events (unless the distribution is thin-tailed)
  • repeated sampling from a distribution usually does not yield information about tail events
  • therefore repeated sampling can be used to estimate EVs iff the distribution is thin-tailed according to your priors
  • if the distribution is fat-tailed according to your priors, how do you determine EV?
  • estimating EV is much harder
  • some will say to use the sample mean as EV anyway; they are wrong
  • in the absence of information on tail events (which is ... (read more)
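A minimal simulation of the middle claims, using Pareto draws as an illustrative stand-in for a fat-tailed payoff (the distributions and parameters are mine, not Taleb's):

```python
import random

random.seed(0)

def sample_mean(alpha: float, n: int) -> float:
    """Mean of n draws from a Pareto(alpha) distribution with x_m = 1."""
    return sum(random.paretovariate(alpha) for _ in range(n)) / n

# Fat tail (alpha = 1.1): the true EV is alpha / (alpha - 1) = 11, but it is
# dominated by rare huge draws most samples never see, so repeated sample
# means disagree wildly with each other and with the true EV.
fat_means = [sample_mean(1.1, 1000) for _ in range(5)]

# Thin-ish tail (alpha = 10): the true EV is 10/9 ≈ 1.11, and sample means
# cluster tightly around it.
thin_means = [sample_mean(10.0, 1000) for _ in range(5)]

def spread(xs):
    return max(xs) - min(xs)

print(spread(fat_means), spread(thin_means))
```

The fat-tailed spread comes out orders of magnitude larger: the sample mean "works" only for the thin-tailed case.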

Could also be that you're a more reliable worker if you have a ton of student debt.

1 · the gears to ascension · 7mo
you'd be more inclined to be obedient if you're a debt slave, yes

Wouldn't curing aging turn people into longtermists?

4 · Mitchell_Porter · 8mo
I'm saying AI is on track to take over in the short term. That means right now is our one chance to make it something we can coexist with. 

If you can't see why a single modern society locking in their current values would be a tragedy of enormous proportions, imagine an ancient civilization such as the Romans locking in their specific morals 2000 years ago. Moral progress is real, and important.

That wouldn't be a tragedy if I were a Roman.

1 · anonce · 8mo
Yes it would, at least if you mean their ancient understanding of morals.

Meanwhile, Adderall works for people whether they “have” “ADHD” or not. It may work better for people with ADHD – a lot of them report an almost “magical” effect – but it works at least a little for most people. There is a vast literature trying to disprove this. Its main strategy is to show Adderall doesn’t enhance cognition in healthy people. Fine. But mostly it doesn’t enhance cognition in people with ADHD either. People aren’t using Adderall to get smart, they’re using it to focus. From Prescription stimulants in individuals with and without attention

... (read more)
1 · bvbvbvbvbvbvbvbvbvbvbv · 9mo
A one-word answer here is not very useful, I think, but I somewhat agree. Adderall and other ADHD medications can, in a context of abuse, lead to somewhat "more productive time" or something like that. This is well known. But too few people know that having a reduced "attention store" throughout the day can be a symptom of ADHD. I can recall a patient who was not overly hyperactive but was daydreaming a lot; starting the medication actually helped him go "slower" in his mind, but for way longer, hence increasing output.

Set up two bitcoin wallets, transfer funds from one to the other, and put your hash in the message field.

The bitcoin blockchain is both immutable and public, making it an ideal medium for sealed predictions. While the LW servers might be compromised, there are game-theoretic guarantees that the blockchain won't be.
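The commit step is just a hash. A minimal sketch (the prediction text and nonce are made up; the printed digest is what would go in the transaction's message field):

```python
import hashlib

# Hash the prediction locally; publish only the digest on-chain.
# Revealing the text (and nonce) later proves the prediction
# predates the event.
prediction = "By 2030, X happens. nonce=8c1f2e"  # keep this text secret
digest = hashlib.sha256(prediction.encode()).hexdigest()

print(digest)  # 64 hex characters: the sealed prediction
```

Anyone can later verify the reveal by recomputing the hash and checking it against the on-chain digest.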

Much of why my priors say that the e/acc thing is organic is just my gestalt impression of being on Twitter while it was happening. Unfortunately, that's not a legible source of evidence to people-who-aren't-me. I'll tell you what information I do remember, though:

  • "Bitalik Vuterin" does not ring a bell, I don't think he was a very consequential figure to begin with.
  • @BasedBeffJezos claims to be the same person as @BasedBeff, and claims that he was locked out of his @BasedBeff account on 2022-08-08 due to "misinformation", which he attributes to "blue che
... (read more)
2 · David Scott Krueger (formerly: capybaralet) · 9mo
I think it was something else like that, not that.

I'm Twitter mutuals with some of these e/acc people. I think that its founders and most (if not all) of its proponents are organic accounts, but it still might be a good idea to not signal-boost them.

2 · David Scott Krueger (formerly: capybaralet) · 9mo
What makes you think that?  Where do you think this is coming from?  It seems to have arrived too quickly (well on to my radar at least) to be organic IMO, unless there is some real-world community involved, which there doesn't seem to be?

True, but I endorse the "zero-effort" plan because it destroys trivial inconveniences.

I've also read that the definition of "high-dimensional" is relative to the number of data points you have in your dataset, such that "high-d" means dims is greater[1] than n, "low-d" means n is greater[1] than dims, and "medium-d" means the two are of similar magnitude.

However, this would mean that much of modern machine learning is done on "low-dimensional" data, which is absurd.


  [1] By a substantial amount.

1 · Mo Putera · 8mo
Yeah, I agree that's a weird way to define "high-dimensional". I'm more partial to defining it as "when the curse of dimensionality [https://en.wikipedia.org/wiki/Curse_of_dimensionality] becomes a concern", which is less precise but more useful.

I don't work for OpenAI. I just saw Sam Altman tweet this post, so I linkposted it here.

There are about 2 to 5 steps, each with a due diligence procedure, in order to manipulate reality in any way with crypto, or even to transfer from a "niche" crypto to a more widely-used one such as ETH.

Nitpick: with Uniswap (or another DEX), you can convert your niche crypto to ETH without a due diligence/KYC check.

1 · George3d6 · 9mo
Last time I checked, that wouldn't work for a sizeable amount. Maybe I'm wrong? I claim no expertise in crypto and, as I said, I think that's my weakest point. In principle, I can see smart-contract-based swapping with a large liquidity pool + an ETH VM sidechain being sufficient to do this. It wouldn't fit the exact description in the story, but it would serve roughly the same purpose, and would be sufficient if you assume the ETH VM-optimized sidechain has enough volume (or a similar thing, with whatever would overthrone ETH in 20xx).
2 · Vaniver · 9mo
I find it somewhat implausible that you'll turn a few hundred million dollars worth of crypto to compute without having a KYC check at some point, which is required by the story. [Even if you have a giant pile of ETH, will AWS or others take it without knowing more about who you are?]

Bootstrapped SaaS starting from AWS' free tier? Substack?

I argue that all organisms suffer from rot. There is a thermodynamic lower bound on rot; the larger and more complex the organism is, the more rot. I argue that biological life solves this fundamental problem with a bounded-error lifecycle strategy.

The germline doesn't rot, though. Human egg and sperm-producing cells must maintain (epi-)genomic integrity indefinitely.

[This comment is no longer endorsed by its author]
2 · JBlack · 10mo
Germlines do rot. It's just countered by branching and pruning faster than the rot.

You could post a research bounty on this site.

Agreed. Maybe the blocker here is that LW/EA people don't have many contacts in public policy, and are much more familiar with tech.

1 · OldEphraim · 1y
Adderall caused weight gain for me, and anecdotally also for a close friend of mine. Wellbutrin works, though, at least for me personally. Lots of drugs have effects that vary wildly between different individuals (and they may even sometimes cause paradoxical effects), so I'm not sure that variance in response to amphetamines is necessarily that much of a hint about what is causing the obesity epidemic. If semaglutide works universally, or nearly so -- and early studies are very promising -- then that might be a strong hint as to what is causing the obesity epidemic.
8 · Eliezer Yudkowsky · 1y
Adderall worked for you.  It didn't work for me.

Even more obviously, why do aliens adhere to the IEEE 754 standard? My interpretation is that the cryptanalyst from the post has indeed been pranked by their friend at NASA.

How much gain do you think is actually available for someone who is still limited by human tissue brain performance and just uses the best available consistently winning method?

Quite a bit.

https://www.gwern.net/on-really-trying

Has anyone tried a five-dimensional representation instead of a two-dimensional one? 2095 isn't divisible by 2 or by 3, but it is divisible by 5. Maybe our "aliens" have four spatial dimensions and one temporal.

5 · Rafael Harth · 1y
I've tried highlighting every k-th point out of n with the same color for a bunch of n, but it all looks random. Right now, I've also tried using only 2 of 5 float values. It looks like a dead end, even though the idea is good. I think the data is 1-dimensional, the interesting part is how each number is transformed into the next, and the 2d representation just happens to catch that (e.g., if x is transformed into −x, it lands on the diagonal).
5 · Nnotm · 1y
Second try: When looking at scatterplots of any 3 out of 5 of those dimensions and interpreting each 5-tuple of numbers as one point, you can see the same structures that are visible in the 2d plot, the parabola and a line - though the line becomes a plane if viewed from a different angle, and the parabola disappears if viewed from a different angle.
1 · Nnotm · 1y
Looking at scatterplots of any 3 out of 5 of those dimensions, it looks pretty random, much less structure than in the 2d plot. Edit: Oh, wait, I've been using chunks of 419 numbers as the dimensions but should be interleaving them

Now that I know that, I've updated towards the "float64" area of hypothesis space. But in defense of the "cellular automaton" hypotheses, just look at the bitmap! Ordered initial conditions evolving into (spatially-clumped) chaos, with at least one lateral border exhibiting repetitive behavior:

I'm trying to figure out why the left hand side of the full picture has a binary 01010101

Yeah, I originally uploaded this version by accident, which is the same as the above image, but the lines that go [0,0,0, .... ,0,1,0] are so common that I removed them and represented them as a single bit on the left.

Alternative hypothesis: The first several bits (of each 64-bit chunk) are less chaotic than the middle bits due to repetitive border behavior of a 1-D cellular automaton. This hypothesis also accounts for the observation that the final seven bits of each chunk are always either 1000000 or 0111111.

If you were instead removing the last n bits from each chunk, you'd find another clear phase transition at n=7, as the last seven bits only have two observed configurations.

1 · blf · 1y
If you calculate the entropy −(p0·log2(p0) + p1·log2(p1)) of each of the 64 bit positions (where p0 and p1 are the proportions of 0 and 1 bits among the 2095 values at that position), then you'll see that the entropy depends much more smoothly on position if we convert from little-endian to big-endian, namely if we sort the bits as 57,58,...,64, then 49,50,...,56, then 41,42,...,48, and so on until 1,...,8. That doesn't sound like a very natural boundary behaviour of an automaton, unless it is then encoded as little-endian for some reason.
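The per-position entropy calculation blf describes can be sketched as follows, with synthetic columns standing in for the real 2095 × 64 bit array:

```python
import math

def bit_entropy(bits: list[int]) -> float:
    """Shannon entropy -(p0*log2(p0) + p1*log2(p1)) of one bit position."""
    p1 = sum(bits) / len(bits)
    return -sum(p * math.log2(p) for p in (1 - p1, p1) if p > 0)

# Synthetic stand-ins for one bit position sampled across all values:
constant_col = [0] * 100   # e.g., a sign bit that is always 0
mixed_col = [0, 1] * 50    # e.g., a low mantissa bit that is 50/50

print(bit_entropy(constant_col))  # 0.0 (fully predictable position)
print(bit_entropy(mixed_col))     # 1.0 (maximally uncertain position)
```

Plotting this entropy against bit position, in both byte orders, is what reveals the smooth big-endian profile blf mentions.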