All of Matthew Barnett's Comments + Replies

Congratulations. However, unless I'm mistaken, you simply said you'd be open to taking the bet. We didn't actually take it with you, did we?

Evan R. Murphy (5d):
Yeah, I guess I was a little unclear on whether your post constituted a bet offer where people could simply reply to accept, as I did, or whether you were doing specific follow-up to finalize the bet agreements. I see you did do that with Nathan and Tomás, so it makes sense that you didn't view our bet as on. It's okay, I was more interested in the epistemic/forecasting points than the $1,000 anyway.  ;) I commend you for following up and for your great retrospective analysis of the benchmark criteria. Even though I offered to take your bet, I didn't realize just how problematic the benchmark criteria were for your side of the bet. Most importantly, it's disquieting and bad news that long timelines are looking increasingly implausible. I would have felt less worried about a world where you were right about that.

As I said, it's ridiculous to think someone in either the Google or OpenAI camp won't have more than 1 billion USD in training hardware in service for a single model (training many instances in parallel).

I think you're reading this condition incorrectly. The $1 billion would need to be spent for a single model. If OpenAI buys a $2 billion supercomputer but they train 10 models with it, that won't necessarily qualify.

Gerald Monroe (6d):
Then why did you add the term? I assume you meant that the entire supercomputer is working on instances of the same model at once. Obviously training is massively parallel. Once the model is done, the supercomputer will obviously be used for other things.

I suspect the MMLU and MATH milestones are the easiest to achieve. They will probably be reached after a GPT-4-level model is specialized to perform well in mathematics, as Minerva was.

I think you're overconfident here. I'm quite skeptical that GPT-4 already got above 80% on every single task in the MMLU since there are 57 tasks and it got 86.4% on average. I'm also skeptical that OpenAI will very soon spend >$1 billion to train a single model, but I definitely don't think that's implausible. "Almost certain" for either of those seems wrong.
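
To make the arithmetic concrete, here is a toy sketch with made-up per-task scores (not GPT-4's actual results), showing that an average in the mid-80s across 57 tasks is entirely compatible with several tasks falling below 80%:

```python
# Toy illustration (made-up numbers, not GPT-4's actual per-task results):
# a 57-task average in the mid-80s can coexist with several tasks under 80%.
scores = [89.0] * 45 + [82.0] * 6 + [72.0] * 6   # 57 hypothetical task scores
average = sum(scores) / len(scores)
below_80 = sum(s < 80 for s in scores)
print(f"average = {average:.1f}%, tasks below 80% = {below_80}")
# -> average = 86.5%, tasks below 80% = 6
```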

Gerald Monroe (6d):
There's GPT-5 though, or GPT-4.math.finetune. You saw the Minerva results. You know there will be a significant gain from a fine-tune, likely enough to satisfy 2-3 of your conditions. As I said, it's ridiculous to think someone in either the Google or OpenAI camp won't have more than 1 billion USD in training hardware in service for a single model (training many instances in parallel). Think about what that means. One A100 is $25k. The cluster Meta uses is 2048 of them, so about $50 million. Why would you not go for the most powerful model possible as soon as you can? Either the world's largest tech giant is about to lose it all, or they are going to put in proportional effort.

Assuming the British government gets a fair price for the hardware, and actually has the machine running prior to the bet end date, does this satisfy the condition?

No.

That condition resolves on the basis of the cost of the training run, not the cost of the hardware. You can tell because we spelled out the full details of how to estimate costs, and the estimate depends on the FLOP used in the training run.
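
For a sense of what a FLOP-based estimate looks like, here is a minimal sketch. It is not the bet's official procedure (that is spelled out in the original post); the 6 × parameters × tokens approximation is a common rule of thumb, and the model size, token count, and price per FLOP below are illustrative assumptions:

```python
# A rough sketch of a FLOP-based cost estimate (not the bet's official
# procedure, which is spelled out in the original post). The 6 * params *
# tokens approximation is a common rule of thumb; the parameter count, token
# count, and price per FLOP below are made-up, illustrative numbers.

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the ~6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

def training_cost_usd(flop: float, usd_per_flop: float) -> float:
    """Convert FLOP to dollars at an assumed effective cloud rate."""
    return flop * usd_per_flop

# Illustrative: a hypothetical 1e12-parameter model trained on 2e13 tokens,
# priced at a hypothetical 5e-19 USD per FLOP of effective cloud compute.
flop = training_flop(1e12, 2e13)        # 1.2e26 FLOP
cost = training_cost_usd(flop, 5e-19)   # ~6e7 USD under these assumptions
print(f"{flop:.2e} FLOP -> ${cost:,.0f}")
```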

But honestly at this point I'm considering conceding early and just paying out, because I don't look forward to years of people declaring victory early, which seems to already be happening.

To be clear, I think I will lose, but I think this is weak evidence. The bet says that $1bn must be spent on a single training run, not a single supercomputer.

Gerald Monroe (6d):
"OR hardware that would cost $1bn if purchased through competitive cloud computing vendors at the time on a training run to develop a single ML model" Assuming the British government gets a fair price for the hardware, and actually has the machine running prior to the bet end date, does this satisfy the condition? I don't actually think it will be the one that ends the bet as I expect the British government to take a while to actually implement this, but possibly they finish before 2026.

Did they reveal how GPT-4 did on every task in the MMLU? If not, it's not clear whether the relevant condition here has been met yet.

Gerald Monroe (6d):
So you may lose the bet imminently: https://www.theguardian.com/technology/2023/mar/15/uk-to-invest-900m-in-supercomputer-in-bid-to-build-own-britgpt. 900 million pounds is roughly 1 billion USD. And for the other part, for MMLU your 'doubt' hinges on it doing <80% on a subtest while reaching 88% overall. I know it's just a bet over a small amount of money, but to lose in 1 year is something.

Well, to be fair, I don't think many people realized how weak some of these benchmarks were. It is hard to tell without digging into the details, which I regrettably did not do either.

I'm not sure. It depends greatly on the rate of general algorithmic progress, which I think is unknown at this time. I think it is not implausible (>10% chance) that we will see draconian controls that limit GPU production and usage, decreasing effective compute available to the largest actors by more than 99% from the trajectory under laissez faire. Such controls would be unprecedented in human history, but justified on the merits, if AI is both transformative and highly dangerous. 

It should be noted that, to the extent that more hardware allows for more algorithmic experimentation, such controls would also slow down algorithmic progress.

What is your source for the claim that effective compute for AI is doubling more than once per year? And do you mean effective compute in the largest training runs, or effective compute available in the world more generally?

A retrospective on this bet:

Having thought about each of these milestones more carefully, and having already updated towards short timelines months ago, I think it was really bad in hindsight to make this bet, even on medium-to-long timeline views. Honestly, I'm surprised more people didn't want to bet us, since anyone familiar with the relevant benchmarks probably could have noticed that we were making quite poor predictions.

I'll explain what I mean by going through each of these milestones individually,

  • "A model/ensemble of models achieves >80% on all
... (read more)
Stephen McAleese (5d):
You said that you updated and shortened your median timeline to 2047 and your mode to 2035. But it seems to me that you need to shorten your timelines again. The It's time for EA leadership to pull the short-timelines fire alarm [https://www.lesswrong.com/posts/wrkEnGrTTrM2mnmGa/retracted-it-s-time-for-ea-leadership-to-pull-the-short] post says: It seems that the purpose of the bet was to test this hypothesis: My understanding is that if AI progress occurred slowly and no more than one of the advancements listed were made by 2026-01-01, then this short-timelines hypothesis would be proven false and could then be ignored. However, the bet was conceded on 2023-03-16, which is much earlier than the deadline, and therefore the bet failed to prove the hypothesis false. It seems to me that the rational action is now to update toward believing that this short-timelines hypothesis is true, and 3-7 years from 2022 is 2025-2029, which is substantially earlier than 2047.
Stephen McAleese (5d):
I don't agree with the first point: Although the MMLU task is fairly straightforward given that there are only 4 options to choose from (25% accuracy for random choices) and experts typically score about 90%, getting 80% accuracy still seems quite difficult for a human given that average human raters only score about 35%. Also, GPT-3 only scores about 45% (GPT-3 fine-tuned still only scores 54%), and GPT-2 scores just 32% even when fine-tuned. One of my recent posts [https://www.lesswrong.com/posts/iQx2eeHKLwgBYdWPZ/retrospective-on-gpt-predictions-after-the-release-of-gpt-4] has a nice chart showing different levels of MMLU performance. Extract from the abstract of the paper [https://arxiv.org/pdf/2009.03300.pdf] (2021):
[comment deleted] (7d)
M. Y. Zuo (9d):
Thanks for posting this retrospective. Considering your terms were so in favour of the bet takers, I was also surprised last summer when so few actually committed. Especially considering there were dozens, if not hundreds, of LW members with short timelines who saw your original post. Perhaps that says something about actual beliefs vs talked about beliefs?

Having not read the detailed results yet, I would be quite surprised if [Gato] performed better on language-only tasks than a pretrained language model of the same size...

In general, from a "timelines to risky systems" perspective, I'm not that interested in these sorts of "generic agents" that can do all the things with one neural net; it seems like it will be far more economically useful to have separate neural nets doing each of the things and using each other as tools to accomplish particular tasks and so that's what I expect to see.

Do you still believ... (read more)

Rohin Shah (9d):
Sorry, I think that particular sentence of mine was poorly written (and got appropriate pushback [https://www.lesswrong.com/posts/xxvKhjpcTAJwvtbWM/deepmind-s-gato-generalist-agent?commentId=FjSToJGgQzKGsvdNb] at the time). I still endorse my followup comment [https://www.lesswrong.com/posts/xxvKhjpcTAJwvtbWM/deepmind-s-gato-generalist-agent?commentId=7RSPRbmTwFBsigN5W], which includes this clarification: In particular, my impression with Gato is that it was not showing much synergy. I agree that synergy is possible and likely to increase with additional scale (and I'm pretty sure I would have said so at the time, especially since I cited a different example of positive transfer). (Note I haven't read the mixed-modal scaling laws paper in detail so I may be missing an important point about it.)

Can you provide an example (without naming people)?

Said Achmiz (20d):
Sure—what I had in mind was mostly stuff like “lockdowns / mask mandates / etc. are good/necessary” -> “lockdowns / mask mandates / etc. are bad/harmful/etc.”. People have drawn entirely the wrong conclusions about these things from observations of the last two years. (Robyn Dawes, in Rational Choice in an Uncertain World, writes about this mistake, wherein people learn from experience when they really shouldn’t; this seems like a good real-life example.)

Baumol-effect jobs where it is essential (or strongly preferred) that the person performing the task is actually a human being. So: therapist, tutor, childcare, that sort of thing

Huh. Therapists and tutors seem automatable within a few years. I expect some people will always prefer an in-person experience with a real human, but if the price is too high, people are just going to talk to a language model instead.

However, I agree that childcare does seem like it's the type of thing that will be hard to automate.

My list of hard to automate jobs would probably include things like: plumber, carpet installer, and construction work.

maia (21d):
Therapy is already technically possible to automate with ChatGPT. The issue is that people strongly prefer to get it from a real human, even when an AI would in some sense do a "better" job. EDIT: A recent experiment demonstrating this: https://www.nbcnews.com/tech/internet/chatgpt-ai-experiment-mental-health-tech-app-koko-rcna65110

I'm happy that Scott Sumner commented. I think his analysis is reasonable, and I roughly agree with what he said. My only major complaint is that I think he might have misread the extent to which my article was intended as a criticism of his policy recommendations as opposed to Eliezer's specific commentary. I think it's plausible that the new monetary policy had a modest but positive counterfactual impact on RGDP over several years. I just don't think that's the impression Eliezer gave in the book when he provided the example.

I think your critiques are great since you're thinking clearly about how this approach is supposed to work. At a high level my reply to your comment is something like, "I basically agree, but don't think that anything you mentioned is devastating. I'm trying to build something that is better than Bio Anchors, and I think I probably succeeded even with all these flaws."

That said, I'll address your points more directly.

My understanding is that the irreducible part of the loss has nothing (necessarily) to do with "entropy of natural text" and even less with "

... (read more)

training on whichever distribution does give human-level reasoning might have substantially different scaling regularities.

I agree again. I talked a little bit about this at the end of my post, but overall I just don't have any data for scaling laws on better distributions than the one in the Chinchilla paper. I'd love to know the scaling properties of training on scientific tasks and incorporate that into the model, but I just don't have anything like that right now.

Also, this post is more about the method rather than any conclusions I may have drawn. I hope this model can be updated with better data some day.

In the notebook, the number of FLOP to train TAI is deduced a priori. I basically just estimated distributions over the relevant parameters by asking what I'd expect from TAI, rather than taking into consideration whether those values would imply a final distribution that predicts TAI arrived in the past. It may be worth noting that Bio Anchors also does this initially, but it performs an update by chopping off some probability from the distribution and then renormalizing. I didn't do that yet because I don't know how best to perform the update.

Personally, I don't think a 12% chance that TAI already arrived is that bad, given that the model is deduced a priori. Others could reasonably disagree though.
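
For illustration, here is a minimal sketch of the kind of a-priori check involved; the prior over required FLOP and the figure for the largest training run so far are made-up assumptions, not the notebook's actual inputs:

```python
# A minimal sketch (not the actual notebook) of the a-priori check described
# above: sample a prior over the FLOP needed for TAI and ask what fraction of
# that distribution falls at or below compute already used. The prior and the
# "largest run so far" figure are made-up assumptions, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical prior over log10(FLOP required for TAI).
log10_flop_required = rng.normal(loc=31.0, scale=3.0, size=n)

# Assumed log10(FLOP) of the largest training run to date (illustrative).
log10_largest_run_so_far = 25.5

p_already_enough = np.mean(log10_flop_required <= log10_largest_run_so_far)
print(f"P(required FLOP already reached) = {p_already_enough:.1%}")
```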

But most science requires actually looking at the world. The reason we spend so much money on scientific equipment is because we need to check if our ideas correspond to reality, and we can't do that just by reading text.

I agree. The primary thing I'm aiming to predict using this model is when LLMs will be capable of performing human-level reasoning/thinking reliably over long sequences. It could still be true that, even if we had models that did that, they wouldn't immediately have a large scientific/economic impact on the world, since science requires a ... (read more)

Richard Korzekwa (1mo):
Yeah, and I agree this model seems to be aiming at that. What I was trying to get at in the later part of my comment is that I'm not sure you can get human-level reasoning on text as it exists now (perhaps because it fails to capture certain patterns), that it might require more engagement with the real world (because maybe that's how you capture those patterns), and that training on whichever distribution does give human-level reasoning might have substantially different scaling regularities. But I don't think I made this very clear and it should be read as "Rick's wild speculation", not "Rick's critique of the model's assumptions".

I'll just note that NGDP growth from 2013 to 2017 (when Inadequate Equilibria was published) was about 2% per year, whereas RGDP went up by about 1% per year. This definitely makes me sympathetic to "they didn't go far enough", but I'm still not sympathetic to "they never tested my theory", since you'd still expect some noticeably large effects from the new policy if the old monetary policy was responsible for a multi-trillion real-dollar problem.

artifex (1mo):
Trillions of dollars in lost economic growth just seems like hyperbole. There’s some lost growth from stickiness and unemployment but of course the costs aren’t trillions of dollars.

It sounds like you are saying that he was making claims about 2.

No, I think he was also wrong about the Bank of Japan's relative competence. I didn't argue this next point directly in the post because it would have been harder to argue than the other points I made, but I think Eliezer is just straight-up wrong that the Bank of Japan was pursuing a policy prior to 2013 that made Japan forgo trillions of dollars of economic growth.

To be clear, I don't think that the Bank of Japan was following the optimal monetary policy by any means, and I curren... (read more)

Adam Zerner (1mo):
Hm, I think I'm still confused about what thesis you're pointing to and where, if anywhere, you and I disagree. I think we agree:

  1. That you should be more hesitant to disagree in places where the incentives are strong for others to get things right (like stocks).
  2. That you should be more hesitant to disagree with people who seem smart.
  3. That you should be more hesitant to disagree about topics you are less knowledgeable about.
  4. That you should be more hesitant to disagree about topics that are complex.
  5. That the above is not an exhaustive list of things to consider when thinking about how hesitant you should be to disagree. There's a lot more to it.

I think Eliezer, as well as most reasonable people, would agree with the above. The difficulty comes when you start considering a specific example and getting concrete. How smart do we think the people who run the Bank of Japan are? Are they incentivized to do what is best for the country, or are other incentives driving their policy? How complex is the topic? For the Bank of Japan example, it sounds like you (as well as most others) think that Eliezer was too confident. Personally I am pretty agnostic about that point and don't have much of an opinion. I just don't see why it matters. Regardless of whether Eliezer himself is prone to being overconfident, points 1 through 5 still stand, right? Or do you think one/part of Eliezer's thesis goes beyond them?

I think the claims of the book along the lines of the following quote were definitely undermined in light of this factual error,

We have a picture of the world where it is perfectly plausible for an econblogger to write up a good analysis of what the Bank of Japan is doing wrong, and for a sophisticated reader to reasonably agree that the analysis seems decisive, without a deep agonizing episode of Dunning-Kruger-inspired self-doubt playing any important role in the analysis.

In particular, I think this error highlights that even sophisticated observers can ... (read more)

Adam Zerner (1mo):
I see, thanks for clarifying. Here is how I am thinking about it. Consider the claim "the Bank of Japan is being way too tight with its monetary policy". Consider two reasons why that claim might be wrong: 1. The Bank of Japan pursued a tight monetary policy and they probably know what they're doing. 2. Monetary policy is a complex topic that is difficult to reason about. My read is that Eliezer was only making points about 1, not 2. It sounds like you are saying that he was making claims about 2. Something like, "don't be too hesitant to trust your reasoning about complex topics". But even if he was making this claim about 2, I still don't think that the Bank of Japan example matters much. It would still just be illustrative, not strong evidence. And as something that is merely illustrative, if it turned out to be wrong it wouldn't be reason to change one's mind about the original claim.
TAG (1mo):
Why is blogging being sold as something superior to boring old books and lectures? Blogs have the advantage of speed, right enough, but the BoJ thing has dragged on for decades. And, as you say, if you try to read blogs without having the boring groundwork in place, you might not even understand them.

It does seem plausible that the Bank of Japan thing was an error. However, I don't think that would undermine his thesis.

I agree that this error does not substantially undermine the entire book, much less prove its central thesis false. I still broadly agree with most of the main claims of the book, as I understand them.

Adam Zerner (1mo):
Which thesis were you referring to?

I disagree. Elsewhere in the chapter he says,

How likely is it that an entire country—one of the world’s most advanced countries—would forego trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?

and later he says,

Roughly, the civilizational inadequa

... (read more)

That's fair. FWIW, I don't follow monetary policy very closely, but I usually see people talking about unemployment, price levels, and the general labor force participation rate in these discussions, not the prime-age labor force participation rate. The Bank of Japan's website has a page called "Outline of monetary policy", and it states,

The Bank of Japan, as the central bank of Japan, decides and implements monetary policy with the aim of maintaining price stability.

Price stability is important because it provides the foundation for the nation's economic acti

... (read more)

The Bank of Japan never carried out the policies that Eliezer favored

I regard this claim as unproven. I think it's clear the Bank of Japan (BOJ) began a new monetary policy in 2013 to greatly increase the money supply, with the intended effect of spurring significant inflation. What's unclear to me is whether this policy matched the exact prescription that Eliezer would have suggested; it seems plausible that he would say the BOJ didn't go far enough. "They didn't go far enough" seems a bit different from "they never tested my theory" though.

Unnamed (1mo):
I was assuming that the lack of inflation meant that they didn't fully carry out what he had in mind. Maybe something that Eliezer, or Scott Sumner, has written would help clarify things. It looks like Japan did loosen their monetary policy some, which could give evidence on whether or not the theory was right. But I think that would require a more in-depth analysis than what's in this post. I don't read the graphs as showing 'clearly nothing changed after Abe & Kuroda', just that there wasn't the kind of huge improvement that hits you in the face when you look at a graph, which is what I would've expected from fixing a multi-trillion-dollar mistake. If we're looking for smaller effects, I'd want a more careful analysis rather than squinting at graphs. (And when I do squint at these graphs, I see some possible positive signs. 2013-19 real GDP growth seems better than I would've predicted if I had only seen the pre-Kuroda graph, and Kuroda's first ~year is one of the better years.)
artifex (1mo):
They did in fact not go far enough. Japanese GNI per capita growth from 2013 to 2021 was 1.02%. The prescription would be something like 4%.

Perhaps explain your story in more detail. Others might find it interesting.

Yes, monetary policy didn't become loose enough to create meaningful inflation. That doesn't by itself imply that monetary policy didn't become loose, because the theory of inflation here (monetarism) could be wrong. Nonetheless, I think your summary is only slightly misleading.

You could swap in an alternative phrasing that clarifies that I merely demonstrated that the rate of inflation was low, and then the summary would seem adequate to me.

I have one nitpick with your summary.

Now, at time 3, you are looking back at Japan's economy and saying that it didn't actually do especially well at that time, and also that its monetary policy never actually became all that loose.

I'm not actually sure whether Japan's monetary policy became substantially looser after 2013, nor did I claim that this did not occur. I didn't look into this question deeply, mostly because when I started looking into it I quickly realized that it might take a lot of work to analyze thoroughly, and it didn't seem like an essential thesis to prove either way.

Unnamed (1mo):
It didn't become loose enough to generate meaningful inflation, right? And I thought Sumner & Eliezer's views were that monetary policy needed to be loose enough to generate inflation in order to do much good for the economy. That's what I had in mind by not "all that loose"; I could swap in alternate phrasing if that content seems accurate.

I previously thought you were saying something very different with (2), since the text in the OP seems pretty different.

FWIW I don't think you're getting things wrong here. I also have simply changed some of my views in the meantime.

That said, I think the point I was trying to make with (2) was not that alignment would be hard per se, but that it would be hard to get an AI to do very high-skill tasks in general, which includes aligning the model, since otherwise it's not really "doing the task" (though, as I said, I don't currently stand by what I wrote in the OP as-is).

I think I understand my confusion, at least a bit better than before. Here's how I'd summarize what happened.

I had three arguments in this essay, which I thought of as roughly having the following form:

  1. Deployment lag: after TAI is fully developed, how long will it take to become widely impactful?
  2. Generality: how difficult is it to develop TAI fully, including making it robustly and reliably achieve what we want?
  3. Regulation: how much will people's reactions to and concerns about AI delay the arrival of fully developed TAI?

You said that (2) was already answere... (read more)

Rohin Shah (2mo):
My summary of your argument now would be:

  1. Deployment lag: it takes time to deploy stuff.
  2. Worries about AI misalignment: the world will believe that AI alignment is hard, and so avoid deploying it until doing a lot of work to be confident in alignment.
  3. Regulation: it takes time to comply with regulations.

If that's right, I broadly agree with all of these points :) (I previously thought you were saying something very different with (2), since the text in the OP seems pretty different.)

Sorry for replying to this comment 2 years late, but I wanted to discuss this part of your reasoning,

Fwiw, the problem I think is hard is "how to make models do stuff that is actually what we want, rather than only seeming like what we want, or only initially what we want until the model does something completely different like taking over the world".

I think that's what I meant when I said "I think it will be hard to figure out how to actually make models do stuff we want". But more importantly, I think that's how most people will in fact perceive what it ... (read more)

Rohin Shah (2mo):
I want to distinguish between two questions: 1. At some specified point in the future, will people believe that AI CEOs can perform the CEO task as well as human CEOs if deployed? 2. At some specified point in the future, will AI CEOs be able to perform the CEO task as well as human CEOs if deployed? (The key difference being that (1) is a statement about people's beliefs about reality, while (2) is a statement about reality directly.) (For all of this I'm assuming that an AI CEO that does the job of CEO well until the point that it executes a treacherous turn counts as "performing the CEO task well".) I'm very sympathetic to skepticism about question 1 on short timelines, and indeed as I mentioned I agree with your points (1) and (3) in the OP and they cause me to lengthen my timelines for TAI relative to bio anchors. My understanding was that you are also skeptical about question 2 on short timelines, and that was what you were arguing with your point (2) on overestimating generality. That's the part I disagree with. But your response is talking about things that other people will believe, rather than about reality; I already agree with you on that part.

I didn't downvote, but I think the comment would have benefitted from specific commentary about which parts were uncivil. There's a lot of stuff in the post, and most of it has pretty neutral language.

Note: I deleted and re-posted this comment since I felt it was missing key context and I was misinterpreting you previously.

What I specifically said is that in isolation, the graph we have been discussing better fits the SMTM hypothesis than your hypothesis. Bringing in a separate graph that you think better supports your hypothesis than SMTM's has zero bearing on the claim that I made, which is exclusively and entirely about the one graph we have been discussing. This new comment with this new graph reads to me as changing the subject, not making a rebutt

... (read more)
DirectedEvolution (2mo):
No worries! I am approaching this debate in a collaborative spirit. I may have been misunderstanding you as well. What I see when I examine the second graph you have shown, again in isolation, is that it does indeed look very much like the results of the "shifted normal" model you described earlier. Or rather, of that process happening twice, with a sort of temporary tapering off around 1940. Although if I'm understanding right, the earlier pre-60s part is pure extrapolation. This graph clearly fits your and Natalia's hypothesis and not SMTM's; we see nothing of particular significance around 1980. As you say, the next question becomes how to decide which to put more weight on. Do we like the statistical heft of Komlos and Brabec, or do we think they're just using fancy statistics to erase a crucial feature of the raw data? I don't know how to arbitrate that question. But I would be sympathetic to an interpreter who said they were convinced by the sophisticated statistical model and viewed the apparent "elbow" in the raw data as more likely an artifact than a real feature of the true trend, and I'm sure Komlos and Brabec know much better than me what they're about. My way of proceeding would be to say "looking at the raw data, it does look like there's a sharp change around 1980, but that might be an artifact. Looking at a sophisticated curve-fitting model of similar data, that feature vanishes. We might put 80% weight on the sophisticated modeling and 20% on the raw data, and note that the raw data itself isn't so incompatible with a 'shifted normal' interpretation, maybe 70/30." Overall, I'm inclined to put maybe 85% credence in the "shifted normal" interpretation in which there was no big event in obesity around 1980, and 15% credence that a real "elbow feature" is being obscured by the statistical smoothing.

ETA: I misinterpreted the above comment. I thought they were talking about the data, rather than the specific graph. See discussion below.

My visual inspection makes me think that, in isolation, the graph better fits the SMTM hypothesis than your hypothesis

And I'm quite confused by that, because of the chart below (and the other ones for different demographic groups). I am not saying that this single fact proves much in isolation. It doesn't disprove SMTM, for sure. But when I read your qualitative description of the shift that we're supposed to find in thi... (read more)

Natália Coelho Mendonça (2mo):
I think that, when you cite that chart, it's useful for readers if you point out that it's the output of a statistical model created using NCHS data collected between 1959 and 2006. 
DirectedEvolution (2mo):
To be super clear, I am exclusively considering the one graph that I started this comment chain with and am not making any other claims whatsoever about the rest of the data. What I specifically said is that in isolation, the graph we have been discussing better fits the SMTM hypothesis than your hypothesis. Bringing in a separate graph that you think better supports your hypothesis than SMTM's has zero bearing on the claim that I made, which is exclusively and entirely about the one graph we have been discussing. This new comment with this new graph reads to me as changing the subject, not making a rebuttal. This might seem unreasonable, but I think it's extremely important that we be able to see truly what a specific piece of evidence tells us in isolation. We should not let other pieces of evidence distort what we see. We should form our synthetic interpretation on the basis of truly seeing each individual piece for what it is, and then building up our interpretation from there. When I look individually at the one graph we've been discussing and don't consider the rest, I see an abrupt change between two different linear paths more than I see a smooth exponential increase.

I agree that whatever happened in ~1980 could have been a minor part of a longer-term trend, but if it's not, if there was some contamination that put us on a very different trajectory into raging obesity

I agree that it's still plausible that something happened around 1980 that was different from the previous trend. There could be, and probably are, multiple causes of the trend of increasing weight over time. And as one trend loses steam, another could have taken over. To the extent that that's what you're saying, I agree.

But I'm still not sure I agree wit... (read more)

DirectedEvolution (2mo):
I think our fundamental problem with the graph in question is that we have just 5 timepoints on which to base our perception of the shape of the trend during this era, and our data cuts off at a crucial moment. Again, what you call a "slight" acceleration appears to me as a tripling of the obesity increase rate, which I do not consider slight. Where I think our semantics are a little off is that we're most centrally debating the existence of a "contamination epidemic" that happens to result in obesity. We can both agree there was some obesity growing at a slower rate prior to 1980, and much more obesity growing at a faster rate after 1980. The question is whether this graph is evidence for a contamination epidemic, against it, or neutral. My visual inspection makes me think that, in isolation, the graph better fits the SMTM hypothesis than your hypothesis, updating my priors somewhat toward the existence of a "contamination epidemic," but that's just one piece of data in the context of a much larger argument. I would be entirely open to revising my opinion contingent on more fine-grained data or on a better method to distinguish how well the data we do have fits the "shifted normal" vs. "two linear regimes" models.

Conditional on accepting that there are two distinct linear regimes, the first linear regime from at least 1960 to 1976-80 is growing about 3x more slowly than the one from 1976-80 and on.

But we have data going back to the late 19th century, and it demonstrates that weight was increasing smoothly, and at a moderately fast rate, before 1960. That is a crucial piece of evidence. It shows that whatever happened in about 1980 could have simply been a minor part of a longer-term trend. I don't see why we would call that the "start" of the obesity epidemic.

DirectedEvolution (2mo):
An obesity epidemic is an epidemic of obesity, not of weight gain, and conventionally refers both to an increase in the rate of obesity increase and to the sheer amount of obesity in the population. It wouldn't surprise me if the trajectory looked something like this:

  • Late 19th century to earlyish 20th century: increasing gains in agricultural productivity and infrastructure, among other things, ensure an adequate food supply, increasing the population's weight to a steady state where everybody has enough to eat. This leads to a big increase in % obesity, but from a very low base rate. We don't call this an "obesity epidemic" because most of what is happening is malnourished people getting enough to eat, even though the abundant food is causing a 3x to 8x change in obesity, from like 1.5-3% to ~10-13%.
  • Mid 20th century: a steady state in which everybody has enough to eat; a constant increase in obesity, but not enough to ring alarm bells.
  • 1974-1980: some event (contamination, whatever) disrupts the equilibrium, leading to a new regime in which % obesity rises 3x faster than it was during the mid-20th-century equilibrium era.

Here's how I see your and Natalia's hypothesis vs. SMTM's hypothesis. Both seem mechanistically plausible and roughly in accordance with the data, depending on which graphs you pick. So it's not that I see an overwhelming case for one or the other, just that SMTM's hypothesis implies there's "one weird trick" to not have rampant obesity and that we ought to figure out what it is, which is a "big if true" idea that motivates investigation. I agree that whatever happened in ~1980 could have been a minor part of a longer-term trend, but if it's not, if there was some contamination that put us on a very different trajectory into raging obesity rather than modest constant increases over time, then that is a pretty important thing to check out, and we'd definitely call it the start of the obesity epidemic.

I admit that the data is a bit fuzzy and hard to interpret. But ultimately, we've basically reached the point at which it's hard to tell whether the data supports an abrupt shift, which to me indicates that, even if we find such a shift, it's not going to be that large. The data could very well support a minor acceleration around 1980 (indeed I think this is fairly likely, from looking at the other data).

On the one hand, that means there are some highly interesting questions to explore about what happened around 1980! But on the other hand, I think the dat... (read more)

DirectedEvolution (2mo):
I don't think this is the right way to look at it. Conditional on accepting that there are two distinct linear regimes, the first linear regime from at least 1960 to 1976-80 is growing about 3x more slowly than the one from 1976-80 and on. I think that a tripling of the rate at which the obese population increases is a perfectly fine definition of "the start of the obesity epidemic." I want to be quite clear that I'm not defending SMTM or saying that they are correct - I am just trying to make sure that we are representing accurately the underlying data and the specific claims being made. I see myself as a self-appointed referee here, not a player in the game.

Accepting that obesity rates went up anywhere from 4x to 9x from 1900-1960 (i.e. from 1.5%-3% to 13.4%), I still think we have to explain the "elbow" in the obesity data starting in 1976-80. It really does look "steady around 10%" in the 1960-1976 era, with an abrupt change in 1976. If we'd continued to increase our obesity rates at the rate of 1960-74, we'd have less than 20% obesity today rather than the 43% obesity rate we actually experience. I think that is the phenomenon SMTM is talking about, and I think it's worth emphasizing.

I think the r... (read more)

Let's get a clearer illustration of your point. Here's a graph of the fraction of a normally distributed population above an arbitrary threshold as the population mean varies, ending when the population mean equals the threshold. In obesity terms, we start with a normally distributed population of BMI, and increase the average BMI linearly over time, ending when the average person is obese, and determine at each timepoint what fraction of the population is obese (so 50% obesity at the end).

The graph above has 4 timepoints roughly evenly spaced by the decad... (read more)
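
Here is a minimal sketch of the computation behind that graph; the BMI standard deviation, the range of means, and the number of steps are illustrative assumptions rather than fitted values:

```python
# A minimal sketch of the "shifted normal" computation described above, with
# illustrative parameters (the BMI standard deviation, the range of means, and
# the number of timepoints are assumptions, not fitted values).
import numpy as np
from scipy.stats import norm

threshold = 30.0                      # BMI cutoff for obesity
sd = 4.5                              # assumed population SD of BMI
means = np.linspace(24.0, 30.0, 13)   # mean BMI rising linearly over "time"

for mean in means:
    frac_obese = norm.sf(threshold, loc=mean, scale=sd)  # P(BMI > threshold)
    print(f"mean BMI {mean:4.1f} -> {frac_obese:5.1%} obese")

# The fraction obese ends at 50% when the mean reaches the threshold, and the
# curve is convex while the mean is below the threshold, so a linear rise in
# mean BMI produces an accelerating rise in the obesity rate.
```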

Some people seem to be hoping that nobody will ever make a misaligned human-level AGI thanks to some combination of regulation, monitoring, and enlightened self-interest. That story looks more plausible if we’re talking about an algorithm that can only run on a giant compute cluster containing thousands of high-end GPUs, and less plausible if we’re talking about an algorithm that can run on one 2023 gaming PC.

Isn't the relevant fact whether we could train an AGI with modest computational resources, not whether we could run one? If training runs are curtail... (read more)

Steven Byrnes (2mo):
Hmm, maybe. I talk about training compute in Section 4 of this post (upshot: I’m confused…). See also Section 3.1 of this other post [https://www.lesswrong.com/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control#3_1_A_solution_to_widespread_deployability_of_under_control_AGI__for_red_teaming___defense___resilience__without_accident_risk]. If training is super-expensive, then run-compute would nevertheless be important if (1) we assume that the code / weights / whatever will get leaked in short order, (2) the motivations are changeable from "safe" to "unsafe" via fine-tuning or decompiling or online-learning or whatever. (I happen to strongly expect powerful AGI to necessarily use online learning, including online updating the RL value function which is related to motivations / goals. Hope I’m wrong! Not many people seem to agree with me on that.)

That makes sense. However, Davinci-003 came out just a few days prior to ChatGPT. The relevant transition was from Davinci-002 to Davinci-003/ChatGPT.

cfoster0 (2mo):
To throw in another perspective, I've been working with the OpenAI API models most days of the week for the past year or so. For my uses, the step-change in quality came from moving from base davinci to text-davinci-002, whereas the improvements moving from that to text-davinci-003 were decidedly less clear.

[edit: this says the same thing as Quintin's sibling comment]

Important context for those who don't know it: the main difference between text-davinci-002 and text-davinci-003 is that the latter was trained with PPO against a reward model, i.e. RLHF as laid out in the InstructGPT paper. (Source: OpenAI model index.)

In more detail, text-davinci-002 seems to have been trained via supervised fine-tuning on the model outputs which were rated highest by human reviewers (this is what the model index calls FeedME). The model index only says that text-davinci-003 wa... (read more)

Yep, and text-davinci-002 was trained with supervised finetuning / written demos, while 003 was trained with RLHF via PPO. Hypothetically, the clearest illustration of RLHF's capabilities gains should be from comparing 002 to 003. However, OpenAI could have also used other methods to improve 003, such as with Transcending Scaling Laws with 0.1% Extra Compute.

This page also says that:


Our models generally used the best available datasets at the time of training, and so different engines using the same training methodology might be trained on different data.

S... (read more)

I don't think this is right -- the main hype effect of chatGPT over previous models feels like it's just because it was in a convenient chat interface that was easy to use and free.

I don't have extensive relevant expertise, but as a personal datapoint: I used Davinci-002 multiple times to generate an interesting dialogue in order to test its capabilities. I ran several small-scale Turing tests, and the results were quite unimpressive in my opinion. When ChatGPT came out, I tried it out (on the day of its release) and very quickly felt that it was qualitati... (read more)

Quintin Pope (2mo):
I've felt that ChatGPT was roughly on par with text-davinci-003, though much more annoying and with a worse interface.

I think that even the pseudo-concrete "block progress in Y," for Y being compute, or data, or whatever, fails horribly at the concreteness criterion needed for actual decision making. [...] What the post does do is push for social condemnation for "collaboration with the enemy" without concrete criteria for when it is good or bad

There are quite specific things I would not endorse that I think follow from the post relatively smoothly. Funding the lobbying group mentioned in the introduction is one example.

I do agree though that I was a bit vague in my su... (read more)

Davidmanheim (3mo):
Yeah, definitely agree that donating to the Concept Art Association isn't effective - and their tweet tagline "This is the most solid plan we have yet" is standard crappy decision making on its own.

But for the record, the workers do deserve to be paid for the value of the work that was taken.

I have complicated feelings about this issue. I agree that, in theory, we should compensate people harmed by beneficial economic restructuring, such as innovation or free trade. Doing so would ensure that these transformations leave no one strictly worse off, turning a mere Kaldor-Hicks improvement into a Pareto improvement.

On the other hand, I currently see no satisfying way of structuring our laws and norms to allow for such compensation fairly, or in a way th... (read more)

the gears to ascension (3mo):
[comment deleted]

These numbers were based on the TAI timelines model I built, which produced a highly skewed distribution. I also added several years to the timeline due to anticipated delays and unrelated catastrophes, and some chance that the model is totally wrong. My inside view prediction given no delays is more like a median of 2037 with a mode of 2029.

I agree it appears the mode is much too near, but I encourage you to build a model yourself. I think you might be surprised at how much sooner the mode can be compared to the median.
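
As a minimal sketch of that point, with made-up parameters rather than my actual model, consider a lognormal distribution over years until TAI:

```python
# A minimal sketch (made-up parameters, not my actual timelines model) showing
# how a right-skewed distribution over "years until TAI" can have a mode far
# earlier than its median.
import numpy as np

mu, sigma = np.log(15), 0.8            # hypothetical lognormal parameters

median_years = np.exp(mu)              # lognormal median = e^mu
mode_years = np.exp(mu - sigma**2)     # lognormal mode = e^(mu - sigma^2)

print(f"median = {median_years:.1f} years, mode = {mode_years:.1f} years")
# With these made-up numbers the mode lands roughly half as far out as the
# median, which is the flavor of how a median of ~2037 can sit alongside a
# mode of ~2029.
```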

I played with davinci, text-davinci-002, and text-davinci-003, if I recall correctly. The last model had only been out for a few days at most, however, before ChatGPT was released.

Of course, I didn't play with any of these models in enough detail to become an expert prompt engineer. I mean, otherwise I would have made the update sooner.

Agreed. Taxing or imposing limits on GPU production and usage is also the main route through which I imagine we might regulate AI.

CarlShulman (9d):
What level of taxation do you think would delay timelines by even one year?