Matthew Barnett

Someone who is interested in learning and doing good.

My Substack: https://matthewbarnett.substack.com/


Comments

That makes sense. However, Davinci-003 came out just a few days prior to ChatGPT. The relevant transition was from Davinci-002 to Davinci-003/ChatGPT.

I don't think this is right -- the main hype effect of ChatGPT over previous models feels like it's just because it was in a convenient chat interface that was easy to use and free.

I don't have extensive relevant expertise, but as a personal datapoint: I used Davinci-002 multiple times to generate dialogue in order to test its capabilities. I ran several small-scale Turing tests, and the results were quite unimpressive in my opinion. When ChatGPT came out, I tried it on the day of its release and very quickly felt that it was qualitatively better at dialogue. Of course, I could simply have been prompting Davinci-002 poorly, but overall I'm quite skeptical that the main reason for the ChatGPT hype was that it had a more convenient chat interface than GPT-3.

I think that even the pseudo-concrete "block progress in Y," for Y being compute, or data, or whatever, fails horribly at the concreteness criterion needed for actual decision making. [...] What the post does do is push for social condemnation for "collaboration with the enemy" without concrete criteria for when it is good or bad

There are quite specific things I would not endorse that I think follow from the post relatively smoothly. Funding the lobbying group mentioned in the introduction is one example.

I do agree though that I was a bit vague in my suggestions. Mostly, I'm asking people to be careful, and not rush to try something hasty because it seems "better than nothing". I'm certainly not asking people to refuse to collaborate or associate with anyone who I might consider a "neo-luddite".

I edited the title (back) to "Slightly against aligning with neo-luddites" to better reflect my mixed feelings on this matter.

But for the record, the workers do deserve to be paid for the value of the work that was taken.

I have complicated feelings about this issue. I agree that, in theory, we should compensate people harmed by beneficial economic restructuring, such as innovation or free trade. Doing so would ensure that these transformations leave no one strictly worse off, turning a mere Kaldor-Hicks improvement into a Pareto improvement.
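To make the distinction concrete, here is a toy numerical sketch (all figures invented) of how a compensating transfer converts a Kaldor-Hicks improvement into a Pareto improvement:

```python
# Toy figures, invented for illustration: an innovation raises total
# surplus, but gains and losses fall on different groups.
gains = {"consumers": 100.0, "innovating_firms": 50.0}
losses = {"displaced_workers": 30.0}

total_gain = sum(gains.values())
total_loss = sum(losses.values())

# Kaldor-Hicks: winners gain more than losers lose, so the winners
# *could* fully compensate the losers and still come out ahead.
assert total_gain > total_loss

# A compensating transfer, funded pro rata by the winners.
for group, loss in losses.items():
    print(f"{group}: compensated {loss:.0f}, left no worse off")
for group, gain in gains.items():
    net = gain - (gain / total_gain) * total_loss
    print(f"{group}: keeps a net gain of {net:.0f} after funding compensation")
# Every group ends weakly better off: a Pareto improvement.
```

The hard part, as noted below, is not this arithmetic but designing institutions that measure the losses honestly and resist abuse.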

On the other hand, I currently see no satisfying way of structuring our laws and norms to allow for such compensation fairly, or in a way that cannot be abused. As is often the case with these things, although there is a hypothetical way of making the world a better place, the difficulty lies precisely in designing a plan that makes it a reality. Do you have any concrete suggestions?

These numbers were based on the TAI timelines model I built, which produced a highly skewed distribution. I also added several years to the timeline due to anticipated delays and unrelated catastrophes, and some chance that the model is totally wrong. My inside view prediction given no delays is more like a median of 2037 with a mode of 2029.

I agree it appears the mode is much too near, but I encourage you to build a model yourself. I think you might be surprised at how much sooner the mode can be compared to the median.
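As a minimal illustration (not the actual model; the lognormal form and its parameters are my own choices, picked to roughly match the 2037 median and 2029 mode quoted above), a right-skewed arrival-time distribution shows how far apart mode and median can sit:

```python
import numpy as np

# Hypothetical right-skewed distribution over years-until-TAI.
# mu and sigma are illustrative assumptions, not model outputs.
mu, sigma = np.log(15), 0.87
rng = np.random.default_rng(0)
years = rng.lognormal(mu, sigma, size=1_000_000)

sample_median = np.median(years)       # ~exp(mu) ~= 15 years out
analytic_mode = np.exp(mu - sigma**2)  # ~= 7 years out
print(f"median: {sample_median:.1f} years, mode: {analytic_mode:.1f} years")
# Right skew alone puts the mode roughly 8 years earlier than the
# median, mirroring the 2029-mode / 2037-median gap described above.
```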

I played with davinci, text-davinci-002, and text-davinci-003, if I recall correctly. The last model had only been out for a few days at most, however, before ChatGPT was released.

Of course, I didn't play with any of these models in enough detail to become an expert prompt engineer. I mean, otherwise I would have made the update sooner.

Agreed. Taxing or imposing limits on GPU production and usage is also the main route through which I imagine we might regulate AI.

What is the role of ChatGPT? Do you see it as progress over GPT-3, or is it just a tool for discovering capabilities that were already available in GPT-3 to good prompt engineers? [...] Is the importance that Chat is revealing those abilities and narrowing the ignorance?

Yes, it revealed to me that GPT-3 was stronger than I had thought. I played with GPT-3 prior to ChatGPT, but it seems I was never very good at finding a good prompt. For example, I had tried to make it produce dialogue, in a manner similar to ChatGPT's, but its replies were often surprisingly incoherent. On top of that, it would often produce boilerplate replies in the dialogue that were quite superficial, almost like the much worse BlenderBot from Meta.

After playing with ChatGPT, however, and after seeing many impressive results on Twitter, I realized that the model's fundamental capabilities were solidly on the right end of the distribution of what I had previously believed. I truly underestimated the power of getting the right prompt, or of fine-tuning the model. It was a stronger update than almost anything else I have seen from any language model.
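For a sense of what "the right prompt" involved, here is a hedged sketch of the kind of chat scaffold people used to coax dialogue out of a base completion model via the OpenAI completions API of that era. The prompt text and parameters are invented; the pre-1.0 openai library reads the API key from the OPENAI_API_KEY environment variable.

```python
import openai  # legacy pre-1.0 interface, as used in late 2022

# A hypothetical dialogue scaffold. The persona framing and the stop
# sequence are exactly the kind of prompt engineering that ChatGPT's
# interface made unnecessary.
prompt = (
    "The following is a dialogue with a helpful, articulate AI assistant.\n"
    "User: What causes the seasons on Earth?\n"
    "Assistant:"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
    stop=["User:"],  # stop before the model writes the user's next turn
)
print(response["choices"][0]["text"].strip())
```

Without the scaffold and stop sequence, a base model tends to continue the text in arbitrary directions rather than hold up its end of a conversation.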

I think it's worth forecasting AI risk timelines instead of GDP timelines, because the former is what we really care about while the latter raises a bunch of economics concerns that don't necessarily change the odds of x-risk.

I agree that's probably the more important variable to forecast. On the other hand, if your model of AI is more continuous, you might expect a slow-rolling catastrophe, like a slow takeover of humanity's institutions, making it harder to determine the exact "date" that we lost control. Predicting GDP growth is the easy way out of this problem, though I admit it's not ideal.

On a separate note, you might be interested in Erik Brynjolfsson's work on the economic impact of AI and other technologies. For example, this paper argues that general purpose technologies have an implementation lag, where many people can see the transformative potential of the technology decades before the economic impact is realized.

In fact, I cited this strand of research in my original post on long timelines. It was one of the main reasons why I had long timelines, and can help explain why it seems I still have somewhat long timelines (a median of 2047) despite having made, in my opinion, a strong update.

I predict by this date 2023 your median will be at least 5 years sooner.

That's possible. I'm already trying to "price in" what I expect from GPT-4, which I expect to be very impressive, into my timeline.

It's perhaps worth re-emphasizing that my median timeline is so far in the future primarily because I'm factoring in delays, and because I set a very high bar: >30% GWP growth has never happened in human history. I think we've seen up to 14% growth in some very fast-growing nations a few times, but that's been localized, and never at the technological frontier.

By these standards, the internet and tech revolution of the 1990s barely mattered. I could definitely see something as large as the rise of the internet happening in the next 10 years. But to meet my high bar, we'll likely need to see something radically changing the way we live our lives (or something that makes us go extinct).
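For a sense of scale, here is the arithmetic behind that bar, using the growth rates mentioned above (the ~3% frontier figure is my own rough benchmark for comparison):

```python
import math

# Doubling time under sustained annual growth g: ln(2) / ln(1 + g).
rates = {
    "~3% (rough frontier-economy norm)": 0.03,
    "14% (fastest sustained catch-up growth)": 0.14,
    ">30% (the bar used here)": 0.30,
}
for label, g in rates.items():
    t_double = math.log(2) / math.log(1 + g)
    print(f"{label}: economy doubles every {t_double:.1f} years")
# ~3%  -> ~23.4 years
# 14%  -> ~5.3 years
# 30%  -> ~2.6 years
```

At 30% growth, the world economy would double roughly every two and a half years, faster than any single economy ever has, which is why I treat the bar as marking a genuine transformation rather than another internet-sized boom.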
