It seems totally reasonable to say that AI is rapidly getting many very large advantages with respect to humans, so if it gets to ‘roughly human’ in the core intelligence module, whatever you want to call that, then suddenly things get out of hand fast, potentially the ‘fast takeoff’ level of fast even if you see more signs and portents first.


In retrospect, there had been omens and portents.

But if you use them as reasons to ignore the speed of things happening, they won't help you.

Lots of thoughts here. One is that over the course of our lives we encounter so many stories that they need to have variety, and Tolstoy's point makes pure heroes less appealing: "Happy families are all alike; every unhappy family is unhappy in its own way." Heroes and conflicts in children's stories are simple, and get more complex in stories for teens and adults. This is not just about maturity and exposure to the complexities of life and wanting to grapple with real dilemmas, it's also about not reading the hundredth identical plot.

Milton's Lucifer was also my first thought reading this, but I'm not sure I agree with your take. I think the point, for me, is that he makes us question whether he actually is the villain at all. The persuasion element is, I think, an artifact of the story being told in a cultural context where there's an overwhelming presumption that he is the villain. The ancient Greeks had a different context, and had no problem writing complex and flawed and often doomed heroes who fought against fate and gods without their writers/composers ever thinking they needed to classify them as villains or anti-heroes, just larger-than-life people.

Perhaps it's just my American upbringing, but I think I want to live in a world where agents can get what they want, even with the world set against them, if only they are clever and persistent enough.

I'm American too, and I don't want that. At least not in general. I do share the stereotypical American distrust of rigid traditions and institutions to a substantial degree. I want agents-in-general to get much of what they want, but when the world is set against them? Depends case-by-case on why the world is set against them, and on why the agents have the goals they have. Voldemort was a persistent and clever (by Harry Potter standards) agent as much as Gandhi was. I can understand how each arrived at their goals and methods, and why their respective worlds were set against them, but that doesn't mean I want them both to get what they want. Interpolate between extremes however you like.

I know opinions about these kinds of questions differ widely, and I think you shouldn't take too much advice from people who don't know anything about you. Regardless, I think the answers depend a lot on what set of options is or seems available to you.

For the first set of questions, do any of the options you'd consider seem likely to change the answer of "how many years?" If not, I would probably not use that as a deciding factor. You're building a life for yourself, it's unlikely the things you value only have value in the future and not the present, and there's enough probability that the answer is "at least decades" to make accounting for long timelines in your plans worthwhile.

For the second, this is harder and depends a lot on where you are, where you can easily go, where you have personal or family ties, how much money and other resources you have available, and how exposed you currently are to different kinds of economic and geopolitical changes.

As for personal anecdotes: none of the career options I considered had to do with AI, so I've treated the first set of questions as irrelevant to my own career path. I do understand that AI going well is extremely high-importance and high-variance, but I'm still focusing on the much lower-variance problem of climate change (and relatedly, energy, food, water, transportation, etc.). Sure, it won't make a difference if humanity goes extinct in 2035, but neither would any other path I took. I've also had the luxury of mostly being able to ignore the second set of questions, but FWIW I work fully remote and travel full time, which has the side effects of preserving optionality and of teaching me how to be transplantable and not get tied to hard-to-transport stuff.

The actual WSJ article centers on companies not sure they want to pay $30/month per user for Microsoft Copilot.

I understand that this is a thing, but I find it hard to imagine there are that many people making significant use of Windows and Microsoft Office at work who wouldn't be able to save an hour or two a month using Copilot or its near-term successors. For me the break-even point would be saving somewhere between 5 and 30 minutes a month, depending on how I calculate the cost and value of my work time.
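
For concreteness, here's that break-even arithmetic as a quick sketch. The $30/month figure is the one from the article; the hourly values of work time are purely illustrative assumptions:

```python
# Break-even for a $30/month Copilot seat: minutes of saved work time
# per month needed to cover the subscription, at several assumed
# fully-loaded values of an hour of work.
COST_PER_MONTH = 30.0  # USD per user, the price discussed in the article

for hourly_value in (60, 120, 360):  # USD/hour; illustrative assumptions
    breakeven_minutes = COST_PER_MONTH / hourly_value * 60
    print(f"${hourly_value}/hour -> break-even at {breakeven_minutes:.0f} min/month")

# $60/hour  -> 30 min/month
# $120/hour -> 15 min/month
# $360/hour ->  5 min/month
```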

On the protest acceptability: whenever I read about these polls I have no idea how much I'm supposed to think about the actual question. Personally I find it very easy to imagine fliers someone could hand out, and audiences they could hand them to, that I would find unacceptable but that both causes I support and those I oppose might decide are great ideas. "Always" to me means "probability ~1" but maybe that is too high a threshold for the intended question.

On binging: shows have gotten more complex since the advent of DVRs and then streaming platforms. Yes, some spacing between episodes is better than binging, but a lot of what's actually good is going to demand a lot of memory from me until I reach a natural breakpoint where there aren't so many loose ends and intertwined plot points. Sometimes a week is fine. Other times even a day might leave me having to go rewatch bits of previous episodes.

Yes, agreed, and just to be clear I'm not talking about delays in granting a patent, I'm talking about delays in how long it takes to bring a technology to market and generate a return once a patent has been filed and/or granted.

Also, I'm not actually sure I'm 100% behind extending patent terms. I probably am. I do think they should be longer than copyright terms, though.

I think there could be a lot of value in having a sequence of posts on, basically, "What is this 'science' thing anyway?" Right now all the core ideas (including various prerequisites and corollaries) exist on this site or in the Sequences, but not as a single, clear, cohesive whole that isn't extremely long.

However, I think trying to frame it this way, in one post, doesn't work. It's unclear who the target audience is, how they should approach it, and what they should hope to get out of it. Even knowing and already understanding these points, I read it wondering, "Why are these here, together, in this order? What is implied by the point numbering? Who, not already knowing these, would be willing to read this and able to understand it?"

It looks like the author created this account only a day before posting this. I don't know whether they've been lurking or using another account for a long time before that. In any case, my suggestion would be to look at how the Sequences are structured, find the bits that tie into what you're writing here, and then refactor this post into a series. Try to make it present ideas in a cohesive order, in digestible chunks, with links to past posts by others that expand on important points in more detail or in other styles.

I agree with pretty much all of this. If anything I think it somewhat understates the case. We're going to need a lot more clean power than current electricity demand suggests if and when we make a real effort to reduce fossil fuel consumption in chemical production and transportation, and the latter will necessitate building a whole lot of some form of energy storage whether or not batteries get much cheaper.

Is the slow forward march of copyright terms the optimal response to the massive changes in information technology we’ve seen over the past several centuries?


Of course not! Even without the detailed analysis and assorted variables needed to figure out anything like an optimal term, economics tells us that terms this long can't do much to increase the output of ideas, technology, patents, or creative works. Market size for copies of a given work changes over time (speed and direction vary), but to a first approximation assume you can get away with holding it steady. Apply a 7% discount rate to the value of future returns, and by year 20 you've gotten roughly 75% of all the value you'll ever extract. By year 50 you're over 95%.
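
For the curious, here's that arithmetic as a quick sketch, using the same steady-market assumption and 7% rate; the 95-year horizon is the current US copyright term for corporate works:

```python
# Fraction of an infinite stream of constant annual returns captured in
# the first T years at discount rate r. Closed form:
#   sum_{t=1..T} (1+r)^-t  /  sum_{t=1..inf} (1+r)^-t  =  1 - (1+r)^-T
def captured_fraction(years: int, rate: float = 0.07) -> float:
    return 1 - (1 + rate) ** -years

for t in (20, 50, 95):  # 95 = current US term for corporate works
    print(f"Year {t}: {captured_fraction(t):.1%} of total discounted value")

# Year 20: 74.2%
# Year 50: 96.6%
# Year 95: 99.8%
```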

Even without copyright, Disney still owns trademark rights covering many uses of Mickey, so much of that control isn't really about copyright at all. Honestly, outside of an occasional TV special, I can't remember the last time there was an actual new creative work about Mickey. Who could claim that copyright was still incentivizing anything at all? What court would believe it?

Patents are trickier, because their terms start at the date of filing, and in some industries it can take most of the patent term just to bring an invention to market, leaving only a few years to benefit from the protection. Something as simple as a lawsuit from a competitor, or any form of opposition to building a plant, or a hiccup in raising funds, could create enough delay to wipe out a significant chunk of a patent's value, in a way that wasn't really true a century ago. It makes little sense to me to have the same patent durations across software, automotive, energy, aviation, and pharmaceutical inventions/discoveries.
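
To put rough numbers on how much a delay eats, here's a sketch reusing the steady-returns assumption and 7% discount rate from above; the delay values are illustrative:

```python
# Discounted value of a 20-year patent term when returns only begin
# D years after filing, relative to earning from year one.
# Steady annual returns, 7% discount rate (same assumptions as above).
def term_value(delay: int, term: int = 20, rate: float = 0.07) -> float:
    return sum((1 + rate) ** -t for t in range(delay + 1, term + 1))

full = term_value(0)
for delay in (0, 5, 10, 15):
    print(f"{delay}-year delay: {term_value(delay) / full:.0%} of undelayed value")

# 0-year delay: 100%; 5 years: ~61%; 10 years: ~34%; 15 years: ~14%
```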

Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can't predict the outcomes of actions even in principle.

In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept. It's similar to how a starving man cares more about getting a loaf of bread than he does about getting a lesson on the biochemistry of fermentation. Whether humans or AIs or aliens decide the direction of the future, they all do so from within the same universal laws and mechanisms. Free will isn't a point of difference among options, and it isn't a lever anyone can pull that affects what needs to be done.

I am also happy to concede that, yes, creating an unfriendly AI that kills all humans is a form of steering the future. Right off a cliff, one time. That's very different from steering in a direction I want to steer (or be steered) in. It's also very different from retaining the ability to continue to steer and course-correct.
