Yes, you can still sort of do utility maximisation approximately with heuristics... and you can only do sort-of utility sort-of-maximisation approximately with heuristics.

The point isn't to make a string of words come out as true by diluting the meanings of the terms...the point is that the claim needs to be true in the relevant sense. If this half-baked sort-of utility sort-of-maximisation isn't the scary kind of fanatical utility maximisation, nothing has been achieved.

Note that ideal utility maximisation is computationally intractable.

I’m not sure what this means precisely.

E.g. https://royalsocietypublishing.org/doi/10.1098/rstb.2018.0138
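Roughly (my gloss, not the linked paper's own formalism): an ideal expected-utility maximiser has to solve

$$a_{1:T}^{*} \;=\; \arg\max_{a_{1:T}} \; \mathbb{E}\!\left[\, U(s_T) \mid a_{1:T} \,\right],$$

and with $|A|$ available actions per step there are $|A|^{T}$ candidate plans to compare, which blows up exponentially in the horizon $T$ before you even account for uncertainty about states. Exact maximisation is only feasible for toy problems; everything realistic runs on approximation and heuristics.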

I don’t think anybody is hung up on “the AI can one-shot predict a successful plan that doesn’t require any experimentation or course correction” as a pre-requisite for doom, or even comprise a substantial chunk of their doom %.

I would say that anyone stating...

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

(EY, of course)

...is assuming exactly that. Particularly given the "shortly".

By "humans are maximizers of something", I just meant that some humans (including myself) want to fill galaxies with stuff (e.g. happy sentient life), and there’s not any number of galaxies already filled at which I expect that to stop being true.

"humans are maximizers of something" would imply that most or all humans are maximisers. Lots of people don't think the way you do.

robust

That word sets off my BS detectors. It just seems to mean "good, not otherwise specified". It's suspicious that politicians use it all the time.

I think they mostly have the problems that have been criticised in the OP -- sneaking in assumptions, and so on.

Also, sufficiently smart and capable humans probably are maximizers of something, it’s just that the something is complicated.

That's just not a fact. Note that you can't say what it is humans are maximising. Note that ideal utility maximisation is computationally intractable. Note that the neurological evidence is ambiguous at best. https://www.lesswrong.com/posts/fa5o2tg9EfJE77jEQ/the-human-s-hidden-utility-function-maybe

Capabilities are instrumentally convergent, values and goals are not.

So how dangerous is capability convergence without fixed values and goals? If an AI's values and goals are corrigible by us, then we just have a very capable servant, for instance.

And that’s also my answer to Moore’s Open Question. Why is this big function I’m talking about, right? Because when I say “that big function”, and you say “right”, we are dereferencing two different pointers to the same unverbalizable abstract computation.

If the big function is defined as the function that tells you what is really right, then it's telling you what is really right... tautologously.

But the same would be true of the omniscient god, Form of the Good etc.

Talking in terms of algorithms or functions isn't doing much lifting. The big function is no more physically real than the omniscient god, Form of the Good etc.
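The pointer metaphor can be made literal, which also shows how little it buys: if you just stipulate that two names refer to the same (unwritten) function, the identity between them is true by construction. A minimal sketch, with big_function as a hypothetical placeholder rather than anything anyone can actually compute:

```python
def big_function(world_state):
    """Stipulated stand-in for the 'unverbalizable abstract computation'.

    Nobody can write this body; it exists only so two names can point at
    the same object.
    """
    raise NotImplementedError("the content of 'right' is never specified")


# "that big function" and "right" as two pointers to one object:
that_big_function = big_function
right = big_function

# The identity holds, but only because both names were stipulated to refer
# to the same thing.
print(that_big_function is right)  # True
```

The check prints True, but nothing about what "right" actually is has been computed or constrained anywhere.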

What moral view is leading you to think there’s any problem at all with enjoying a self-congratulatory narrative?

It doesn't have to be a moral objection. It's circular argumentation, so it already goes against epistemic norms.

I think another common source of disagreement is that people sometimes conflate a mind or system’s ability to comprehend and understand some particular cosmopolitan, human-aligned values and goals, with the system itself actually sharing those values, or caring about them at all.

I've noticed that. In the older material there's something like an assumption of intrinsic motivation.
