elityre

Comments

How good is humanity at coordination?
if for some reason post-apocalyptic worlds rarely get simulated

To draw out the argument a little further: the reason that post-apocalyptic worlds don't get simulated is that most (?) of the simulations of our era are a way to simulate superintelligences in other parts of the multiverse, to talk or trade with them.

(As in the basic argument of this Jaan Tallinn talk.)

If advanced civilization is wiped out by nuclear war, that simulation might be terminated, if it seems sufficiently unlikely to lead to a singularity.

How good is humanity at coordination?

I feel like this is a very important point that I have never heard made before.

AllAmericanBreakfast's Shortform
Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.

Isn't programming even more like this?

I could get squidgy about whether a proof is "compelling", but when I write a program, it either runs and does what I expect, or it doesn't, with 0 wiggle room.

Eli's shortform feed

I’ve decided that I want to make more of a point of writing down my macro-strategic thoughts, both because writing things down often produces new insights and refinements, and so that other folks can engage with them.

This is one frame or lens that I tend to think with a lot. This might be more of a lens or a model-let than a full break-down.

There are two broad classes of problems that we need to solve: we have some pre-paradigmatic science to figure out, and we have the problem of civilizational sanity.

Preparadigmatic science

There are a number of hard scientific or scientific-philosophical problems that we’re facing down as a species.

Most notably the problem of AI alignment, but also: finding technical solutions to various risks posed by bio-technology; possibly getting our bearings on what civilizational collapse means and how it is likely to come about; possibly getting a handle on the risk of a simulation shut-down; and possibly making sense of the large-scale cultural, political, and cognitive shifts that are likely to follow from new technologies that disrupt existing social systems (like VR?).

Basically, for every x-risk, and every big shift to human civilization, there is work to be done even making sense of the situation, and framing the problem.

As this work progresses it eventually transitions into incremental science / engineering, as the problems are clarified and specified, and the good methodologies for attacking those problems solidify.

(Work on bio-risk might already be in this phase. And I think that work towards human genetic enhancement is basically incremental science.)

To my rough intuitions, it seems like these problems, in order of pressingness, are:

  1. AI alignment
  2. Bio-risk
  3. Human genetic enhancement
  4. Social, political, civilizational collapse

…where that ranking is mostly determined by which one will have a very large impact on the world first.

So there’s the object-level work of just trying to make progress on these puzzles, plus a bunch of support work for doing that object level work.

The support work includes

  • Operations that make the research machines run (ex: MIRI ops)
  • Recruitment (and acclimation) of people who can do this kind of work (ex: CFAR)
  • Creating and maintaining infrastructure that enables intellectually fruitful conversations (ex: LessWrong)
  • Developing methodology for making progress on the problems (ex: CFAR, a little, but in practice I think that this basically has to be done by the people trying to do the object-level work.)
  • Other stuff.

So we have a whole ecosystem of folks who are supporting this preparadigmatic development.

Civilizational Sanity

I think that in most worlds, if we completely succeeded at the pre-paradigmatic science, and the incremental science and engineering that follows it, the world still wouldn’t be saved.

Broadly, one way or the other, there are huge technological and social changes heading our way, and human decision makers are going to decide how to respond to those changes, possibly in ways that will have very long term repercussions on the trajectory of earth-originating life.

As a central example, if we more-or-less-completely solved AI alignment, from a full theory of agent foundations all the way down to the specific implementation, we would still find ourselves in a world where humanity has attained god-like power over the universe, which we could very well abuse, ending up with a much, much worse future than we might otherwise have had. And by default, I don’t expect humanity to refrain from using new capabilities rashly and unwisely.

Completely solving alignment does give us a big leg up on this problem, because we’ll have the aid of superintelligent assistants in our decision making, or we might just have an AI system implement our CEV in classic fashion.

I would say that “aligned superintelligent assistants” and “AIs implementing CEV” are civilizational sanity interventions: technologies or institutions that help humanity’s high-level decision-makers to make wise decisions in response to huge changes that, by default, they will not comprehend.

I gave some examples of possible Civ Sanity interventions here.

Also, I think that some forms of governance / policy work that OpenPhil, OpenAI, and FHI have done count as part of this category, though I want to cleanly distinguish between pushing for object-level policy proposals that you’ve already figured out, and instantiating systems that make it more likely that good policies will be reached and acted upon in general.

Overall, this class of interventions seems neglected by our community, compared to doing and supporting preparadigmatic research. That might be justified. There’s reason to think that we are well equipped to make progress on hard, important research problems, whereas changing the way the world works seems like it might be harder on some absolute scale, or less suited to our abilities.

Do Earths with slower economic growth have a better chance at FAI?

One countervailing thought: I want AGI to be developed in a high-trust, low-scarcity, social-psychological context, because that seems like it matters a lot for safety.

If growth slows enough, does society as a whole become a lot more bitter and cutthroat?

Eli's shortform feed

(Reasonably personal)

I spend a lot of time trying to build skills, because I want to be awesome. But there is something off about that.

I think I should just go after things that I want, and solve the problems that come up on the way. The idea of building skills sort of implies that if I don't have some foundation or some skill, I'll be blocked, and won't be able to solve some thing in the way of my goals.

But that doesn't actually sound right. Like it seems like the main important thing for people who do incredible things is their ability to do problem solving on the things that come up, and not the skills that they had previously built up in a "skill bank".

Raw problem solving is the real thing and skills are cruft. (Or maybe not cruft per se, but more like a side effect. The compiled residue of previous problem solving. Or like a code base from previous project that you might repurpose.)

Part of the problem with this is that I don't know what I want for my own sake, though. I want to be awesome, which in my conception, means being able to do things.

I note that wanting "to be able to do things" is a leaky sort of motivation: because the victory condition is not clearly defined, it can't be crisply compelling, and so there's a lot of waste somehow.

The sort of motivation that works is simply wanting to do something, not wanting to be able to do something. Like specific discrete goals that one could accomplish, know that one accomplished, and then (in most cases) move on from.

But most of the things that I want by default are of the sort "wanting to be able to do", because if I had more capabilities, that would make me awesome.

But again, that's not actually conforming with my actual model of the world. The thing that makes someone awesome is general problem solving capability, more than specific capacities. Specific capacities are brittle. General problem solving is not.

I guess that I could pick arbitrary goals that seem cool. But I'm much more emotionally compelled by being able to do something instead of doing something.

But I also think that I am notably less awesome and on a trajectory to be less awesome over time, because my goals tend to be shaped in this way. (One of those binds whereby if you go after x directly, you don't get x, but if you go after y, you get x as a side effect.)

I'm not sure what to do about this.

Maybe meditate on, and dialogue with, my sense that skills are how awesomeness is measured, as opposed to raw, general problem solving.

Maybe I need to undergo some deep change that causes me to have different sorts of goals at a deep level. (I think this would be a pretty fundamental shift in how I engage with the world: from a virtue ethics orientation (focused on one's own attributes) to one of consequentialism (focused on the states of the world).)

There are some exceptions to this, goals that are more consequentialist (although if you scratch a bit, you'll find they're about living an ideal of myself, more than they are directly about the world), including wanting a romantic partner who makes me better (note that "who makes me better" is virtue-ethics-y), and some things related to my moral duty, like mitigating x-risk. These goals do give me grounding in sort of the way that I think I need, but they're not sufficient? I still spend a lot of time trying to get skills.

Anyone have thoughts?

Sunny's Shortform

I want to give a big thumbs up of positive reinforcement. I think it's great that I got to read an "oops! That was dumb, but now I've changed my mind."

Thanks for helping to normalize this.

Basic Conversational Coordination: Micro-coordination of Intention

Not a perfect solution, but a skilled facilitator can pick up some of the slack here: https://musingsandroughdrafts.wordpress.com/2018/12/24/using-the-facilitator-to-make-sure-that-each-persons-point-is-held/

But yeah, learning to put your point aside for a moment, without losing the thread of it, is an important subskill.

The Basic Double Crux pattern

This is not really a response, but it is related: A Taxonomy of Cruxes.

The Basic Double Crux pattern
This makes me feel like whenever I “take a stance”, it’s an athletic stance with knees bent.

Hell yeah.

I might steal this metaphor.
