Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.

Longer bio: www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team-page-under-construction#Ben_Pace___Benito

Sequences

AI Alignment Writing Day 2019
Transcript of Eric Weinstein / Peter Thiel Conversation
AI Alignment Writing Day 2018
Share Models, Not Beliefs


Comments

Fair enough. Nonetheless, I have had this experience many times with Eliezer, including when dialoguing with people with much more domain experience than Scott.

[Alexander][14:17]   

Can you expand on sexual recombinant hill-climbing search vs. gradient descent relative to a loss function, keeping in mind that I'm very weak on my understanding of these kinds of algorithms and you might have to explain exactly why they're different in this way?

[Yudkowsky][14:21]   

It's about the size of the information bottleneck. [followed by a 6 paragraph explanation]

It's sections like this that show me how many levels above me Eliezer is. When I read Scott's question I thought "I can see that these two algorithms are quite different but I don't have a good answer for how they're different", and then Eliezer not only had an answer, but a fully fleshed out mechanistic model of the crucial differences between the two that he could immediately explain clearly, succinctly, and persuasively, in 6 paragraphs. And he only spent 4 minutes writing it.
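Since the actual six-paragraph explanation is elided above, here is my own toy sketch of the two kinds of optimizer Scott is asking about. This is purely illustrative and is not Eliezer's argument; the loss function, population size, mutation scale, and learning rate are all made-up assumptions.

```python
# Toy contrast between a sexual-recombination hill-climbing search and
# gradient descent on the same loss. All names and hyperparameters below
# are arbitrary choices for illustration.
import random

def loss(x):
    # A simple one-dimensional loss with its minimum at x = 3.
    return (x - 3.0) ** 2

def hill_climbing_with_recombination(pop_size=20, generations=100):
    # Each "genome" is just a float. Selection keeps the fitter half, and
    # children are made by recombining two parents plus a small mutation.
    # Only the genomes themselves pass between generations.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=loss)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append((a + b) / 2 + random.gauss(0, 0.1))
        population = parents + children
    return min(population, key=loss)

def gradient_descent(steps=100, lr=0.1):
    # Gradient descent gets a full error signal at every step and updates
    # the parameter directly, rather than filtering everything through
    # selection over a population.
    x = random.uniform(-10, 10)
    for _ in range(steps):
        grad = 2 * (x - 3.0)  # derivative of (x - 3)^2
        x -= lr * grad
    return x

print(hill_climbing_with_recombination(), gradient_descent())
```

The only information that survives a generation in the first loop is the genomes themselves, which is (as I understand it) the kind of bottleneck Eliezer's "size of the information bottleneck" remark points at, whereas gradient descent receives a dense gradient signal at every single step.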

Curated. I continue to find concrete examples of this helpful, along with walkthroughs of how the maziness levels rise, and this post has both.

The default situation we're dealing with is:

  • People who are self-interested get selected up the hierarchy
  • People who are willing to utilize short-termist ways of looking good get selected up the hierarchy
  • People who are good at playing internal politics get selected up the hierarchy

So if I imagine a cluster of self-interested, short-term thinking internal-politics-players... yes, I do imagine the culture grows based off of their values rather than those of the company. Good point.

I guess the culture is a function of the sorts of people there, rather than something that's explicitly set from the top down. I think that was my mistake.

I think this is a fantastically clear analysis of how power and politics work, one that made a lot of things click for me. I agree it should be shorter, but honestly every part of this is insightful. I find myself unsure even how to review it, because I don't know how to compare this to how confusing the world was before this post. This is some of the best sense-making I have read about how governmental organizations function today.

There's a hope that you can just put the person who's most obviously right in charge. This post walks through the basic things that would break, and explains some reasons Zvi is in an advantageous position relative to the person in charge (because Zvi can just optimize for being right, whereas the person in charge has to handle politics and leadership). It then walks through how the internals of power actually work, what sort of person is selected for (and shapes themselves to be), and also some counterintuitive reasons why it might work to put an outsider in charge (because the status quo is always right, and if handled well the outsider would soon become the status quo).

The post could somehow be better; it's hard for me to see the whole picture at once, because it discusses a number of separate dynamics all occurring at the same time in an organization. Nonetheless I give this a +9.

The main thing I track is whether I expect healthy information flows when the information is relevant. 

If someone was arrested for robbery and I'm citing their work on Quantum Mechanics, I wouldn't think it relevant to bring up. If they were being considered for a job looking after finances I'd want to make sure the person hiring them knew. 

If I felt like nobody would tell them because it was all hush-hush, then I would be more likely to write something about it publicly... though not really in the stuff about quantum mechanics? It seems unfair to punish them in every possible channel, as long as they're bearing the actual costs involved (job opportunities, reputation amongst colleagues and coworkers, etc.).

If I thought it was being super quashed, I might have a footnote at the start of my discussion of quantum mechanics saying "For the record I have ethical concerns about this person's behavior in other situations, here is a link to a brief shortform comment by me on that" or a link to info about it, but otherwise not bring it up.

In general, I think most disclaimers aren't worth it.

I am still confused about moral mazes.

I understand that power-seekers can beat out people earnestly trying to do their jobs. In terms of the Gervais Principle, the sociopaths beat out the clueless.

What I don't understand is how the culture comes to reward corrupt and power-seeking behavior.

One reason someone gave me is that it's in the power-seekers' interest to reward other power-seekers.

Is that true?

I think it's easier for them to beat out the earnest and gullible clueless people.

However, there's probably lots of ways that their sociopathic underlings can help them give a falsely good impression to their boss.

So perhaps it is the case that on-net they reward the other sociopaths, and build coalitions.

Then I can perhaps see it growing, in their interactions with other departments.

I'd still have hope that the upper management could punish bad cultural practices.

But by default they will have more important things on their plate than fighting for the culture. (Or, they think they do.)

One question is how the coalitions of sociopaths survive.

Wouldn't they turn on each other as soon as it's politically convenient?

I don't actually know how often it is politically convenient.

And I guess that, as long as they're being paid and promoted, there is enough plenty and increasing wealth that they can afford to work together.

This throws into relief the extent to which they are selfish people, not evil. Selfish people can work together just fine. The point is that those who are in it for themselves in a company can work together to rise through its ranks and warp the culture (and functionality) of the company along the way.

Then, when a new smart and earnest person joins, they are confused to find that they are being rewarded for selling themselves, for covering up mistakes, for looking good in meetings, and so forth.

And the people at the top feel unable to fix it; it's already gone too far.

There's free energy to be eaten by the self-interested, and unless you make it more costly to eat it than not (e.g. by firing them), they will do so.

I think Jim Babcock suggested having a leaderboard on every tag page, for who has the most points in that tag. So there's lots of different ladders to climb and be the leader of!
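For concreteness, here's a rough sketch of how a per-tag leaderboard could be computed. The data shape (author, tags, karma fields) is hypothetical, not LessWrong's actual schema.

```python
# Aggregate each author's karma per tag, then rank within each tag.
# The post dict layout is an assumption for illustration only.
from collections import defaultdict

def tag_leaderboards(posts, top_n=10):
    # posts: iterable of dicts like {"author": str, "tags": [str], "karma": int}
    totals = defaultdict(lambda: defaultdict(int))
    for post in posts:
        for tag in post["tags"]:
            totals[tag][post["author"]] += post["karma"]
    # For each tag, sort authors by total karma and keep the top_n.
    return {
        tag: sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
        for tag, scores in totals.items()
    }
```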

Epistemic status: thinking aloud; I haven't got any plans, and I'm definitely not making any commitments.

I'm thinking about building a pipeline to produce a lot of LessWrong books based around authors. The idea is that each of a bunch of authors would have their best essays collated into a single book, with a coherent aesthetic, ML art, and attractive typography.

This stands in contrast to making books around sequences. I do really like sequences, but when I think about most authors whose work I love from the past 5 years, there are a lot of standalone essays, and they don't tie together half as neatly as Eliezer's original sequences (and were not supposed to). For instance, Eliezer himself has written a lot of standalone dialogues in recent years that could be collated into a book, but that don't make sense as a 'sequence' on a single theme.

Well, the actual reason I'm writing this is that I sometimes feel a tension between my own taste (e.g. who I'd like to make books for) and not wanting to impose my opinions too dictatorially on the website. Like, I do want to show the best of what LessWrong has, but I think sometimes I'd want to have more discretion. For instance, maybe I'd want to release a series of three books on similar themes, but I don't know a good way to justify my arbitrary choice there of which books to make.

The review was a process for making a book that well-represented the best of LW while taking my judgment about content out of the decision as much as possible. I also don't know that I really want to use a voting process to determine which essays go into these books; I think the essays here are more personal and represent the author, and a book also coheres better if it's put together with a single vision. I'd rather that be a collaboration between me and the author (to the extent they wish to be involved).

Thoughts on ways for me to move forward on this?
