Comments

I've read your explanation of what happened, and it still seems like the board acted extremely incompetently. Call me an armchair general if you want. Specific actions that I take severe issue with:

  1. The decision to fire Sam, instead of just ejecting him from the board

Kicking Sam off the board, firing him, and removing Greg all at once, with no real explanation, was completely unnecessary, and it's what ultimately gave Sam the casus belli for organizing the revolt in the first place. It certainly wasn't required to defend Helen from Sam's attacks.

Consider what would have happened if Sam had just lost his board seat. First, his cost-benefit analysis looks different: Sam still has most of what he had before to lose, namely his actual position at OpenAI, and so he probably doesn't hold the entire organization hostage. Second, the nuclear actions he actually took - quitting in protest and moving to Microsoft - suddenly look incredibly vindictive and far less sympathetic to his allies inside the company, including Greg. Third, if Sam then tries to use his position as CEO to sabotage the company or subvert the board further (now lacking his own seat), he's handing you more ammunition to fire him later if you really need to.

If I had been on the board, my first action after getting the five together would have been to call Greg and Mira into an office and explain what was going on. Then, after a long conversation about our motivations (whether or not they agreed with our decision), I would immediately call Sam in, in person or over the internet, and deliver the news that he was no longer a board member and that the vote had already passed. I would then overtly and clearly explain, in front of everybody present, the reasoning behind why he was losing the board seat ("we felt you were trying to compromise the integrity of the board with your attacks on Helen and by playing board members against one another"). If appropriate, we offer him the option to save face by saying he voluntarily resigned to keep the board independent. Even if he doesn't go quietly, in this setting he's pretty much incapable of pulling any of the shenanigans he did over that weekend, and key people know the surface reason why he's being ejected, so his next actions already look very sketchy in light of that.

If the objection is that Sam's mystical powers of persuasion will be used to corrupt the organization and launch a counter-coup further down the road, well, now you've at least created common knowledge of his intent to capture OpenAI (or at least common suspicion), you've removed his vote in the first place, and you're writing down everything you've said to him and everything he's said so far, so it should be much harder for him to pull that off.

  2. The decision never to explain why they ejected Sam.

Mind-boggling. You can't just depose the favored leader of the organization and not at least lie in a satisfying way. People desperately wanted to know why the board fired him, and whether or not it was something beyond EA affiliation. Which it was! So just fucking say that, and once you do, it's on Sam to prove or disprove your accusations. People - I'd wager even people inside OpenAI who felt some semblance of loyalty to him - did not actually need that much evidence to believe that Sam Altman, Silicon Valley's career politician, is a snake who was trying to corrupt the organization he was a part of. Say you have private information; explain precisely the things you explain in the above comment. That's far better than saying nothing, because if you say nothing, Sam and everybody watching gets to blame their pet opposing political faction for the crisis and substitute suspicious motives for the board's actual reasons.

  3. The decision not to be aggressive in denouncing Sam after he started actively threatening to destroy the company.

Beyond communicating the motivation behind the initial decision, the board (ideally Ilya, if you can get him to do it) should have been on Twitter the entire time, screaming at the top of their lungs that Sam's actions were quintessentially petty, that he was willing to burn the entire company down in service of his personal ambitions, and that while kicking Sam off the board was a tough call and many tears were shed, everything that happened over the last three days - his willingness to destroy all of your hard-earned OpenAI equity and hand it to Microsoft, etc. - was a resounding endorsement of their decision to fire him, and that they will never surrender, etc. etc. etc. The only reason Sam's strategy of feeding the press stories about his inevitable return worked in the first place was that the board stayed completely fucking silent the entire time and refused to give any hint as to what they were thinking, either to the staff at OpenAI or to the general public.

This post was actually quite enlightening, and felt more immediately helpful for me in understanding the AI x-risk case than any other single document. I think that's because, while other articles on AI risk can seem kind of abstract, this one considers concretely what kind of organization would be required to navigate alignment issues, in the mildly inconvenient world where alignment is not solved without some deliberate effort, which put me firmly into near-mode.

This was my favorite non-AI post of 2022, perhaps partly because the story of Toni Kurz serves as a metaphor for the whole human enterprise. Reading about these men's trials was riveting, and it sent me into deep reflection about my own values.

I have literally never been remotely bothered by someone in front of me reclining their seat. It would never have occurred to me that people felt strongly about the decision until I saw this Twitter thread.

Just had a conversation with a guy who claimed that the main thing separating him from EAs is that his failure mode is us not conquering the universe. He said that while doomers were fundamentally OK with us staying chained to Earth and never expanding into a nice intergalactic civilization, he, an AI developer, was concerned about the astronomical loss (not his term) of failing to seed the galaxy with our descendants. For him, this P(utopia) trumped all other relevant expected-value considerations.
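
For concreteness, here is a toy sketch of the expected-value comparison he seemed to be running; the two-term decomposition and the illustrative numbers are my own, not his:

\[
\mathbb{E}[V] \,=\, P(\text{utopia}) \cdot V(\text{utopia}) \,+\, \bigl(1 - P(\text{utopia})\bigr) \cdot V(\text{everything else})
\]

If you take $V(\text{utopia})$ to be astronomical, say $10^{50}$ future lives, while every non-utopia outcome is bounded by something like $10^{10}$ lives, then raising $P(\text{utopia})$ by even one part in $10^{30}$ moves the expected value by more than the entire second term ever could, and every other consideration rounds to zero.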

What went wrong?

Oh come on, I was on board with your other satire, but no rationalist actually says this sort of thing.

MadHatter is an original and funny satirist. The universally serious reaction to his jokeposts is a quintessential example of rationalist humorlessness.

Edit/Addendum: I have just read your post on infohazards. I retract my criticisms; as a fellow infopandora and jessicata stan, I think the site requires more of your work.
