I would agree with this if Eliezer had never properly engaged with critics, but he's done that extensively. I don't think there should be a norm that you have to engage with everyone, and "ok, choose one point, I'll respond to that" seems better than not engaging at all. (Would you have been more enraged if he hadn't commented anything?)
> it is almost inevitable that we will be a tedious, frustrating and, shall we say - stubborn and uncooperative "partner" who will be unduly complicating the implementation of whatever solutions the AGI will be proposing.

> It will, then, have to conclude that you "can't deal" very well with us, and we have a rather over-inflated sense of ourselves and our nature. And this might take various forms, from the innocuous, to the downright counter-productive.
This all seems to rely on anthropomorphizing the AI to me.
I think you're making the mistake of not cleanly separating boring objective facts from attitudes/should-statements/reactions/etc., and this is responsible for almost 100% of the issues I have with your reasoning.
Like, AI will figure out we're irrational. Yup! It will know working with us is less effective at accomplishing a wide range of goals than working alone. Sure! It will know that our preferences are often inconsistent. Definitely! Working with us will be frustrating. What??? Why on earth would it feel frustration? That's a very specific human emotion we have for evolutionary reasons. What specific things do you claim to know about its training procedure that justify the very specific claim that it would feel this particular thing? ... and so on. If you strictly taboo all forms of anthropomorphizing and stick only to cold inferences, can you see how your point no longer works?
I also don't really get your position. You say that,
> [Eliezer] confidently dismisses ANNs
but you haven't shown this!
In Surface Analogies and Deep Causes, I read him as saying that neural networks don't automatically yield intelligence just because they share surface similarities with the brain. This is clearly true; at the very least, using token prediction (a task for which (a) lots of training data exists and (b) competence in many different domains is helpful) is a second requirement. If you took the network of GPT-4 and trained it to play chess instead, you wouldn't get something with cross-domain competence.
In Failure by Analogy he makes a very similar abstract point -- and with respect to neural networks in particular, he says that surface similarity to the brain is a bad reason to be confident in them. This also seems true. Do you really think that neural networks work because they are similar to brains on the surface?
You also said,
> The important part is the last part. It's invalid. Finding a design X which exhibits property P, doesn't mean that for design Y to exhibit property P, Y must be very similar to X.
But Eliezer says this too in the post you linked (Failure by Analogy)! His example of airplanes not flapping their wings is one where the design that worked was less similar to the biological original. So clearly the point isn't that X has to be similar to Y; the point is that reasoning from analogy doesn't tell you this either way. (I kind of feel like you already got this, but then I don't understand what point you're trying to make.)
Which is actually consistent with thinking that large ANNs will get you to general intelligence. You can both hold that "X is true" and "almost everyone who thinks X is true does so for poor reasons". I'm not saying Eliezer did predict this, but nothing I've read proves that he didn't.
Also -- and this is another thing -- the fact that he didn't publicly make the prediction "ANNs will lead to AGI" is only weak evidence that he didn't privately think it because this is exactly the kind of prediction you would shut up about. One thing he's been very vocal on is that the current paradigm is bad for safety, so if he was bullish about the potential of that paradigm, he'd want to keep that to himself.
Didn't he? He at least confidently rules out a very large class of modern approaches.
> because nothing you do with a loss function and gradient descent over 100 quadrillion neurons, will result in an AI coming out the other end which looks like an evolved human with 7.5MB of brain-wiring information and a childhood.
In that quote, he only rules out a large class of modern approaches to alignment, which again is nothing new; he's been very vocal about how doomed he thinks alignment is in this paradigm.
Something Eliezer does say which is relevant (in the post on Ajeya's biology anchors model) is
> Or, more likely, it's not MoE [mixture of experts] that forms the next little trend. But there is going to be something, especially if we're sitting around waiting until 2050. Three decades is enough time for some big paradigm shifts in an intensively researched field. Maybe we'd end up using neural net tech very similar to today's tech if the world ends in 2025, but in that case, of course, your prediction must have failed somewhere else.
So here he's saying that there is a more effective paradigm than large neural nets, and we'd get there if we don't have AGI in 30 years. So this is genuinely a kind of bearishness on ANNs, but not one that precludes them giving us AGI.
If you mean how I accessed it at all, I used the official channel from OpenAI: https://chat.openai.com/chat
If you have a premium account ($20/month), you can switch to GPT-4 after starting a new chat.
I reject this terminology; I think #2 is superintelligence and #1 is a different dimension.
Also, I would actually differentiate two kinds of #1. There's how much stuff the AI can reason about, which is generality (you can have a "narrow superintelligence" like a chess engine), and there's how much it knows, which is knowledge base/resource access. But I wouldn't call either of them (super)intelligence.
This is pretty funny because the supposed board state has only 7 columns. Yet it's also much better than random; a lot of the pieces are correct -- that is, if you count from the left (the real board state is here).
Also, I've never heard of using upper and lower case to differentiate white and black; I think GPT-4 just made that up. (Edit: or not; see reply.)
Extra twist: I just asked a new GPT-4 instance whether any chess notation differentiates lower and upper case, and it told me algebraic notation does -- but that's the standard notation, and it doesn't. The Wikipedia article also says nothing about it. Very odd.
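For what it's worth, FEN (Forsyth-Edwards Notation), which records whole board positions rather than moves, does use letter case this way: uppercase for white pieces, lowercase for black. A minimal sketch (the helper function is mine, just for illustration):

```python
# FEN encodes piece color by letter case: uppercase = white, lowercase = black.
# This is the standard starting position.
start_fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def piece_colors(fen: str) -> dict:
    """Count white vs. black pieces in a FEN string by letter case."""
    board = fen.split()[0]  # first field is the piece placement
    counts = {"white": 0, "black": 0}
    for ch in board:
        if ch.isalpha():  # digits are empty squares, '/' separates ranks
            counts["white" if ch.isupper() else "black"] += 1
    return counts

print(piece_colors(start_fen))  # {'white': 16, 'black': 16}
```

So if GPT-4 was thinking of position notation rather than move notation, the case claim isn't entirely made up.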
TAG said that Libertarian Free Will is the relevant one for Newcomb's problem. I think this is true. However, I strongly suspect that most people who write about decision theory, at least on LW, agree that LFW doesn't exist. So arguably almost the entire problem is about analyzing Newcomb's problem in a world without LFW. (And of course, a big part of the work is not deciding which action is better, but formalizing procedures that output that action.)
This is why differentiating different forms of Free Will and calling that a "Complete Solution" is dubious. It seems to miss the hard part of the problem entirely. (And the hard problem has arguably been solved anyway with Functional/Updateless Decision Theory. Not in the sense that there are no open problems, but that they don't involve Newcomb's problem.)
I recently listened to Gary Marcus speak with Stuart Russell on the Sam Harris podcast (episode 312, "The Trouble With AI," released on March 7th, 2023). Gary and Stuart seem to believe that current machine learning techniques are insufficient for reaching AGI, and point to the recent adversarial attacks on KataGo as one example. Given this position, I would like Gary Marcus to come up with a new set of prompts that (a) make GPT-4 look dumb and (b) mostly continue to work for GPT-5.
While this is all true, it's worth pointing out that Stuart's position was more nuanced and uncertain. He pushed back when Gary said it was obvious, and he mentioned that some prompts made him update in the opposite direction. I don't think they should be lumped into the same epistemic box.
Right, though 20 moves until a new game is very rare afaik (assuming the regular way of counting, where 1 move means one from both sides). But 15 is commonplace. According to chess.com (which I think only includes top games, though I'm not sure), this one was already novel from white's move 6.
I assume you're asking if someone can query GPT-4 with this. If so, I did, and here's the response.