Economist.
This multidimensionality is exactly why I think the term "human-level intelligence" should not be used. My impression is that it suggests a one-dimensional kind of ability with a threshold at which the quality changes drastically; and the term even seems to suggest that this threshold lies at a level that is in fact not decisive.
I considered adding this possible interpretation. However, I do not see why creating an AI that crosses this particular threshold would be especially meaningful as opposed to, say, one with an IQ of 95, 107, or 126. Being "smarter" than 50% of humanity does not seem to constitute a discrete jump in risks.
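For concreteness, here is a minimal sketch of what those scores mean in percentile terms, assuming the conventional model of IQ as normally distributed with mean 100 and standard deviation 15 (the specific scores are just the examples from above):

```python
# Minimal sketch, assuming the usual convention that IQ scores are
# normally distributed with mean 100 and standard deviation 15.
from scipy.stats import norm

for iq in (95, 100, 107, 126):
    share = norm.cdf(iq, loc=100, scale=15)
    print(f"IQ {iq}: 'smarter' than about {share:.0%} of the population")

# IQ 95 -> ~37%, IQ 100 -> ~50%, IQ 107 -> ~68%, IQ 126 -> ~96%
```

None of these percentiles looks like a natural place for a discrete jump in risk.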
Bing already has human-level intelligence and access to most of the internet. Speculating about how long it will take before Bing or another LLM becomes superhumanly smart is a scary thought, especially because we haven't even managed to align Bing yet.
Given the distribution of human intelligence, I find it hard to say when something should be considered "superhumanly smart". And given the behavior of Bing, I am unsure whether we would even recognize it, even if there were a clear cut-off level.
People worried about AI alignment: At some point, an AI might succeed in convincing people to give it access to the internet!
Big internet company seeing an AI: Hm, we could just include the AI in our search engine.
Thank you very much! I added the missing question: "Who were the scholars most influenced by that theorem?"
While I also really like this post, I am confused by your reasoning. You want to have it as a reference because "Each of the ideas here is something anyone doing 'rationalist discourse' should be familiar with as a modality, and shift into at least sometimes". I would like to know what you mean, because to me it sounds as if you want a reference to invoke when you think the other side in a debate should obey these standards, while not wanting to be restricted by the same set of norms yourself. Would you like to elaborate?
Thanks, Christian! Of course you are right! In this case, the shortform entry was meant to be funny for people who understand the languages, like you. But I see that it can also be misleading for people who do not understand them.
That is true, but it is also true for the Nash thing, I would say.
German Jokes translated by ChatGPT
Me:
Here's a German joke, could you please translate it to English such that the joke is still funny? Zwei Kerzen unterhalten sich. Sagt die eine zur anderen: "Ist Wasser eigentlich gefährlich?" Die andere antwortet: "Ja, davon kannst du ausgehen!"
ChatGPT:
Here's a translation of the German joke into English:
Two candles are talking to each other. One says to the other, "Is water dangerous?" The other answers, "Yes, you can count on it!"
The joke still retains its humor in English, as it relies on a simple play on words and a pun, with the word "count" having both a numerical meaning and a meaning of "rely on."
Me:
In Zentralamerika gibt es zwar schlechte Lackierer, aber Guatemala.
ChatGPT:
Here's a translation of the German joke into English:
In Central America, there may be bad painters, but there's always Guatemala.
The joke still retains its humor in English, as it relies on a play on words and a pun, with the name "Guatemala" sounding similar to the German word "gut malen," which means "to paint well."
Me:
Frage: "Für wen kauft ein Egoist Obst?" Antwort: "Pfirsich!"
ChatGPT:
Here's a translation of the German joke into English:
Question: "For whom does an egotist buy fruit?" Answer: "A Peach!"
The joke still retains its humor in English, as it relies on a play on words and a pun, with the word "peach" sounding similar to the word "piece," as in a piece of fruit, but also having a secondary meaning of "an attractive or admirable person."
The Metaculus definition is very interesting, as it is quite different from what M. Y. Zuo suggested as the natural interpretation of "human-level intelligence".
I like the PASTA suggestion, thanks for quoting that! However, I wonder whether that bar is a bit too high.