As a kinda-maybe-normal person: I would simply say "Actually, I'm heading in the same direction" loud enough for them to hear (their non-interest be damned).
Had it gotten it right, that would probably have meant that it memorized this specific, very common question. Memorizing things isn't that impressive, and memorizing one specific thing says nothing about capabilities, since a one-line program could "memorize" this one sentence. This way, however, we can be sure that it thinks for itself; incorrectly in this case, sure, but still.
Do people in that thread understand how GPT getting, e.g., the ball+bat question wrong is more impressive than it getting it right, or should I elaborate?
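(For anyone who hasn't seen it, the standard bat-and-ball question, assuming that's the one meant here: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive answer of $0.10 for the ball is wrong:

$$b + (b + 1.00) = 1.10 \implies 2b = 0.10 \implies b = 0.05$$

so the ball actually costs $0.05.)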
I have not seen the Simulation Trilemma and anthropic reasoning mentioned in any of the other comments, yet I think those topics are pretty interesting.
Also +1 for FDT.
And I haven't heard of anyone saying GPT-3 can be made into AGI with a bit of tweaking, scaling, and prompt engineering.
I am one who says that (not certain, but with high probability), so I thought I'd chime in. The main ideas behind my belief are that
I don't think prompt engineering is relevant to AGI.
I would be glad for any information that can help me update.
My guess is
recursion
On mobile with my throwaway account, the current karma threshold needed to push the button is 1600 for me, but on desktop from my main account it is 1800. (Though while the page is loading, 1600 appears and then changes; the number of users above the threshold changes similarly.) Possible bug?
Update: the same discrepancy now appears with 1500 and 1700.
Update 2: there is no discrepancy now.
2023 is a deficient number. (This fact is not that fun.)
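For the curious, here's a quick way to check (a minimal sketch in Python; since 2023 = 7 × 17², its proper divisors are 1, 7, 17, 119, and 289, which sum to only 433):

```python
def is_deficient(n: int) -> bool:
    """A number is deficient if its proper divisors sum to less than the number itself."""
    return sum(d for d in range(1, n) if n % d == 0) < n

print(is_deficient(2023))  # True: 1 + 7 + 17 + 119 + 289 = 433 < 2023
```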