Throwaway2367

Comments

2023 is a deficient number. (This fact is not that fun.)
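
For anyone who wants to check: a number is deficient when its proper divisors (every divisor except the number itself) sum to less than the number. A quick sketch in plain Python:

```python
# Verify that 2023 is deficient: the sum of its proper divisors is less than 2023.
n = 2023
proper_divisors = [d for d in range(1, n) if n % d == 0]
print(proper_divisors)       # [1, 7, 17, 119, 289]
print(sum(proper_divisors))  # 433, which is < 2023, so 2023 is deficient
```

(2023 = 7 × 17², so its proper divisors sum to only 433; a perfect number would hit the number exactly, an abundant one would overshoot.)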

As a kinda-maybe-normal person: I would simply say "Actually, I'm heading in the same direction" loud enough for them to hear (their non-interest be damned).

Had it gotten it right, that would probably have meant that it had memorized this specific, very common question. Memorizing things isn't that impressive, and memorizing one specific thing says nothing about capabilities, since a one-line program could "memorize" this one sentence. This way, however, we can be sure that it thinks for itself; incorrectly in this case, sure, but still.
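
To make that concrete, here is a minimal sketch of such a one-line "memorizer" (the keyword lookup and canned answer are made up for illustration; it "solves" the classic bat-and-ball question without doing any reasoning at all):

```python
# A one-line "memorizer": returns the canned answer to the bat-and-ball
# question by keyword lookup, with no arithmetic or reasoning involved.
answer = lambda q: "$0.05" if "bat" in q and "ball" in q else "no idea"

print(answer("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
             "more than the ball. How much does the ball cost?"))  # -> $0.05
```

Such a program gets this one question right every time, yet it tells you nothing about general capability.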

Do people in that thread understand how GPT getting, e.g., the bat-and-ball question wrong is more impressive than it getting it right, or should I elaborate?
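
For reference, the bat-and-ball question: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; how much does the ball cost? The common intuitive answer is $0.10, but writing $b$ for the ball's price:

$$b + (b + 1.00) = 1.10 \;\Longrightarrow\; 2b = 0.10 \;\Longrightarrow\; b = 0.05,$$

so the correct answer is five cents, not ten.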

I have not seen the Simulation Trilemma and anthropic reasoning mentioned in any of the other comments, yet I think those topics are pretty interesting.

Also +1 for FDT.

"And I haven't heard of anyone saying GPT-3 can be made into AGI with a bit of tweaking, scaling, and prompt engineering."

I am one who says that (not certain, but with high probability), so I thought I'd chime in. The main ideas behind my belief are:

  1. The Kaplan and Chinchilla papers show the functional relationship between resources and cross-entropy loss (sketched just after this list). With high probability I believe this scaling won't break down significantly, i.e. we can get ever closer to the theoretical irreducible entropy with transformer architectures.
  2. Cross-entropy loss measures the distance between two probability distributions, in this case the distribution of human-generated text (encoded as tokens) and the distribution learned by the model (also sketched below). I believe with high probability that this measure is relevant, i.e. we can only get to a low enough cross-entropy loss when the model is capable of doing human-comparable intellectual work (irrespective of whether it actually does it).
  3. After the model achieves the necessary cross-entropy loss and consequently becomes capable, somewhere inside it, of producing AGI-level work (as per 2.), we can get the model to output that level of work with minor tweaks (I don't have specifics, but think on the level of letting the model recursively call itself on some generated text via a special output command, or some such).
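
As a sketch of the quantities in points 1 and 2: the Chinchilla paper fits the loss as a parametric function of parameter count $N$ and training tokens $D$ (constants omitted here; only the functional form matters for the argument),

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},$$

where $E$ is the irreducible entropy of the text distribution and the other two terms shrink as resources grow. The cross-entropy between the human text distribution $p$ and the model distribution $q$ decomposes as

$$H(p, q) = -\mathbb{E}_{x \sim p}\big[\log q(x)\big] = H(p) + D_{\mathrm{KL}}(p \,\|\, q),$$

so the loss can only approach its floor $H(p)$ (the $E$ above) by driving the KL divergence between the model and human-generated text toward zero.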

I don't think prompt engineering is relevant to AGI.

I would be glad for any information that can help me update.

My guess is

recursion

On mobile with my throwaway account, the current karma threshold needed to push the button is 1600 for me, but on desktop from my main account it is 1800. (While the page is loading, 1600 appears at first and then changes; the number of users above the threshold changes similarly.) Possible bug?

Update: now the same discrepancy appears, but with 1500 and 1700.

Update 2: there is no discrepancy now.
