Any idea what those optimizations are? I am drawing a blank.
My rough guess is that GPT-4 will have twice the context length: 8192 tokens.
There is a Twitter rumor, supposedly based on a document leaked from OpenAI, which implies GPT-4 will have a context length of at least 32K tokens(!).
Note that OpenAI already provides a fine-tuning API, and it's neither difficult nor expensive to use it to influence the model's values. See RightWingGPT for an example.
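As a rough sketch of how low the barrier is: the legacy OpenAI fine-tuning flow (the one available around the time of RightWingGPT) took JSONL records of prompt/completion pairs. The example pairs and filename below are invented for illustration, not taken from RightWingGPT's actual data:

```python
import json

# A handful of value-laden prompt/completion pairs, in the legacy
# fine-tuning JSONL format (prompt ends with a separator, completion
# ends with a stop sequence). Content here is invented for illustration.
examples = [
    {"prompt": "What should the government do about taxes?\n\n###\n\n",
     "completion": " Cut them across the board. END"},
    {"prompt": "Is regulation good for the economy?\n\n###\n\n",
     "completion": " Mostly no; markets allocate better. END"},
]

with open("values.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# A few hundred such examples, plus one fine-tuning job
# (legacy CLI: openai api fine_tunes.create -t values.jsonl -m davinci),
# was reportedly enough to visibly shift the model's expressed views.
```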
The RightWingGPT post also demonstrates that, despite OpenAI's insistence that "our guidelines are explicit that reviewers should not favor any political group", ChatGPT has a clear political bias, so the process is failing. (Or, more likely, the process is working as designed and OpenAI is lying here.)
You are wrong. Of course TFP is calculated based on real GDP; otherwise it would be meaningless.
There are real issues with how to measure real GDP, because the price index needs to be adjusted for quality. But that's different from calculating based on nominal GDP. As I understand it, the "same product getting cheaper" effect you mentioned is captured nearly perfectly by current methods. What Jensen mentioned is different: it's "the same cost buying more", and that's more problematic.
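To make the real-vs-nominal point concrete, here is a toy Solow-residual calculation. All numbers (capital share, input levels, prices) are made up for illustration:

```python
ALPHA = 0.3  # assumed capital share, for illustration only

def solow_tfp(real_gdp, capital, labor, alpha=ALPHA):
    """TFP as the Solow residual: A = Y / (K^alpha * L^(1-alpha))."""
    return real_gdp / (capital ** alpha * labor ** (1 - alpha))

# Two years: nominal GDP doubles, but prices also double, so real GDP
# (nominal deflated by the price index) is unchanged.
nominal = [100.0, 200.0]
price_index = [1.0, 2.0]
real = [n / p for n, p in zip(nominal, price_index)]  # [100.0, 100.0]

# With unchanged inputs, TFP computed from real GDP is flat:
a1 = solow_tfp(real[0], capital=50.0, labor=100.0)
a2 = solow_tfp(real[1], capital=50.0, labor=100.0)
print(a2 / a1)  # 1.0 — no spurious productivity growth

# Computed from nominal GDP instead, TFP would appear to double (~2x),
# which is pure price inflation, not productivity:
print(solow_tfp(nominal[1], 50.0, 100.0) / solow_tfp(nominal[0], 50.0, 100.0))
```

Quality adjustment enters through the price index: if year 2's product is better, the measured price rise is smaller, so real GDP (and hence TFP) rises even at constant spending.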
I completely agree, and this seems good? I very much want to ally with unproductive rent-seekers and idiots to reduce existential risk. Thanks a lot, unproductive rent-seekers and idiots! (Though I most certainly shouldn't call them that if I want to ally with them.) I don't understand how this is in any way a strong case against the proposition.
Agreed. On the other hand, what I read suggests He Jiankui was bottlenecked on parental consent. For his first-in-human trial, he couldn't recruit any parents interested in editing PCSK9; but some parents, themselves HIV patients, whose contacts were relatively easy to acquire from an HIV support group, really, really cared about editing CCR5 (as you pointed out, and I agree, incorrectly), and were easily recruited. It sometimes happens that recruiting participants is the limiting factor in running trials, and I think that was the case here.
Very interesting! Recently, the US started to regulate the export of computing power to China. Do you expect this to speed up the AGI timeline in China, do you expect the regulation to be ineffective, or something else?
Reportedly, NVIDIA developed the A800, which is essentially an A100 with reduced interconnect bandwidth, to keep the letter but probably not the spirit of the regulation. I am trying to follow closely how the A800 fares, because it seems to be an important data point on the feasibility of regulating computing power.
AP exams are scored on a scale of 1 to 5, so yes, getting the exact same score with zero difference makes sense.