I understand feeling frustrated given the state of affairs, and I accept your apology.

Have a great day.

You don’t have an accurate picture of my beliefs, and I’m currently pessimistic about my ability to convey them to you. I’ll step out of this thread for now.

I find the accusation that I'm not going to do anything slightly offensive.

Of course, I cannot share what I have done and plan to do without severely de-anonymizing myself. 

I'm simply not going to take humanity's horrific odds of success as a license to make things worse, which is exactly what you seem to be insisting upon.

Default comment guidelines:

  • Aim to explain, not persuade
  • Try to offer concrete models and predictions
  • If you disagree, try getting curious about what your partner is thinking
  • Don't be afraid to say 'oops' and change your mind

Your reply does not even remotely resemble good faith engagement. 

You can unilaterally slow down AI progress by not working on it. Each additional day until the singularity is one additional day to work on alignment.

"Becoming the fire" because you're doomer-pilled is maximally undignified. 

Why not create non-AI startups that are way less likely to burn capabilities commons?

Random thoughts:

  1. Wouldn't it be best for the rolling-admissions program to be part of MATS? 
  2. Some ML safety engineering bootcamps scare me. Once you're taking in large groups of new-to-EA/new-to-safety people and teaching them how to train transformers, I'm worried about downside risks. I have heard that Redwood has been careful about this. Cool if true. 
  3. What does building a New York-based hub look like?

What sort of value do you expect to get out of "crossing the theory-practice gap"?

Do you think this will give you better insight into which directions to focus on in your research, for example? 

I filled out an application. This looks like a very promising program.

Load More