I listened to some podcasts that he appeared on that seemed to place more emphasis on the risks. I wonder how much of the impact of releasing a book these days comes from the book itself and how much comes from appearing on various podcasts or from the various reviews that people write about it. I wouldn't be surprised if the impact of the latter was actually greater. So I suspect the release of this book will end up being quite beneficial for us.
“Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai - there’s a reason this is on LessWrong and not the EA forum”
As the EA community has become less intense, I've sometimes wondered whether there would be value in someone starting an LW- or EA-adjacent group that sits on the more intense part of the spectrum.
I definitely see risks associated with this (people pushing themselves too hard, fanaticism) and I probably wouldn’t want to be part of it myself, but I imagine that it could be a good fit for some people.
Either that, or accurate instructions for making anthrax are scattered all over the internet and it is an unusually easy biological agent to produce, such that Llama-2 did pick it up -- but again, that would mean Llama-2 isn't particularly a problem!
Hard disagree. These techniques are so much more worrying if you don't have to piece together instructions from different locations and assess the reliability of comments on random forums.
So I listened to the conversation. Summary:
I think it could be very valuable to use a language model to iterate over thousands of problems and identify the most common data structures and algorithms in order of how common they are.
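The tallying step could be sketched roughly as follows. This is a minimal sketch, not a working pipeline: `classify_problem` is a hypothetical stand-in for an actual language-model call, implemented here as a toy keyword heuristic so the example runs on its own.

```python
from collections import Counter

# Hypothetical stand-in for a language-model call: in a real pipeline this
# would prompt a model to name the data structures/algorithms a problem uses.
def classify_problem(problem_text):
    # Toy keyword heuristic as a placeholder for the model's judgement.
    keywords = {
        "shortest path": ["graph", "BFS"],
        "substring": ["string", "sliding window"],
        "k-th largest": ["heap"],
        "prefix sums": ["array", "prefix sum"],
    }
    labels = []
    for phrase, tags in keywords.items():
        if phrase in problem_text.lower():
            labels.extend(tags)
    return labels

def tally_techniques(problems):
    # Aggregate labels across all problems and rank by frequency.
    counts = Counter()
    for p in problems:
        counts.update(classify_problem(p))
    return counts.most_common()

problems = [
    "Find the shortest path between two nodes.",
    "Longest substring without repeating characters.",
    "Find the k-th largest element.",
    "Range sum queries via prefix sums.",
    "Shortest path in a weighted grid.",
]
print(tally_techniques(problems))
```

Running this over thousands of real problems (with the stub replaced by actual model calls) would give the ranked list of techniques described above.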
An alternative framing that might be useful: what do you see as the main bottleneck preventing people from having better predictions of timelines?
Do you in fact think that having such a list is that bottleneck?
What are you worried about CAIS doing?
What we need to find, for a given agent to be constrained by being a 'utility maximiser' is to consider it as having a member of a class of utility functions where the actions that are available to it systematically alter the expected utility available to it - for all utility functions within this class.
This sentence is extremely difficult for me to parse. Any chance you could clarify it?
In most situations, were these preferences over my store of dollars for example, this would seem to be outside the class of utility functions that would meaningfully constrain my action, since this function is not at all smooth over the resource in question.
Could you explain why smoothness is typically required for meaningfully constraining our actions?
I'm very pleased to see that the LessWrong team is thinking about these kinds of topics. I just wanted to add a few more thoughts on this topic myself.
I suspect that one important aspect of creating a new paradigm is characterising the previous paradigm and its underlying assumptions. Often once these assumptions are stated out loud, it becomes clearer where they might break down.
Another important aspect of allowing a new paradigm to form is having a space where it can form. This can often be quite difficult, as many people may be mostly happy with the existing spaces that work within the paradigm, or at least not unhappy enough to want to join something new. There's also the problem that people who disagree with the paradigm might want to take it in all kinds of different directions, preventing any one of them from building critical mass. When an existing paradigm has many possible issues that you could focus on, there's something of an art in carving off an area that contains a group of sufficiently important and compelling differences, while also having enough coherence that you can explain the value of what you're doing to other people without them being confused.