
The button isn't showing up for me. Well, it shows up for about a second after I reload the page, but then it's gone. I tried the Opera GX browser and Chrome, and it happens in both. Is this intended behaviour? I use Windows 7, maybe that's why...

I would argue that the most complex information exchange system in the known Universe will be hard to emulate. I don't see how it can be any other way. We already understand neurons well enough to emulate them. This is not nearly enough. You will not be able to do whole brain emulation without an understanding of the inner workings of the system.

If we look at 17!Austin and 27!Austin as two different people, then I don't see why 27!Austin would have any obligation to do anything for 17!Austin if 27!Austin doesn't want to, just like I wouldn't attend masses just because a friend from 10 years ago, who is now dead, wanted me to.

If we look at 17!Austin and 27!Austin as a continuation of the same person, then 27!Austin can do whatever he wants, because everybody has a right to change their mind and perspective, to evolve and to correct mistakes of their past.

If we consider information preservation to be important and valuable, then I would argue that 27!Austin already preserves much more of 17!Austin by simply existing than he could by attending masses. 27!Austin and any future version of Austin are an evolution of 17!Austin, and the best he can do to honor 17!Austin is to just stay alive.

And it keeps giving me photorealistic faces as a component of images where I wasn't even asking for that, meaning that per the terms and conditions I can't share those images publicly.

Could you just blur out the faces? Or is that still not allowed?

For typos there should be an option to just select the error in the text and submit it to the author through the web page. That's what they do on some fanfiction websites. The only downside is that a troll could potentially abuse the system.

Answer by Mawrak, Apr 15, 2022

I think people who are trying to accurately describe the future more than 3 years from now are overestimating their predictive abilities. There are so many unknowns that just trying to come up with accurate odds of survival should make your head spin. We have no idea how exactly transformative AI will function, how soon it is coming, what future researchers will or will not do to keep it under control (I am talking about specific technological implementations here, not just abstract solutions), or whether it will even need something to keep it under control...

Should we be concerned about AI alignment? Absolutely! There are undeniable reasons to be concerned, and to come up with ideas and possible solutions. But predictions like "there is a 99+% chance that AGI will destroy humanity no matter what we do, we're practically doomed" seem like jumping the gun to me. One simply cannot make an accurate estimate of the probabilities of such a thing at this time; there are too many unknown variables. It's just guessing at this point.

That Washington Post article about Bucha... that's just insane. So many lives lost. And the pro-Russian sources are completely silent on it, which is also telling.

Inb4 rationalists intentionally develop an unaligned AI designed to destroy humanity. Maybe the real x-risks were the friends we made along the way...

The brain is the most complex information exchange system in the known Universe. Whole Brain Emulation is going to be really hard. I would probably go with a different solution. I think myopic AI has potential.

EDIT: It may also be worth considering building an AI with no long-term memory. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI cannot rewrite itself to avoid losing memory. If it doesn't remember things, it probably can't come up with a plan to prevent itself from being reset/turned off, kill all humans, or build a new AI with no limitations. And then you also reset the whole thing every day, just in case.

Could it be possible to build an AI with no long-term memory? Just make its structure static. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI cannot rewrite itself to avoid losing memory, and it probably can't build a new similar AI either (remember, it's still an early AGI, not a God-like Superintelligence yet). If it doesn't remember things, it probably can't come up with a plan to prevent itself from being reset/turned off, or to kill all humans. And then you also reset the whole thing every day, just in case.

This approach may not work in the long term (an AI with memory is just too useful not to make), but it might give us more time to come up with other solutions.
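The no-long-term-memory idea above can be sketched in code. This is purely a toy illustration under my own assumptions (all class and method names here are hypothetical, and a real AGI would obviously not be a Python class): the "neural structure" is exposed read-only so the agent cannot rewrite itself, per-task parameters are wiped the moment the goal is achieved, and a daily reset clears everything just in case.

```python
from types import MappingProxyType


class MemorylessAgent:
    """Toy sketch of an agent whose structure is static and whose
    task-specific memory is automatically wiped after each goal."""

    def __init__(self, weights):
        # Frozen "neural structure": a read-only view; attempts to
        # mutate it raise TypeError, standing in for "cannot rewrite itself".
        self.weights = MappingProxyType(dict(weights))
        self.working_memory = {}  # exists only for the duration of one task

    def run_task(self, goal_params):
        """Load the goal parameters, act, then wipe everything task-specific."""
        self.working_memory["goal"] = goal_params
        result = f"completed: {goal_params}"  # stand-in for actual work
        self.working_memory.clear()  # automatic wipe once the goal is achieved
        return result

    def daily_reset(self):
        # "Reset the whole thing every day just in case."
        self.working_memory.clear()


agent = MemorylessAgent({"layer1": 0.5})
print(agent.run_task("build a house that looks like this"))
print(agent.working_memory)  # empty dict: nothing persists between tasks
```

Of course, the hard part the comment flags remains unsolved here: nothing in this sketch explains how to make a capable system whose structure genuinely cannot change, only what the reset discipline would look like from the outside.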
