Mawrak
Mawrak has not written any posts yet.

If you can recreate even 1% of his consciousness with this kind of data I would be surprised.
The button isn't showing up for me. Well, it shows up for about a second after I reload the page, but then it's gone. I tried the Opera GX browser and Chrome, and it happens in both. Is this intended behaviour? I use Windows 7, maybe that's why...
I would argue that the most complex information-exchange system in the known Universe will be hard to emulate. I don't see how it can be any other way. We already understand neurons well enough to emulate them individually. This is not nearly enough. You will not be able to do whole brain emulation without understanding the inner workings of the system as a whole.
If we look at 17!Austin and 27!Austin as two different people, then I don't see why 27!Austin would have any obligation to do anything for 17!Austin if 27!Austin doesn't want to, just like I wouldn't attend masses just because a friend from 10 years ago, who is now dead, wanted me to.
If we look at 17!Austin and 27!Austin as a continuation of the same person, then 27!Austin can do whatever he wants, because everybody has a right to change their mind and perspective, to evolve, and to correct the mistakes of their past.
If we consider information preservation to be important and valuable, then I would argue that 27!Austin already keeps much more of 17!Austin by simply existing than he could by attending masses. 27!Austin and any future version of Austin is an evolution of 17!Austin, and the best he can do to honor 17!Austin is to just stay alive.
And it keeps giving me photorealistic faces as a component of images where I wasn't even asking for that, meaning that per the terms and conditions I can't share those images publicly.
Could you just blur out the faces? Or is that still not allowed?
For typos there should be an option to just select the error in the text and submit it to the author through the web page. That's what they do on some fanfiction websites. The only downside is that a troll could potentially abuse the system.
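A minimal sketch of how such a select-and-report feature might work in the browser. The endpoint `/api/report-typo` and the payload shape are hypothetical assumptions for illustration, not any site's real API:

```javascript
// Build the report payload from the reader's text selection.
// Kept as a pure function so it can be tested outside the browser.
function buildTypoReport(selectedText, pageUrl) {
  return {
    selection: selectedText.trim(),
    page: pageUrl,
    reportedAt: new Date().toISOString(),
  };
}

// Browser wiring (hypothetical endpoint; a real site would also need
// rate limiting or moderation to blunt the troll abuse mentioned above):
//
// document.addEventListener("mouseup", () => {
//   const text = window.getSelection().toString();
//   if (!text) return;
//   fetch("/api/report-typo", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(buildTypoReport(text, location.href)),
//   });
// });
```

Sending only the selected text plus the page URL keeps the report small while still letting the author locate the typo.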
I think people who are trying to accurately describe the future that will happen more than 3 years from now are overestimating their predictive abilities. There are so many unknowns that just trying to come up with accurate odds of survival should make your head spin. We have no idea how exactly transformative AI will function, how soon it is coming, what future researchers will or will not do to keep it under control (I am talking about specific technological implementations here, not just abstract solutions), or whether it will even need something to keep it under control...
Should we be concerned about AI alignment? Absolutely! There are undeniable reasons to... (read more)
That Washington Post article about Bucha... that's just insane. So many lives lost. And the pro-Russian sources are completely silent on this, which is also telling.
Inb4 rationalists intentionally develop an unaligned AI designed to destroy humanity. Maybe the real x-risks were the friends we made along the way...
Just looking at the list of "subtle cases of unwholesomeness" makes me not want to adopt the model of wholesomeness in my behaviour. All of these things, except the second one, seem reasonable to me: not just "sometimes", but as a standing set of available actions. The model of wholesomeness feels very restrictive and ineffective. I'm not sure I understand why wholesomeness should be implemented when we have other common ideologies of morality that would condemn all of the things on the first list (the list of extreme cases) as "bad" (and I think those should be considered bad) without restricting us the way this system would. The text mentions that sometimes unwholesomeness should be... (read more)