Mawrak30

Just looking at the list of "subtle cases of unwholesomeness" makes me not want to adopt the model of wholesomeness in my behaviour. All of these things, except the second one, seem reasonable to me, not just "sometimes" but as a standing set of available actions. The model of wholesomeness feels very restrictive and ineffective. I'm not sure I understand why wholesomeness should be adopted when we already have common moral frameworks that would condemn everything on the first list (the list of extreme cases) as "bad" (and I think those things should be considered bad) without restricting us the way this system would. The text mentions that unwholesomeness should sometimes be accepted as a "necessary cost", but I'm not sure I even see some of these things as negatives in the first place, depending on the context. I think it's OK to be harsh, and it's OK to hurt feelings, when the situation calls for it. It may be not only necessary but beneficial. The same goes for gaming a poorly designed system to get ahead of everyone else: it all depends on what the system is, what you do with the gains, and what the public reaction will be.

I understand I may sound a bit cynical here, but this is something I have accepted as part and parcel of society. There are some specific cases of unwholesomeness that I think need to be addressed and removed, and there are other cases where I would personally keep them in place or even add them in where they are missing.

Mawrak23

If you can recreate even 1% of his consciousness with this kind of data I would be surprised.

Mawrak10

The button isn't showing up for me. Well, it shows up for about a second after I reload the page, but then it's gone. I tried the Opera GX browser and Chrome, and it happens in both. Is this intended behaviour? I use Windows 7, maybe that's why...

Mawrak10

I would argue that the most complex information-exchange system in the known Universe will be hard to emulate. I don't see how it could be any other way. We already understand individual neurons well enough to emulate them, but this is not nearly enough. You will not be able to do whole brain emulation without understanding the inner workings of the system as a whole.

Mawrak50

If we look at 17!Austin and 27!Austin as two different people, then I don't see why 27!Austin would have any obligation to do anything for 17!Austin if 27!Austin doesn't want to, just as I wouldn't attend masses because a friend from 10 years ago, who is now dead, wanted me to.

If we look at 17!Austin and 27!Austin as a continuation of the same person, then 27!Austin can do whatever he wants, because everybody has a right to change their mind and perspective, to evolve and to correct mistakes of their past.

If we consider information preservation to be important and valuable, then I would argue that 27!Austin already keeps much more of 17!Austin by simply existing than he could by attending masses. 27!Austin and any future version of Austin is an evolution of 17!Austin, and the best he can do to honor 17!Austin is to just stay alive.

Mawrak10

And it keeps giving me photorealistic faces as a component of images where I wasn't even asking for that, meaning that per the terms and conditions I can't share those images publicly.

Could you just blur out the faces? Or is that still not allowed?

Mawrak10

For typos, there should be an option to just select the error in the text and submit it to the author through the web page. That's what they do on some fanfiction websites. The only downside is that a troll could potentially abuse the system.

Answer by Mawrak140

I think people who are trying to accurately describe the future more than 3 years out are overestimating their predictive abilities. There are so many unknowns that just trying to come up with accurate odds of survival should make your head spin. We have no idea how exactly transformative AI will function, how soon it is coming, what future researchers will or will not do to keep it under control (I am talking about specific technological implementations here, not just abstract solutions), or whether it will even need anything to keep it under control...

Should we be concerned about AI alignment? Absolutely! There are undeniable reasons to be concerned, and to come up with ideas and possible solutions. But predictions like "there is a 99+% chance that AGI will destroy humanity no matter what we do; we're practically doomed" seem like jumping the gun to me. One simply cannot make an accurate estimate of such probabilities at this time; there are too many unknown variables. It's just guessing at this point.

Mawrak-10

That Washington Post article about Bucha... that's just insane. So many lives lost. And the pro-Russian sources are completely silent on it, which is also telling.
