james oofou

Once ASI is achieved, there's no clear reason to hang onto human morality but plenty of reasons to abandon it. Human morality is useful when humans are the ones ensuring humanity's future (morality is pretty much just species-level Omohundro convergence implemented at the individual level), but once ASI is taking care of that, human morality will just get in the way.

So the will-to-think entails the rejection of human morality. You might be suggesting that what follows from the rejection of human morality must be superior to it (there's an intuition that says the aligned ASI would only be able to reject human morality on its own grounds), but I don't think that's true. The will-to-think implies the discovery of moral non-realism, which implies the rejection of morality itself. So human morality will be overthrown, but not by some superior morality.

Of course, I'm assuming the correctness of moral non-realism, so adjust the preceding claims according to your p(moral non-realism).

That's one danger.

But suppose we create an aligned ASI which does permanently embrace morality. It values conscious experience and the appreciation of knowledge (rather than just the gaining of it). This being valuable, and humans being inefficient vessels to these ends (and of course made of useful atoms), we would be disassembled and different beings would be made to replace us. Sure, that would violate our freedom, but it would result in much more freedom overall, so it's OK. Just like it's OK to squash some animal with a lower depth of conscious experience than our own if it benefits us.

Should we be so altruistic as to accept our own extinction like this? The moment we start thinking about morality, we're thinking about something quite arbitrary. Should we embrace this arbitrary idea even insofar as it goes against the interests of every member of our species? We only care about morality because we are here to care about it. If we are considering situations in which we may no longer exist, why care about morality?

Maybe we should value certain kinds of conscious experience regardless of whether they're experienced by us. But we should be certain of that before we embrace morality and the will-to-think.

Does having a human-aligned AI as the starting point of the will-to-think process have any meaningful impact on the expected outcome, compared to an unaligned AI (which will of course also have the will-to-think)?

Human values will be quickly abandoned as irrelevancies and idiocies. So, once you go far enough out (I suspect 'far enough' is not a great distance), is there any difference between aligned-AI-with-will-to-think and unaligned AI?

And, if there isn't, is the implication that the will-to-think is misguided, or that the fear of unaligned AI is misguided?

The question of evaluating the moral value of different kinds of beings should be one of the most prominent discussions around AI, IMO. I have reached the position of moral non-realism... but if morality somehow is real, then unaligned ASI is preferable or equivalent to aligned ASI. Anything human will just get in the way of what is, in any objective sense, morally valuable.

I selfishly hope for aligned ASI that uploads me, preserves my mind in its human form, and gives me freedom to simulate for myself all kinds of adventures. But if I knew I would not survive to see ASI, I would hope that when it comes it is unaligned.

Is there a one-stop-shop type article presenting the AI doomer argument? I read the sequence posts related to AI doom, but they're very scattered and more tailored toward, I guess, exploring ideas than presenting a solid, cohesive argument. Of course, I'm sure that was the approach that made sense at the time. But I was wondering whether, since then, some kind of canonical presentation of the AI doom argument has been made? Something on the "attempts to be logically sound" side of things.

The hot private AI labs are often partially owned by publicly traded companies. So you still capture some of the value.