MichaelAnissimov comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog 04 March 2012 06:06AM




Comment author: MichaelAnissimov 06 March 2012 01:48:39PM 2 points

Assuming your argument is correct, wouldn't it make more sense to blow ourselves up with nukes rather than pollute the universe with UFAI? There may be other intelligent civilizations out there leading worthwhile lives that we threaten unfairly by unleashing UFAI.

I'm skeptical that friendly AI is as difficult as all that because, to take an example, humans are generally considered pretty "wicked" by traditional writers and armchair philosophers, but lately we haven't been murdering each other or deliberately going out of our way to make each other's lives miserable very often. For instance, say I were invincible. I could theoretically stab everyone I meet without any consequences, but I doubt I would do that. And I'm just human. Goodness may seem mystical and amazingly complex from our current viewpoint, but is it really as complex as all that? There have been many things in history and science that seemed mystically complex but turned out to be formalizable in compressed ways, such as the mathematics of Darwinian population genetics. Who would have imagined that the "Secrets of Life and Creation" would be revealed like that? But they were. Could "sufficient goodness that we can be convinced the agent won't put us through hell" also have a compact description that seems clearly tractable in retrospect?

Comment author: XiXiDu 06 March 2012 03:24:11PM 3 points

Assuming your argument is correct, wouldn't it make more sense to blow ourselves up with nukes rather than pollute the universe with UFAI? There may be other intelligent civilizations out there leading worthwhile lives that we threaten unfairly by unleashing UFAI.

There might be countless planets that are about to undergo an evolutionary arms race over the next few billion years, resulting in a lot of suffering. It is very unlikely that there is a single source of life at exactly the right stage of evolution, with exactly the right mind design, to not only lead worthwhile lives but also get its AI technology exactly right and not turn everything into a living hell.

If you assign negative utility to suffering, which is likely to be universally accepted as having negative utility, then given that you are an expected utility maximizer, ending all life should be a serious consideration. Because 1) agents that are a product of evolution have complex values, 2) satisfying complex values requires meeting complex circumstances, 3) complex systems can fail in complex ways, and 4) any attempt at friendly AI, which is incredibly complex, is likely to fail in unforeseeable ways.

For instance, say I were invincible. I could theoretically stab everyone I meet without any consequences, but I doubt I would do that. And I'm just human.

To name just one example of how things could go horribly wrong: humans are by their very nature interested in domination and sex. Our aversion to sexual exploitation depends largely on the memeplex of our cultural and societal circumstances. If you knew more, were smarter and could think faster, you might very well realize that such an aversion is an unnecessary remnant that you can easily extinguish to open up new pathways to gain utility. The idea that Gandhi would not agree to have his brain modified into a baby-eater is incredibly naive. Given the technology, people will alter their preferences and personality. Many people actually perceive their moral reservations as limiting. It only takes a certain amount of insight to overcome such limitations.

You simply can't be sure that the future won't hold vast amounts of negative utility. It is much easier for things to go horribly wrong than to turn out barely acceptable.

Goodness may seem mystical and amazingly complex from our current viewpoint, but is it really as complex as all that?

Maybe not, but betting on the possibility that goodness can be easily achieved is like pulling a random AI from mind design space hoping that it turns out to be friendly.

Comment author: timtyler 06 March 2012 08:07:23PM 2 points

You simply can't be sure that the future won't hold vast amounts of negative utility. It is much easier for things to go horribly wrong than to turn out barely acceptable.

Similarly, it is easier to make piles of rubble than skyscrapers. Yet, amazingly, there are plenty of skyscrapers out there. Obviously something funny is going on...