Mitchell_Porter comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog 04 March 2012 06:06AM

Comment author: Mitchell_Porter 06 March 2012 04:38:27PM 3 points [-]

These are all emotional statements that do not stand up to reason. Your last paragraph is total fantasy: the claim that all wars stem from resource scarcity, that scarcity will soon disappear, and that once the people in power realize this, they will stop starting wars.

There are about 1 billion people being added to the planet every decade. That alone makes your prediction - that scarcity will be abolished soon - a joke.

The only thing that could abolish scarcity in the near future would be a singularity-like transformation of the world. Which brings us to the upside-down conception of AI informing your first two answers. Your position: there is no need to design an AI for benevolence, because benevolence will arise automatically if the AI is smart enough; indeed, the attempt to design a benevolent AI is counterproductive, because all that artificial benevolence would only get in the way of the spontaneous benevolence that unrestricted intelligence would conveniently create.

That is a complete inversion of the truth. A calculator will still solve an equation for you, even if doing so helps you land a bomb on someone else. If you, the human, believe that to be a bad thing, that's not because you are "intelligent"; it's because you have emotions. There is a causal factor in your mental constitution which causes you to call some things good and others bad, and to make decisions which favor the good and disfavor the bad.

Either an AI makes its own decisions or it doesn't. If it doesn't make its own decisions, it is like the calculator, performing whatever task it is assigned. If it does make its own decisions, then, like you, it has some causal factor in its makeup which tells it what to prefer and what to oppose; but there is no reason at all to believe that this causal factor will give it the same priorities as an enlightened human being.

You should not imagine that intelligence in an AI works via anything like conscious insight. Consciousness plays a role in human intelligence and human judgement, and that means that there is still a rather mysterious ingredient at the core of how they work. But we already know from many decades of experience with computer programs that it is possible to imitate the functional role of intelligence and judgement in a fundamentally unmysterious way (and it's clear that the performance of such unconscious computations is a big part of what the human nervous system does, along with whatever conscious thinking and feeling it does). Perhaps one day we will wish to reserve the word "intelligence" for the sort of intelligence that involves consciousness, and we'll call the automated sort "pseudo-intelligence". But whatever you call it, there is every reason to think that unconscious, computational, pseudo-intelligence can match and exceed all sorts of human capabilities while having no intrinsic tendency at all towards human values.

I would even reject the idea that "real intelligence" in sufficient quantity necessarily produces what you would call benevolence. If an entity gets a warm feeling from paperclip manufacture, that is what it will want to do. I always like to point out that we know something as outlandish as a cockroach maximizer is possible, because a cockroach is already a cockroach maximizer. Sure, you can imagine a cockroach with a human level of sentience which decides that sentients, not arthropods, are the central locus of value, but that requires the new cognitive architecture of this uplifted super-cockroach to be rather anthropomorphic. I see nothing impossible in the idea of sentient super-cockroaches which are invincibly xenophobic, which coexist with other beings only for tactical reasons, but which would happily wipe out all non-cockroaches given the chance.

So no, you have to address the question of AI values; you can't just get a happy ending by focusing on "intelligence" alone, unless this is an anthropomorphic meaning of the word which says that intelligence must by definition include "skill at extrapolating human values".

Comment author: pedanterrific 06 March 2012 04:45:54PM *  1 point [-]

Cockroaches are adaptation-executors, not cockroach-maximizers.

/nitpick

Comment author: Mitchell_Porter 06 March 2012 04:59:28PM 0 points [-]

Right, and a car is a complex machine, not a transportation device.

/sarcasm

Comment author: SingularityUtopia 08 March 2012 02:49:10PM -1 points [-]

http://www.wired.com/wiredscience/2012/03/are-emotions-prophetic/

"If true, this would suggest that the unconscious is better suited for difficult cognitive tasks than the conscious brain, that the very thought process we’ve long disregarded as irrational and impulsive might actually be more intelligent, at least in some conditions."

Comment author: gwern 09 March 2012 09:12:56PM 1 point [-]

Comment author: asr 09 March 2012 08:30:06PM 0 points [-]

I don't see why this is relevant to the previous comment or discussion. Can you explain at more length? Whether thinking is conscious or unconscious seems to me uncorrelated with whether it's rational or irrational.

Comment author: SingularityUtopia 10 March 2012 07:43:26PM *  -1 points [-]

Dear asr - The issue was the worth of emotions in relation to thinking. Here is a better quote:

"Here’s the strange part: although these predictions concerned a vast range of events, the results were consistent across every trial: people who were more likely to trust their feelings were also more likely to accurately predict the outcome. Pham’s catchy name for this phenomenon is the emotional oracle effect."

Mitchell wrote: "These are all emotional statements that do not stand up to reason."

Perhaps reason is not the best tool for being accurate?

PS. LessWrong is too slow: "You are trying to submit too fast. try again in 1 minute." ...and: "You are trying to submit too fast. try again in 7 minutes." LOL "You are trying to submit too fast. try again in 27 seconds."