seanwelsh77 comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong

27 Post author: orthonormal 01 April 2013 04:19PM




Comment author: seanwelsh77 30 April 2013 10:16:20PM -2 points [-]

In my experience homo sapiens does not come 'out of a box.' Are you a MacBook Pro? :-)

But seriously, I have seen some interestingly flawed 'decision-making systems' in Psych Wards. And I think Reason (whatever it is taught to be) matters. Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don't think Reason alone (however you construe it) is up to the job of friendly AI.

Of course, bringing Emotion into ethics has issues. Who is to say whose Emotions are 'valid' or 'correct'?

Comment author: John_D 01 May 2013 05:59:34PM *  4 points [-]

"Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together."

That statement is too strong. I can think of several instances where certain emotions, especially negative ones, can impair decision making. It is reasonable to assume that impaired decision making can extend into making ethical decisions.

The first page of the paper linked below provides a good summary of when emotions, and what emotions, can be helpful or harmful in making decisions. I do acknowledge that some emotions can be helpful in certain situations. Perhaps you should modify your statement.

http://www.cognitive-neuroscience.ro/pdf/11.%20Anxiety%20impairs%20decision-making.pdf

Comment author: Juno_Watt 01 May 2013 11:49:40AM 2 points [-]

A thousand sci-fi authors would agree with you that AIs are not going to have emotions. One prominent AI researcher will disagree.

Comment author: MugaSofer 01 May 2013 05:28:16PM -1 points [-]

Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don't think Reason alone (however you construe it) is up to the job of friendly AI.

Certainly, our desires are emotional in nature; "reason" is merely how we achieve them. But wouldn't it be better to have a Superintelligent AI deduce our emotions itself, rather than programming it in ourselves? Introspection is hard.

Of course, bringing Emotion into ethics has issues. Who is to say whose Emotions are 'valid' or 'correct'?

Have you read the Metaethics Sequence? It's pretty good at this sort of question.

Comment author: Juno_Watt 01 May 2013 05:49:21PM 0 points [-]

But wouldn't it be better to have a Superintelligent AI deduce our emotions itself, rather than programming it in ourselves?

Would it be easier?

Introspection is hard.

Especially about other people.

Comment author: MugaSofer 01 May 2013 07:07:12PM -1 points [-]

Would it be easier?

Well, if you can build the damn thing, it should be better equipped than we are, being superintelligent and all.

Comment author: Juno_Watt 01 May 2013 08:31:04PM 0 points [-]

Having only the disadvantages of no emotions itself, and an outside view...

...but if we build an Intelligence based on the only template we have, our own, it's likely to be emotional. That seems to be the easy way.

Comment author: MugaSofer 01 May 2013 10:21:10PM -2 points [-]

That's why I specified superintelligent; a human-level mind would fail hilariously. On the other hand, we are human minds ourselves; if we want to program our emotional values into an AI, we'll need to understand them using our own rationality, which is sadly lacking, I fear.

Comment author: Juno_Watt 01 May 2013 10:57:51PM 1 point [-]

That seems to imply we understand our rationality...

Comment author: seanwelsh77 09 May 2013 10:42:39AM 0 points [-]

More research...

Gerd Gigerenzer's views on heuristics in moral decision making are very interesting though.

Comment author: MugaSofer 12 May 2013 10:01:12PM -2 points [-]

Hah. Well, yes. I don't exactly have a working AI in my pocket, even an unFriendly one.

I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they're both out of my grasp right now.

There's some good stuff on this floating around this site; try searching for "complexity of value" to start off. There are likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.