MugaSofer comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong

27 Post author: orthonormal 01 April 2013 04:19PM




Comment author: Juno_Watt 01 May 2013 08:31:04PM 0 points

Having only the disadvantages of no emotions itself, and an outside view...

...but if we build an Intelligence based on the only template we have, our own, it's likely to be emotional. That seems to be the easy way.

Comment author: MugaSofer 01 May 2013 10:21:10PM -2 points

That's why I specified superintelligent; a human-level mind would fail hilariously. On the other hand, we are human minds ourselves; if we want to program our emotional values into an AI, we'll need to understand them using our own rationality, which is sadly lacking, I fear.

Comment author: Juno_Watt 01 May 2013 10:57:51PM 1 point

That seems to imply we understand our rationality...

Comment author: seanwelsh77 09 May 2013 10:42:39AM 0 points

More research...

Gerd Gigerenzer's views on heuristics in moral decision-making are very interesting, though.

Comment author: MugaSofer 12 May 2013 10:01:12PM -2 points

Hah. Well, yes. I don't exactly have a working AI in my pocket, even an unFriendly one.

I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they're both out of my grasp right now.

There's some good stuff on this floating around this site; try searching for "complexity of value" to start off. There are likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.