David_Gerard comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong

Post author: lukeprog 04 March 2012 06:06AM

Comment author: David_Gerard 04 March 2012 02:36:45PM  3 points

Has anyone constructed even a vaguely plausible outline, let alone a definition, of what would constitute a "human-friendly intelligence", defined in terms other than effects you don't want it to have? As you note, humans aren't human-friendly intelligences, or we wouldn't have internal existential risk.

The CEV proposal seems to attempt to move the hard part to technological magic (a superintelligence scanning human brains and working out a solution to human desires that is possible, is coherent, and won't destroy us all); this is saying "then a miracle occurs" in more words.

Comment author: John_Maxwell_IV 04 March 2012 11:51:25PM  2 points

> As you note, humans aren't human-friendly intelligences, or we wouldn't have internal existential risk.

It's possible that particular humans might approximate human-friendly intelligences.

Comment author: David_Gerard 05 March 2012 08:02:55AM  -1 points

Assuming it's not impossible, how would you know? What constitutes a human-friendly intelligence, in other than negative terms?

Comment author: timtyler 05 March 2012 08:01:45PM  -2 points

> Has anyone constructed even a vaguely plausible outline, let alone a definition, of what would constitute a "human-friendly intelligence", defined in terms other than effects you don't want it to have?

Er, that's how it is defined, at least by Yudkowsky. You want to argue definitions? Without even offering one of your own? How will that help?