thomblake comments on Welcome to Heaven - Less Wrong

23 Post author: denisbider 25 January 2010 11:22PM




Comment author: thomblake 29 January 2010 02:00:23PM 0 points

Starting with the assumption of utilitarianism, I believe you're correct. I think the folks working on this stuff assign a low probability to "kill all humans" being Friendly. But I'm pretty sure people aren't supposed to speculate about the output of CEV.

Comment author: RobertWiblin 29 January 2010 05:24:58PM 1 point

Probably the proportion of 'kill all humans' AIs that are friendly is low. But perhaps the proportion of FAIs that 'kill all humans' is large.

Comment author: gregconen 30 January 2010 03:17:42AM 0 points

That depends on your definition of Friendly, which in turn depends on your values.

Comment author: Vladimir_Nesov 29 January 2010 11:53:04PM 0 points

But perhaps the proportion of FAIs that 'kill all humans' is large.

Maybe the probability you estimate for that outcome is high, but "proportion" doesn't make sense here, since an FAI is defined as an agent acting on a specific preference, so FAIs have to agree on what to do.

Comment author: RobertWiblin 30 January 2010 04:07:38AM 0 points

OK, I'm new to this.