Nick_Tarleton comments on Is Kiryas Joel an Unhappy Place? - Less Wrong

Post author: gwern, 23 April 2011 12:08AM


Comment author: Nick_Tarleton 23 April 2011 06:34:32PM 15 points

The likely outcome of a Malthusian/Darwinian upload scenario isn't many near-subsistence human-like lives; it's something seriously inhuman and probably valueless. The analogy is incredibly weak.

Comment author: bokov 26 September 2013 03:26:42PM 2 points

You know, his scenario of erasing humanity as a byproduct of an optimization process indifferent to human values amounts to the unfriendly AI scenarios we discuss, minus the requirement that the optimization process be sentient.

I wonder if the following is a valid generalization of the specific problem that motivates the MIRI folks:

Our ability to scale up and speed up achievement of goals has outpaced or will soon outpace our ability to find goals that we won't regret.

Comment author: torekp 23 April 2011 07:33:22PM 1 point

Thanks for the link to that Nick Bostrom paper. It's the best writing I've yet seen on the posthuman prospect.

Comment author: bokov 26 September 2013 03:31:08PM 0 points

Or, more succinctly: if we don't solve coherent extrapolated volition, we are screwed regardless of whether Kruel or Yudkowsky is right about the specific threat of unfriendly AI.