bokov comments on Is Kiryas Joel an Unhappy Place? - Less Wrong
The likely outcome of a Malthusian/Darwinian upload scenario isn't many near-subsistence human-like lives; it's something seriously inhuman and probably valueless. The analogy is incredibly weak.
You know, his scenario of erasing humanity as a byproduct of an optimization process indifferent to human values amounts to the unfriendly AI scenarios we discuss, minus the requirement that the optimization process be sentient.
I wonder if the following is a valid generalization of the specific problem that motivates the MIRI folks:
Our ability to scale up and speed up the achievement of goals has outpaced, or will soon outpace, our ability to find goals that we won't regret.