RedMan comments on Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering - Less Wrong Discussion

Post author: Kaj_Sotala 03 January 2018 02:39PM

Comments (6)

Comment author: RedMan 08 January 2018 02:10:42AM

An ethical injunction doesn't work for me in this context: killing can be justified by many baser motives than 'preventing infinite suffering'.

So, instead of a blender, I could sell hats fitted with tiny brain-pulping shaped charges, to be remotely detonated when mind uploading is proven possible, or when the wearer dies of some other cause. As long as my marketing reaches some percentage of the people who might plausibly be interested, I've done my part.

I assess that the number is small, and that anyone seriously interested in such a device likely reads lesswrong and may be capable of making some arrangement for brain destruction themselves. So, by making this post and encouraging a potential upload to pulp themselves prior to upload, I have some >0 probability of preventing infinite suffering.

I'm pretty effectively altruistic, dang. It's not even February.

I prefer your Borg scenarios to individualized uploading. I feel like it's technically feasible using extant technology, but I'm not sure how much interest there really is in mechanical telepathy.