Konkvistador comments on Should humanity give birth to a galactic civilization? - Less Wrong

-6 [deleted] 17 August 2010 01:07PM


Comment author: [deleted] 17 August 2010 01:34:30PM *  0 points [-]

First a few minor things I would like to get out there:

We are, according to consensus which I do not dispute (since it's well founded), slowly approaching heat death. If I recall correctly we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn through a few cycles of a simulated universe?

I don't quite see the difference between real and simulated torture in the context of a civilization as advanced as the one you are arguing against letting develop. So I'm not sure what you are getting at by mentioning them as separate things.

You need to read up on fun theory. And if you disregard it, let me just point out that worrying about people not having fun is a different concern from assuming they will experience mental anguish at the prospect of suicide or an inevitable death. Actually, not having fun can be neatly solved by suicide once you exhaust all other options, as long as you aren't built to find committing to it stressful.

Now assuming your overall argument has merit: my value function says it's better to have loved and lost than not to have loved at all.

Humans may have radically different values once they are scaled up. Unless you get your finger into the first AI's values first, there will always be a nonzero fraction of agents who would wish to carry on even knowing it will increase total suffering, because they feel their values are worth suffering for. I am basically talking about practicality now: so what if you are right? The only way to do anything about it is to make sure your AI eliminates anything, be it human or alien AI, that can paperclip anything like beings capable of suffering. To do this properly in the long run (not just kill or sterilize all humans, which is easy), you need to understand Friendliness much better than we do now.

If you want to learn about Friendliness, you had better try to learn to deceive agents with whom you might be able to work together to figure out more about it, especially concerning your motives. ;)

Comment author: humpolec 17 August 2010 01:53:06PM *  1 point [-]

We are, according to consensus which I do not dispute (since it's well founded), slowly approaching heat death. If I recall correctly we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn through a few cycles of a simulated universe?

Dyson's eternal intelligence. Unfortunately I know next to nothing about physics, so I have no idea how this relates to what we know about the universe.
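The core of Dyson's argument can be sketched numerically: if each successive "wake-up" spends a geometrically shrinking energy budget, while the cost per operation also falls as the universe cools (a Landauer-style cost proportional to temperature), then total energy spent converges even though cumulative computation grows without bound. The constants below are purely illustrative assumptions, not real physics.

```python
# Toy model of Dyson's eternal-intelligence argument (illustrative
# constants, not physics). In cycle k the machine wakes, spends energy
# E0 * r**k (a convergent geometric series), and the effective
# temperature has fallen to T0 * r**k, so each operation costs less.
# Operations per cycle = energy / cost = E0 / T0, a constant -- so
# total operations grow without bound while total energy stays finite.

E0 = 1.0    # energy budget of the first cycle (arbitrary units)
T0 = 1e-3   # effective temperature at the first cycle (arbitrary units)
r = 0.5     # both energy budget and temperature halve each cycle

total_energy = 0.0
total_ops = 0.0
for k in range(100):
    energy_k = E0 * r**k          # energy spent this cycle
    cost_per_op = T0 * r**k       # Landauer-style cost per operation
    total_energy += energy_k
    total_ops += energy_k / cost_per_op

print(total_energy)  # approaches E0 / (1 - r), i.e. finite
print(total_ops)     # grows linearly with cycle count: unbounded
```

Whether the real universe permits this (stable protons, usable free energy at late times, a cosmological horizon that doesn't cut the series off) is exactly the physics question at issue.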

Comment author: Baughn 18 August 2010 03:55:18PM *  1 point [-]

It runs into edge conditions we know little about, like whether protons are stable. (The answer appears to be no, by the way.)

At this point in time I would not expect infinite computation to be possible in the future. The future has a way of surprising us, though; I'd prefer to wait and see.