First, a few minor things I would like to get out there:
According to the consensus, which I do not dispute (since it is well founded), we are slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn a few cycles of a simulated universe?
I don't quite see the difference between real and simulated torture in the context of a civilization as advanced as the one you are arguing we shouldn't let develop. So I'm not sure what you are getting at by mentioning them as separate things.
You need to read up on fun theory. And if you disregard it, let me just point out that worrying about people not having fun is a different concern from assuming they will experience mental anguish at the prospect of suicide or an inevitable death. Actually, not having fun can be neatly solved by suicide, if you exhaust all other options, as long as you aren't built to find committing to it stressful.
Now, assuming your overall argument has merit: my value function says it's better to have loved and lost than not to have loved at all.
Humans may have radically different values once they are blown up to scale. Unless you get your finger into the first AI's values, there will always be a nonzero fraction of agents who would wish to carry on even knowing it will increase total suffering, because they feel their values are worth suffering for. I am basically talking about practicality now: so what if you are right? The only way to do anything about it is to make sure your AI eliminates anything, be it human or alien AI, that can paperclip anything into beings capable of suffering. To do this properly in the long run (not just kill or sterilize all humans, which is easy), you need to understand friendliness much better than we do now.
If you want to learn about friendliness, you'd better try to learn to deceive the agents with whom you might be able to work together to figure out more about it, especially concerning your motives. ;)
I don't quite see the difference between real and simulated torture...
I tried to highlight the increased period of time you have to take into account. This allows for even more suffering than the already huge time span suggests from a human perspective.
You need to read up on fun theory.
Indeed, but I felt this additional post was required, as many people were questioning this point in the other post. Also, I came across a post by a physicist which triggered this one. I simply doubt that the sequence you mention has resolved this issue. But I ...
Followup to: Should I believe what the SIAI claims? (Point 4: Is it worth it?)
Imagine humanity succeeding. Spreading out into the galaxy and beyond. Trillions of entities...
Then, I wonder, what happens in the end? Imagine if our dreams of a galactic civilization come true. Will we face unimaginable war over resources and torture as all this beauty faces its inevitable annihilation, as the universe approaches absolute zero temperature?
What does this mean? Imagine how many more entities of so much greater consciousness and intellect will be alive in 10^20 years. If they are doomed to face that end or commit suicide, how much better would it be to face extinction now? That is, would the amount of happiness until then balance the amount of suffering to be expected at the beginning of the end? If we succeed in pollinating the universe, is the overall result ethically justifiable? Or might it be ethical to abandon the idea of reaching out to the stars?
The question is, is it worth it? Is it ethical? Should we worry about the possibility that we'll never make it to the stars? Or should we rather worry about the prospect that trillions of our distant descendants may face, namely unimaginable misery?
And while pondering the question of overall happiness, all things considered, how sure are we that on balance there won't be much more suffering in the endless years to come? Galaxy-spanning wars, real and simulated torture? Things we cannot even imagine now.
One should also consider that it is more likely than not that we'll see the rise of rogue intelligences. It might also be possible that humanity succeeds in creating something close to a friendly AI, which nevertheless fails to completely follow CEV (Coherent Extrapolated Volition). Ultimately, this might lead not to our inevitable extinction but to even more suffering, on our side or that of other entities out there.
Further, though less dramatic: what if we succeed in transcending, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to an age of 1,000 and still have fun? What if, soon after the singularity, we discover that all that is left is endless repetition? If we've learnt all there is to learn, done all there is to do. All games played, all dreams dreamed, what if nothing new under the sky is to be found anymore? And don't we all experience this problem already these days? Have you never thought and felt that you've already seen that movie, read that book, or heard that song before, because they all feature the same plot, the same rhythm?
If it is our responsibility to die so our children can live, for the greater public good, if we are in charge of the upcoming galactic civilization, if we bear a moral responsibility for the entities that will live, why don't we face the same responsibility for the many more entities that will live but suffer? Is it the right thing to do, to live at any cost, to give birth at any price?
What if it is not about "winning" versus "not winning," but about losing versus gaining one possibility among millions that could go horribly wrong?
Isn't even the prospect of a slow torture to death enough to make us consider ending our journey here, a torture that spans a possible period from 10^20 years up to the Dark Era from 10^100 years and beyond? This might be a period of war, suffering, and suicide. It might be the Era of Death, and it might be the lion's share of the future. I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time from 10^20 to 10^100 years, when possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer, and without any hope.
To exemplify this, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to shadows of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.
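This thought experiment can be sketched as a toy simulation. Note that all the numbers here (starting resources, the decay rate, the cutoff) are arbitrary assumptions chosen only for illustration, not claims about actual cosmology:

```python
# Toy model of the resource-decline thought experiment above.
# Starting budget, decay rate, and cutoff are arbitrary assumptions.

def run_decline(entities=100, resources=100.0, decay=0.9, strategy="shrink"):
    """Simulate epochs of a shrinking resource budget.

    strategy "kill":   the FAI removes one entity at a time whenever the
                       budget can no longer sustain everyone at full capacity.
    strategy "shrink": the FAI keeps everyone alive but scales each
                       entity's mental capacity down proportionally.
    Returns a list of (entities_alive, capacity_per_entity) per epoch.
    """
    history = []
    capacity = 1.0  # full mental capacity per entity
    while entities > 0 and capacity > 0.01:
        resources *= decay  # the universe provides ever less each epoch
        if strategy == "kill":
            # Kill entities until the budget covers everyone at full capacity.
            while entities > 0 and resources < entities * 1.0:
                entities -= 1
            capacity = 1.0
        else:
            # Dilute: everyone survives, but as a shadow of their former self.
            capacity = resources / entities
        history.append((entities, round(capacity, 3)))
    return history
```

Under either strategy the end state is the same, which is the point of the example: "kill" ends with zero entities, while "shrink" ends with all 100 alive at a vanishing fraction of their former capacity.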
So what if it is more likely that maximizing utility not only fails, but that the overall utility is in fact minimized, i.e. the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe, but rather risk increasing it, what should we do about it? Might it be better to believe that winning is impossible than that it's likely, if the actual probability is very low?
I hereby ask the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas.
See also
The Fun Theory Sequence
"Should This Be the Last Generation?" By PETER SINGER (thanks timtyler)