Do not attempt a literal interpretation; rather, try to consider the gist of the matter, if possible.
I have a better idea. Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading.
Let's reach the stars first and worry later about how many zillions of years of fun remain. If we eventually run out, then we can abort or wirehead. For now, it seems like the expected awesome of creating an intergalactic posthuman civilization is pretty high.
Even if we create unimaginably many posthumans having unimaginable posthuman fun, and then get into some bitter resource struggles as we approach the heat death of the universe, I think it will have been worth it.
To address its biggest flaw in reasoning: this post is centered on a false dichotomy. If we're at time t=0, and widespread misery occurs at time t=10^10, then solutions other than "discontinue reproducing at t=0" exist. Practical concerns aside (and without setting practical concerns aside, there is no point in even talking about this), the appropriate solution would be to end reproduction at, say, t=10^9.6. The post arbitrarily says "act now, or never" when, practically, we can't really act now, so any later time is equally feasible and otherwise simply better.
Most of these questions are beyond our current ability to answer. We can speculate and counter-speculate, but we don't know. The immediate barrier to understanding is that we do not know what pleasure and pain, happiness and suffering are, the way that we think we know what a star or a galaxy is.
We have a concept of matter. We have a concept of computation. We have a concept of goal-directed computation. So we can imagine a galaxy of machines, acting according to shared or conflicting utility functions, and constrained by competition and the death of the universe. But we do not know how that would or could feel; we don't even know that it needs to feel like anything at all. If we imagine the galaxy populated with people, that raises another problem - the possibility of the known range of human experience, including its worst dimensions, being realized many times over. That is a conundrum in itself. But the biggest unknown concerns the forms of experience, and the quality of life, of "godlike AIs" and other such hypothetical entities.
The present reality of the world is that humanity is reaching out for technological power in a thousand ways and in a thousand places. That...
Updated the post to suit criticism.
Let's see, if I can't write a good post maybe I can tweak one to become good based on feedback.
I applaud this approach (and upvoted this comment), but I think any future posts would be better received if you did more tweaking prior to publishing them.
These are very important questions which deserve to be addressed, and I hope this post isn't downvoted severely. However, at least one subset of them has been addressed already:
Further, although less dramatic, what if we succeed in transcending, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live up until an age of 150 and still have fun?
See the Fun Theory Sequence for discussion.
It is probably premature to ask such questions now. We have no idea what the world will look like in 10^20 years. And when I write no idea, I don't mean that we have several theories from which we have to choose the right one but still can't do so. I mean that (if the human race doesn't go extinct soon and the future doesn't turn out to be boringly the same as the present or the recent history) we can't possibly imagine how the world would function, and even if told, we wouldn't understand. If there are intelligent creatures in 10^20 years, they will certainly have...
We are looking at very long time scales here, so how wide should our scope be? If we use a very wide scope like this, we get issues, but if we widen it still further we might get even more. Suppose the extent of reality were unlimited, and that the scope of effect of an individual action were unlimited, so that if you do something it affects something, which affects something else, which affects something else, and so on, without limit. This doesn't necessarily need infinite time: We might imagine various cosmologies where the scope could be widened in oth...
If this were the case, a true FAI would just kill us more painlessly than we could. And then go out and stop life that evolved in other places from causing such suffering.
But this is nothing compared to the time from 10^20 to 10^100 years, during which possibly trillions of God-like entities will be slowly disabled by an increasing lack of resources. This is comparable to suffering from Alzheimer's, only much worse, much longer, and without any hope.
A different (more likely?) scenario is that the god-like entities will not gradually reduce their resource usage: they'll store up energy reserves, then burn through them as efficiently as possible, then shut down. It will be really sad each time a god-like entity dies, but not ne...
Thanks for posting. Upvoted.
I have always had an uncomfortable feeling whenever I have been asked to include distant-future generations in my utilitarian moral considerations. Intuitively, I draw on my background in economics, and tell myself that the far-distant future should be discounted toward zero weight. But how do I justify the discounting morally? Let me try to sketch an argument.
I will claim that my primary moral responsibility is to the people around me. I also have a lesser responsibility to the next generation, and a responsibility lesser ...
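For concreteness, here is what that discounting looks like numerically. The rate and the time horizons below are illustrative choices of mine, not part of the argument; the only point is that under any constant positive discount rate, welfare at t = 10^10 years contributes essentially nothing to the present-value sum.

```python
import math

# Illustrative only: a constant annual discount rate, chosen arbitrarily.
DISCOUNT_RATE = 0.01  # 1% per year

def discounted_weight(years_from_now: float) -> float:
    """Weight given to one unit of welfare received this far in the future,
    under simple exponential discounting."""
    return math.exp(-DISCOUNT_RATE * years_from_now)

for t in [1, 100, 10_000, 1e10]:
    print(f"t = {t:.0e} years: weight = {discounted_weight(t):.3e}")
```

Whether exponential discounting is the morally right weighting is exactly the question; the arithmetic only shows that the conclusion follows mechanically once the discounting assumption is granted.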
what if it is more likely that maximizing utility not only fails but rather it turns out that the overall utility is minimized
If it does turn out that the overwhelmingly likely future is one of extreme negative utility, voluntary extinction (given some set of assumptions) IS maximizing utility.
Also, if the example really is as tangential as you're implying, it should probably not account for 95% of the text (and the title, and the links) in your post.
I cannot fathom the confusion that would lead to this question. Of course it's better for humanity to survive than to not survive. Of course it's better to go extinct in a million years than to go extinct now. The future is more wondrous and less scary than you imagine.
I think freedom should win when contemplating how to ethically shape the future. I don't have any direct evidence that posthumans in a post-Singularity universe will be "happy" throughout their lives in a way that we value, but you certainly don't have evidence to the contrary, either.
As long as neither of us knows the exact outcome, I think the sensible thing to do is to maximize freedom, by trying to change technology and culture to unburden us and make us more capable. Then the future can decide for itself, instead of relying on you or me to w...
I added to the post:
To exemplify this, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.
What if there isn't even enough fun for normal human beings to live up until an age of 150 and still have fun?
Really?
my intention is to inquire about the perception that it is our moral responsibility to minimize suffering
It's okay for us to cause more entities to exist, for a greater sum of suffering, provided that it's one of the better possible outcomes.
While intervening in a way that (with or without consent) inflicts a certain amount of additional net suffering on others (such as causing them to be created) is to be avoided, all other things e...
What do y'all think about John Smart's thesis that an inward turn is more likely than the traditional script of galactic colonization?
http://www.accelerating.org/articles/answeringfermiparadox.html
Rather wild read, but perhaps worth a thought. Would that alternative trajectory affect your opinion of the prospect, XiXiDu?
Interesting thoughts. I also haven't finished the fun sequence, so this may be malformed. The way I see it is this: You can explore and modify your environment for fun and profit (socializing counts here too), and you can modify your goals to get more fun and profit without changing your knowledge.
Future minds may simply have a "wirehead suicide contingency" they choose to abide by, by which, upon very, very strong evidence that they can have no more fun with their current goals, they could simply wirehead themselves. Plan it so that the value of...
Relevant literature: "Should This Be the Last Generation?" by Peter Singer
I'd like to ask those people who downvote this post for their reasons. I thought this was a reasonable antiprediction to the claims made regarding the value of a future galactic civilisation. Based on economic and scientific evidence, it is reasonable to assume that the better part of the future, namely the time from 10^20 to 10^100 years (and beyond), will be undesirable.
If you spend money and resources on the altruistic effort of trying to give birth to this imaginative galactic civilisation, why don't you take into account the more distant and much lar...
Note that multifoliaterose's recent posts and comments have been highly upvoted: he's gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.
First a few minor things I would like to get out there:
We are, according to a consensus I do not dispute (since it's well founded), slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn a few cycles of a simulated universe?
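Not an answer, but here is the shape of the arithmetic behind that question as I understand it, with made-up numbers. It is nothing more than a geometric series, not a physical model: if each successive wake-up yields a fixed fraction of the previous one's computation, the total subjective computation is finite; the open question is whether any physical scheme could keep that fraction at one or above forever.

```python
# Toy model, not physics: each wake-up of the late-universe machinery yields
# `ratio` times the computation of the previous wake-up.
def total_cycles(first_burst: float, ratio: float, wakeups: int) -> float:
    """Sum the compute cycles harvested over a number of wake-ups."""
    total, burst = 0.0, first_burst
    for _ in range(wakeups):
        total += burst
        burst *= ratio
    return total

print(total_cycles(1.0, 0.9, 10**6))  # converges toward 1 / (1 - 0.9) = 10
print(total_cycles(1.0, 1.0, 10**6))  # grows without bound as wake-ups increase
```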
I don't quite see the difference between real and simulated torture...
On balance I'm not too happy with the history of existence. As Douglas Adams wrote, "In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move." I'd rather not be here myself, so I find the creation of other sentients a morally questionable act. On the other hand, artificial intelligence offers a theoretical way out of this mess. Worries about ennui strike me as deeply misguided. Oppression, frailty, and stupidity make hanging out in this world unpleasant, not any lack of worth...
Followup to: Should I believe what the SIAI claims? (Point 4: Is it worth it?)
Imagine humanity succeeding. Spreading out into the galaxy and beyond. Trillions of entities...
Then, I wonder, what happens in the end? Imagine our dreams of a galactic civilization come true. Will we face unimaginable war over resources, and torture, as all this beauty meets its inevitable annihilation while the universe approaches absolute zero temperature?
What does this mean? Imagine how many more entities, of so much greater consciousness and intellect, will be alive in 10^20 years. If they are doomed to face that end or commit suicide, how much better would it be to face extinction now? That is, would the amount of happiness until then balance the amount of suffering to be expected at the beginning of the end? If we succeed in pollinating the universe, is the overall result ethically justifiable? Or might it be ethical to abandon the idea of reaching out to the stars?
The question is, is it worth it? Is it ethical? Should we worry about the possibility that we'll never make it to the stars? Or should we rather worry about the prospect that trillions of our distant descendants may face, namely unimaginable misery?
And while pondering the question of overall happiness, all things considered, how sure are we that on balance there won't be much more suffering in the endless years to come? Galaxy-spanning wars, real and simulated torture? Things we cannot even imagine now.
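To make that balance question concrete, here is the crudest possible version of the sum, with placeholder numbers. It proves nothing about the actual rates of happiness and suffering; it only shows why the sheer duration of the late era dominates any straight, undiscounted total.

```python
# Back-of-the-envelope version of the balance question. All rates are
# placeholders; the only real input is the difference in duration between
# the flourishing era and the era of decline.
FLOURISHING_YEARS = 1e20         # now until roughly 10^20 years
DECLINE_YEARS = 1e100 - 1e20     # roughly 10^20 to 10^100 years

happiness_per_year = 1.0         # placeholder rate during the flourishing era
suffering_per_year = 1e-70       # even a vanishingly small rate of suffering...

balance = FLOURISHING_YEARS * happiness_per_year - DECLINE_YEARS * suffering_per_year
print(balance)  # ...still leaves the total dominated by the decline era (about -1e30)
```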
One should also consider that it is more likely than not that we'll see the rise of rogue intelligences. It might also be possible that humanity succeeds in creating something close to a friendly AI, which however fails to completely follow CEV (Coherent Extrapolated Volition). Ultimately this might not lead to our inevitable extinction, but to even more suffering, on our side or that of other entities out there.
Further, although less dramatic, what if we succeed in transcending, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live up until an age of 1000 and still have fun? What if soon after the singularity we discover that all that is left is endless repetition? If we've learnt all there is to learn, done all there is to do, all games played, all dreams dreamed, what if nothing new under the sky is to be found anymore? And don't we all experience this problem already these days? Have you people never thought and felt that you've already seen that movie, read that book or heard that song before, because they all featured the same plot, the same rhythm?
If it is our responsibility to die so that our children can live, for the greater public good, if we are in charge of the upcoming galactic civilization, if we bear a moral responsibility for those entities to be alive, why don't we face the same responsibility for the many more entities that will be alive but suffering? Is it the right thing to do, to live at any cost, to give birth at any price?
What if it is not about "winning" and "not winning" but about losing, or gaining one possibility among millions that could go horribly wrong?
Isn't even the prospect of a slow torture to death enough to make us consider ending our journey here, a torture that spans a possible period from 10^20 years up to the Dark Era from 10^100 years and beyond? This might be a period of war, suffering and suicide. It might be the Era of Death, and it might be the lion's share of the future. I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time from 10^20 to 10^100 years, during which possibly trillions of God-like entities will be slowly disabled by an increasing lack of resources. This is comparable to suffering from Alzheimer's, only much worse, much longer, and without any hope.
To exemplify this, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former selves. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.
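Here is a toy sketch of that thought experiment. The decay schedule and both policies are placeholders of my own; the point is only to show the shape of the two declines, not to model how resources would actually run out.

```python
# Toy version of the 100-entity scenario: resources shrink each era, and the
# FAI either removes entities until the rest fit ("kill") or degrades every
# entity's capability equally ("reduce"). Numbers are illustrative only.
def simulate(policy: str, entities: int = 100, steps: int = 20):
    resources = float(entities)   # initially enough for every entity
    population = entities
    capability = 1.0              # fraction of full mental capability
    history = []
    for _ in range(steps):
        resources *= 0.8          # placeholder: resources shrink each era
        if policy == "kill":
            population = min(population, int(resources))
            history.append(population)
        else:
            capability = min(capability, resources / entities)
            history.append(round(capability, 3))
    return history

print(simulate("kill"))    # surviving entities per era: 80, 64, ..., 1
print(simulate("reduce"))  # per-entity capability per era: 0.8, 0.64, ...
```

Either way the curve only goes down, which is the point of the example: the choice is between fewer minds at full capability and the same number of minds at ever-lower capability.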
So what if it is more likely that maximizing utility not only fails, but that the overall utility is minimized, i.e. the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe but rather risk increasing it, what should we do about it? Might it be better to believe that winning is impossible than that it's likely, if the actual probability is very low?
I hereby ask the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas.
See also
The Fun Theory Sequence
"Should This Be the Last Generation?" By PETER SINGER (thanks timtyler)