Should humanity give birth to a galactic civilization?
Followup to: Should I believe what the SIAI claims? (Point 4: Is it worth it?)
It were much better that a sentient being should never have existed, than that it should have existed only to endure unmitigated misery. — Percy Bysshe Shelley
Imagine humanity to succeed. To spread out into the galaxy and beyond. Trillions of entities...
Then, I wonder, what comes at the end? Imagine our dreams of a galactic civilization come true. Will we face unimaginable wars over resources, and torture, as all this beauty faces its inevitable annihilation while the universe approaches absolute zero?
What does this mean? Imagine how many more entities of so much greater consciousness and intellect will be alive in 10^20 years. If they are doomed to face that end or commit suicide, how much better would it be to face extinction now? That is, would the amount of happiness until then outweigh the amount of suffering to be expected at the beginning of the end? If we succeed in pollinating the universe, is the overall result ethically justifiable? Or might it be ethical to abandon the idea of reaching out to the stars?
The question is, is it worth it? Is it ethical? Should we worry about the possibility that we'll never make it to the stars? Or should we rather worry about the prospect that trillions of our distant descendants may face, namely unimaginable misery?
And while pondering the question of overall happiness, all things considered, how sure are we that on balance there won't be much more suffering in the endless years to come? Galaxy spanning wars, real and simulated torture? Things we cannot even imagine now.
One should also consider that it is more likely than not that we'll see the rise of rogue intelligences. It might also be possible that humanity succeeds in creating something close to a friendly AI, which nevertheless fails to completely follow CEV (Coherent Extrapolated Volition). Ultimately this might lead not to our inevitable extinction but to even more suffering, on our side or on that of other entities out there.
Further, though less dramatic: what if we succeed in transcending, in becoming posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to an age of 1,000 and still enjoy themselves? What if soon after the Singularity we discover that all that is left is endless repetition? If we've learnt all there is to learn, done all there is to do, played all games and dreamed all dreams, what if nothing new under the sky remains to be found? And don't we all experience this problem already? Haven't you ever felt that you've already seen that movie, read that book, or heard that song before, because they all featured the same plot, the same rhythm?
If it is our responsibility to die so that our children may live, for the greater public good, if we are in charge of the upcoming galactic civilization, if we bear a moral responsibility for the entities that will be alive, why don't we face the same responsibility for the many more entities that will be alive but suffering? Is it the right thing to do, to live at any cost, to give birth at any price?
What if it is not about "winning" versus "not winning", but about losing, or about gaining one possibility among millions that could go horribly wrong?
Isn't even the prospect of a slow torture to death enough to make us consider ending our journey here, a torture that spans a possible period from 10^20 years up to the Dark Era at 10^100 years and beyond? This might be a period of war, suffering and suicide. It might be the Era of Death, and it might be the lion's share of the future. I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time from 10^20 to 10^100 years, when possibly trillions of God-like entities will be slowly disabled by an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer, and without any hope.
To exemplify this, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to shadows of their former selves. This is a horrible process that will take a long time. I think you could call it torture until the end of the universe.
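The process described here is essentially an iterated triage algorithm, and the trade-off between the two policies can be made concrete in a few lines. This is only a sketch; all numbers (100 entities, a 1% resource decay per epoch, the 0.01 cutoff) are invented for illustration:

```python
# Toy model with invented numbers: each epoch the universe's resource
# budget shrinks by 1%; when demand exceeds the budget, the steward AI
# either kills one entity outright or dilutes every mind equally.
def run(policy, entities=100, capability=1.0, resources=100.0, decay=0.99):
    epochs = 0
    while entities > 0 and entities * capability > 0.01:
        resources *= decay                 # the universe slowly runs down
        if entities * capability > resources:
            if policy == "kill":
                entities -= 1              # one fewer entity, undiminished
            else:                          # "dilute"
                capability = resources / entities  # every mind shrinks
        epochs += 1
        if epochs > 100_000:               # safety cap for the sketch
            break
    return epochs, entities, capability
```

Under the "dilute" policy all 100 entities persist as ever-fainter shadows of themselves; under the "kill" policy they die one by one at full capability. The numbers mean nothing in themselves; the sketch only makes the shape of the thought experiment concrete.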
So what if it is more likely that maximizing utility not only fails, but that overall utility turns out to be minimized, i.e. the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe, but rather risk increasing it, what should we do? Might it be better to believe that winning is impossible than that it is likely, if the actual probability is very low?
I hereby ask the Less Wrong community to help me uncover potential fallacies and biases in my framing of the above ideas.
See also
"Should This Be the Last Generation?" by Peter Singer (thanks timtyler)
Comments (85)
This is why Buddhism is dangerous.
If this was the case, a true FAI will just kill us more painlessly than we could. And then go out and stop life evolved in other places from causing such suffering.
What do y'all think about John Smart's thesis that an inward turn is more likely than the traditional script of galactic colonization?
http://www.accelerating.org/articles/answeringfermiparadox.html
Rather wild read, but perhaps worth a thought. Would that alternative trajectory affect your opinion of the prospect, XiXiDu?
Most of these questions are beyond our current ability to answer. We can speculate and counter-speculate, but we don't know. The immediate barrier to understanding is that we do not know what pleasure and pain, happiness and suffering are, the way that we think we know what a star or a galaxy is.
We have a concept of matter. We have a concept of computation. We have a concept of goal-directed computation. So we can imagine a galaxy of machines, acting according to shared or conflicting utility functions, and constrained by competition and the death of the universe. But we do not know how that would or could feel; we don't even know that it needs to feel like anything at all. If we imagine the galaxy populated with people, that raises another problem - the possibility of the known range of human experience, including its worst dimensions, being realized many times over. That is a conundrum in itself. But the biggest unknown concerns the forms of experience, and the quality of life, of "godlike AIs" and other such hypothetical entities.
The present reality of the world is that humanity is reaching out for technological power in a thousand ways and in a thousand places. That is the reality that will issue either in catastrophe or in superintelligence. The idea of simply halting that process through cautionary persuasion is futile. To actually stop it, and not just slow it down, would require force. So I think the most constructive attitude towards these doubts about the further future is to see them as input to the process which will create superintelligence. If this superintelligence acts with even an approximation of humaneness, it will be sensitive to such issues, and if it really does embody something like the extrapolated volition of humanity, it will resolve them as we would wish to see them resolved.
Therefore, I propose that your title question - "Should humanity give birth to a galactic civilization?" - should be regarded as a benchmark of progress towards an exact concept of friendliness. A friendly AI should be able to answer that question, and explain its answer; and a formal strategy for friendly AI should be able to explain how its end product - the AI itself - would be capable of answering the question.
Caterpillars discussing the wisdom of flight.
On balance I'm not too happy with the history of existence. As Douglas Adams wrote, "In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move." I'd rather not be here myself, so I find the creation of other sentients a morally questionable act. On the other hand, artificial intelligence offers a theoretical way out of this mess. Worries about ennui strike me as deeply misguided. Oppression, frailty, and stupidity make hanging out in this world unpleasant, not any lack of worthwhile pursuits. Believe me, I could kill a few millennia no problem. If Kurzweil's dreams of abundance (in every sense) come true, I won't be complaining.
Now, the notion of a negative but nonfatal Singularity deserves consideration. The way I typically see things, there's either death or Singularity in the long run and both are good. Indefinite life extension without revolutionary economic and social change would be a nightmare, though perhaps better at every individual point than the pain of aging.
Your concerns about the ultimate fate of the universe are intriguing but too distant to arouse much emotion from me. Who knows what will happen then? Such entities might travel to other universes or forge their own. I'll just say that judging by the present record, intelligence and suffering go together. Whether we can escape this remains to be seen.
A different (more likely?) scenario is that the god-like entities will not gradually wind down their resource usage--they'll store up energy reserves, then burn through them as efficiently as possible, then shut down. It will be really sad each time a god-like entity dies, but not necessarily painful.
Actually, if evolutionary pressures continue (i.e. no singleton) it seems fairly likely that usable resources will collapse suddenly, and resource starvation will be relatively brief. Right now, we have an energy diet from the sun--it only releases so much energy at once. But future entities may try to break up stars to use their energy more efficiently (solar fusion is highly inefficient compared to possible levels).
We are looking at very long time scales here, so how wide should our scope be? If we use a very wide scope like this, we get issues, but if we widen it still further we might get even more. Suppose the extent of reality were unlimited, and that the scope of effect of an individual action were unlimited, so that if you do something it affects something, which affects something else, which affects something else, and so on, without limit. This doesn't necessarily need infinite time: We might imagine various cosmologies where the scope could be widened in other ways. Where would that leave the ethical value of any action we commit?
I will give an analogy, which we can call "Almond's Puppies" (That's a terrible name really, but it is too late now.)
Suppose we are standing at the end of two lines of boxes. Each line continues without end, and each box contains a puppy - so each line contains an infinity of puppies. You can choose to press a button to blow up the first box or another button to spare it. After you press the button, some mechanism, that you can't predict, will decide to blow up the second box or spare it, based on your decision, and then it will decide to blow up the third box or spare it, based on your decision, and so on. So you press that button, and either the first box is blown up or spared, and then boxes get blown up or spared right along the line, with no end to it.
You have to press a button to start one line off. You choose to press the button to spare the first puppy. Someone else chooses to press the button to blow up the first puppy. The issue now is: Did the other person do a bad thing? If so, why? Did he kill more puppies than you? Does the fact that he was nicer to the nearby puppies matter? Does it matter that the progress of the wave of puppy explosions along the line of boxes will take time, and at any instant of time, only a finite number of puppies will have been blown up, even though there is no end to it in the future?
If we are looking at distant future scenarios, we might ask if we are sure that reality is limited.
I don't understand your Puppies question. When you say:
.... what do you mean by "based on your decision"? Do they decide the same as you did? The opposite? Is there a relationship to your decision, but you don't know which one?
I am really quite confused, and don't see what moral dilemma there is supposed to be beyond "should I kill a puppy or not?" - which on the grand scale of things isn't a very hard Moral Dilemma :P
"There's a relationship to your decision but you don't know which one". You won't see all the puppies being spared or all the puppies being blown up. You will see some of the puppies being spared and some of them being blown up, with no obvious pattern - however you know that your decision ultimately caused whatever sequence of sparing/blowing up the machine produced.
Gradually reducing mental processing speed as the universe approaches heat death (i.e. at a point where nothing else of interest is occurring) and dying painlessly are analogous.
Neither of those options is, in any sense, torture. They're just death.
So I'm really not sure what you're getting at.
Really?
It's okay for us to cause more entities to exist, for a greater sum of suffering, provided that it's one of the better possible outcomes.
While intervening in a way that (with or without consent) inflicts a certain amount of additional net suffering on others (such as causing them to be created) is to be avoided all other things equal, it's justifiable if the net fun is increased by some multiple of the suffering (the requisite multiple depends on consent, and on who gets the fun; i.e. if you're gaining fun by torturing another, the multiple may have to be huge).
I agree that we should consider the possibility of suffering. Suicide (by radical modification into something that is not suffering, or actual termination) seems like an easy solution.
I imagine some "artist" eventually creating a creature that is sentient, feels great pain, and eloquently insists that it does not want to be changed, or ended. Sick bastard. Or perhaps it would merely be programmed to elaborately fake great pain, to others' discomfort, while secretly reveling in it. I imagine technology would be able to tell the difference.
I have at least a century of interesting math waiting for me that I will never get to. I feel really bad every time I think about that.
Seconded.
And more new interesting math seems to get created all the time. It's like drinking from the firehose.
I applaud this approach (and upvoted this comment), but I think any future posts would be better received if you did more tweaking prior to publishing them.
How about a Less Wrong peer review system? This could be especially good for Less Wrongers who are non-native speakers. I'll volunteer to review a few posts--dreamalgebra on google's email service. (Or private message, but I somewhat prefer the structure of email since it's easier for me to see the messages I've written.)
Fictional evidence: Q as allegedly the last born - but where are his parents? And what about the 'true Q' from her own episode? They fight a freaking war over Q's wish to reproduce, but do not allow one guy to commit suicide. Yet handing out powers or taking them away is easy as cake. Not particularly consistent.
If the time comes to build a universe-wide civilization, there will be many minds to ponder all these questions. We do not have to get that right now. (Current physics only allows for colonization of the Local Group anyhow.) If we put in enough effort to solve the GUT, there might be some way around the limitations of the universe, or we will find another way to deal with them - as has been the case many times before. Now is a great time to build an amazing future, but not yet the time to end reproduction.
That's often a good strategy.
This post is centered on a false dichotomy; to address its biggest flaw in reasoning: if we're at time t=0, and widespread misery occurs at time t=10^10, then solutions other than "discontinue reproducing at t=0" exist. Practical concerns aside - and without setting practical concerns aside, there is no point in even talking about this - the appropriate solution would be to end reproduction at, say, t=10^9.6. This post arbitrarily says "act now, or never" when, practically, we can't really act now, so any later time is equally feasible and otherwise simply better.
The false dichotomy is about when to do something. The solution to the above problem would be that those last 100 entities were never created. That does not require us to stop creating entities right now. If an entity is never created, its utility is undefined. That's why this is a false dichotomy: you say do something now or never, when we could wait until very near the ultimate point of badness to remedy the problem.
If it does turn out that the overwhelmingly likely future is one of extreme negative utility, voluntary extinction (given some set of assumptions) IS maximizing utility.
Also, if the example really is as tangential as you're implying, it should probably not account for 95% of the text (and the title, and the links) in your post.
Let's reach the stars first and worry later about how many zillions of years of fun remain. If we eventually run out, then we can abort or wirehead. For now, it seems like the expected awesome of creating an intergalactic posthuman civilization is pretty high.
Even if we create unimaginably many posthumans having unimaginable posthuman fun, and then get into some bitter resource struggles as we approach the heat death of the universe, I think it will have been worth it.
Interesting thoughts. I also haven't finished the Fun Theory sequence, so this may be malformed. The way I see it: you can explore and modify your environment for fun and profit (socializing counts here too), and you can modify your goals to get more fun and profit without changing your knowledge.
Future minds may simply have a "wirehead suicide contingency" they choose to abide by, by which, upon very, very strong evidence that they can have no more fun with their current goals, they could simply wirehead themselves. Plan it so that the value of just being alive and experiencing the slow end of the world goes up as other sources of fun diminish. (And leave in there a huge reward for discovering that you are wrong, just not motive to seek it out irrationally).
You would need a threshold of probability that life is going to suck forever from here on out, only after which the contingency was initiated.
Relevant literature: "Should This Be the Last Generation?" by Peter Singer
That's OK. It lists a whole book about the topic - and there is also:
"The Voluntary Human Extinction Movement"
If any movement is dysgenic that surely must be it.
Let's see: people who are altruistic, and in control of their instincts and emotions enough to forgo having children in order to alleviate very distant future suffering (which, to top it all off, is a very, very abstract argument to begin with), yeah, those are exactly the kind who should stop having children first. Great plan.
I first wanted to write "self-defeating", but I soon realized they may actually get their wish - though only if they convince enough of the people whose kids should be working on friendly AI in twenty-something years not to have their second, or even first, child.
But it won't leave the Earth to "nature" as they seem to be hoping.
Thanks for posting. Upvoted.
I have always had an uncomfortable feeling whenever I have been asked to include distant-future generations in my utilitarian moral considerations. Intuitively, I draw on my background in economics, and tell myself that the far-distant future should be discounted toward zero weight. But how do I justify the discounting morally? Let me try to sketch an argument.
I will claim that my primary moral responsibility is to the people around me. I also have a lesser responsibility to the next generation, and a responsibility lesser yet to the generation after that, and so on. A steep discount rate - 30% per generation or so. I will do my duty to the next generation, but in turn I expect the next generation to do its duty to the generation after that. After all, the next generation is in a far better position than I am to foresee what problems the generation after that really faces. Their efforts will be much less likely than mine to be counterproductive.
If I were to spread my concern over too many generations, I would be shortchanging the next generation of their fair share of my concern. Far-future generations have plenty of predecessor generations to worry about their welfare. The next generation has only us. We mustn't shortchange them!
This argument is just a sketch, of course. I just invented it today. Feedback is welcome.
I don't think you need any discounting. Your effect on the year 2012 is somewhat predictable. It is possible to choose a course of action based on its known effects on the year 2012.
Your effect on the year 3000 is unpredictable. You can't even begin to predict what effect your actions will have on the human race in the year 3000.
Thus, there is an automatic discounting effect. An act is only as valuable as its expected outcome. The expected outcome in the year 1,000,000 is almost always ~zero, unless there is some near-future extinction possibility, because the probability of your having a desired impact is essentially zero.
I tend to agree, in that I also have a steep discount across time and distance (though I tend to think of it as "empathetic distance", more about perceived self-similarity than measurable time or distance, and I tend to think of weightings in my utility function rather than using the term "moral responsibility").
That said, it's worth asking just how steep a discount is justifiable - WHY do you think you're more responsible to a neighbor than to four of her great-grandchildren, and do you think this is the correct discount to apply?
And even if you do think it's correct, remember to shut up and multiply. It's quite possible for there to be more than 35x as much sentience in 10 generations as there is today.
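The multiplication is easy to check. Assuming the 30%-per-generation discount rate proposed above, the breakeven multiplier works out as follows (a sketch, not a claim about the right discount rate):

```python
# Breakeven multiplier implied by a 30% per-generation discount:
# how much more sentience must exist in generation 10 before its
# discounted weight matches the present generation's?
discount = 0.30
weight = (1 - discount) ** 10   # weight of generation 10: 0.7^10 ≈ 0.028
breakeven = 1 / weight          # ≈ 35.4, the "35x" figure above
```

So if total sentience grows faster than about 3.6% per year (compounded over ~30-year generations), even this steep discount says the tenth generation outweighs the present one.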
In nature, the best way you can help your great grand-kids is to help your children. If there was a way to help your grandchildren at the expense of your children that ultimately benefitted the grandchildren, nature might favour it - but usually there is simply no easy way to do that.
Grandparents do sometimes favour more distant offspring in their wills - if they think the direct offspring are compromised or irresponsible, for example. Such behaviour is right and natural.
Temporal discounting is a reflection of your ignorance and impotence when it comes to the distant future. It is not really that you fundamentally care less about the far future - it is more that you don't know and can't help - so investing mental resources would be rather pointless.
According to Robin Hanson, our behavior proves that we don't care about the far future.
Robin argues that few are prepared to invest now to prevent future destruction of the planet. The conclusion there seems to be that humans are not utilitarian agents.
Robin seems to claim that humans do not invest in order to pass things on to future generations - whereas in fact they do just that whenever they invest in their own offspring.
Obviously you don't invest in your great-grandchildren directly. You invest in your offspring - they can manage your funds better than you can do so from your wheelchair or grave.
Temporal discounting makes sense. Organisms do it because they can't see or control the far future as well as their direct descendants can. In those rare cases where that is not true, direct descendants can sometimes be bypassed.
However, you wouldn't want to build temporal discounting into the utility function of a machine intelligence. It knows its own prediction capabilities better than you do - and can figure out such things for itself.
Since that exact point was made in the Eliezer essay Robin's post was a reply to, it isn't clear that Robin understands that.
Future is the stuff you build goodness out of. The properties of stuff don't matter, what matters is the quality and direction of decisions made about arranging it properly. If you suggest a plan with obvious catastrophic problems, chances are it's not what will be actually chosen by rational agents (that or your analysis is incorrect).
Moral analysis.
You lost the context. Try not to drift.
Is this really worth your time (or Carl Shulman's)? Surely you guys have better things to do?
In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)
Note that multifoliaterose's recent posts and comments have been highly upvoted: he's gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.
On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which "reads" angry, and doesn't fit with norms of politeness and discourse here).
Substantively, I'll consider the major pieces individually.
The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar's arguments and made your points more explicit, but instead simply stated the conclusion.
The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline was lacking in supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce's Hedonistic Imperative is relevant here: with access to self-modification capacities entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals.
For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the "lifeboat ethics" scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer's doesn't work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.
In several places throughout the post you use "what if" language without taking the time to present sufficient arguments for plausibility, which is a rationalist faux pas.
Edit: I misread the "likely" in this sentence and mistakenly objected to it.
I think you should probably read more of the Less Wrong sequences before you make more top level posts. Most of the highly upvoted posts are by people that have the knowledge background from the sequences.
I don't believe that is a reasonable prediction. You're dealing with timescales so far beyond human lifespans that assuming they will never think of the things you think of is entirely implausible.
In this horrendous future of yours, why do people keep reproducing? Why don't the last viable generation (knowing they're the last viable generation) cease reproduction?
If you think that this future civilisation will be incapable of understanding the concepts you're trying to convey, what makes you think we will understand them?
Ah, I get it now, you believe that all life is necessarily a net negative. That existing is less of a good than dying is of a bad.
I disagree, and I suspect almost everyone else here does too. You'll have to provide some justification for that belief if you wish us to adopt it.
I'm not sure I disagree, but I'm also not sure that dying is a necessity. We don't understand physics yet, much less consciousness; it's too early to assume it as a certainty, which means I have a significantly nonzero confidence of life being an infinite good.
Well, first off..
What kind of decisions were you planning to take? You surely wouldn't want to make a "friendly AI" that's hardcoded to wipe out humanity; you'd expect it to come to the conclusion that that's the best option by itself, based on CEV. I'd want it to explain its reasoning in detail, but I might even go along with that.
My argument is that it's too early to take any decisions at all. We're still in the data collection phase, and the state of reality is such that I wouldn't trust anything but a superintelligence to be right about the consequences of our various options anyway.
We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.
Negative utilitarianism is.. interesting, but I'm pretty sure it holds an immediate requirement to collectively commit suicide no matter what (short of continued existence, inevitably(?) ended by death, possibly being less bad than suicide, which seems unlikely) - am I wrong?
That's not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.
Doesn't that make most expected utility calculations make no sense?
A problem with the math, not with reality.
There are all kinds of mathematical tricks to deal with infinite quantities. Renormalization is something you'd be familiar with from physics; from my own CS background, I've got asymptotic analysis (which can't see the fine details, but easily can handle large ones). Even something as simple as taking the derivative of your utility function would often be enough to tell which alternative is best.
I've also got a significantly nonzero confidence of infinite negative utility, mind you. Life isn't all roses.
It's a worthwhile question, but probably fits better on an open thread for the first round or two of comments, so you can refine the question to a specific proposal or core disagreement/question.
My first response to what I think you're asking is that this question applies to you as an individual just as much as it does to humans (or human-like intelligences) as a group. There is a risk of sadness and torture in your future. Why keep living?
I didn't downvote the post - it is thought-provoking, though I don't agree with it.
But I had a negative reaction to the title (which seems borderline deliberately provocative to attract attention), and the disclaimer - as thomblake said, "Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading."
I have a better idea. Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading.
It is probably premature to ask such questions now. We have no idea what the world will look like in 10^20 years. And when I write no idea, I don't mean we have several theories from which we have to choose the right one but still can't do that. I mean that (if the human race doesn't go extinct soon and the future doesn't turn out to be boringly the same as the present or the recent history) we can't possibly imagine how the world would function, and even if told, we wouldn't understand. If there are intelligent creatures in 10^20 years, they will certainly have emotions we don't possess, thoughts we can't fathom, values we would call perverse - if a description in the language of emotions and values would even make sense in that world.
Why should we care about a world we don't understand one bit? Trying to answer questions about such a distant future puts us in the situation of a Homo erectus evaluating the risks of inventing fire. Do we imagine that any idea a Homo erectus could invent would be even marginally valuable for us today? And given that we are no more than several hundred thousand years younger, a flatworm would perhaps be a more fitting analogy than Homo erectus.
Darwin answered the question of why we care.
No, Darwin explained what actually happens. There is no should there; we invent those ourselves. Unless you meant that the consequences of evolution give us a better reason to care; but that would in itself be a personal judgement.
I care, too, but there's no law of nature stating that all other humans must also care.
Darwin answered the question of: "why do we care...".
Ah. Point taken; though of course he didn't literally do so for humans, evolution definitely has a lot to do with it.
Is anyone going to propose this as an answer to (what some say is) the Fermi paradox?
I thought people would be too bored of it to mention. I've heard it proposed dozens of times as a possible explanation. I should probably spend less time with philosophy majors.
Anyway, the strong version of the statement is much more interesting. Not only do naturally evolved intelligences all have values that, for one reason or another, lead them to let it all end rather than endure existence; it also means that they never spawn AIs with values sufficiently radical to disagree. The mind space that encompasses is mind-boggling.
Either it's hard for a civilization to build an AI with truly alien values, or they go extinct before they can build AIs (a different argument), or they decide to kill themselves before doing so (odd), or nearly all possible minds agree that nonexistence is good.
We may have very, very weird minds if the last option is the answer.
First a few minor things I would like to get out there:
According to consensus, which I do not dispute (since it's well founded), we are slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn through a few cycles of a simulated universe?
I don't quite see the difference between real and simulated torture in the context of a civilization as advanced as the one you are arguing against letting develop. So I'm not sure what you're getting at by mentioning them as separate things.
You need to read up on fun theory. And if you disregard it, let me just point out that worrying about people not having fun is a different concern from assuming they will experience mental anguish at the prospect of suicide or an inevitable death. Actually, not having fun can be neatly solved by suicide once you exhaust all other options, as long as you aren't built to find committing to it stressful.
Now, assuming your overall argument has merit: my value function says it's better to have loved and lost than not to have loved at all.
Humans may have radically different values once they are blown up to scale. Unless you get your finger into the first AI's values first, there will always be a nonzero fraction of agents who would wish to carry on even knowing it will increase total suffering, because they feel their values are worth suffering for. I am basically talking about practicality now: so what if you are right? The only way to do anything about it is to make sure your AI eliminates anything, be it human or alien AI, that can paperclip anything like beings capable of suffering. To do this properly in the long run (not just kill or sterilize all humans, which is easy), you need to understand friendliness much better than we do now.
If you want to learn about friendliness, you'd better learn to deceive the agents with whom you might be able to work together to figure out more about it, especially concerning your motives. ;)
Dyson's eternal intelligence. Unfortunately I know next to nothing about physics so I have no idea how this is related to what we know about the universe.
It runs into edge conditions we know little about; like, are protons stable or not. (Experimentally, proton decay has never been observed, by the way, though many grand unified theories predict it.)
At this point in time I would not expect to be able to do infinite computation in the future. The future has a way of surprising, though; I'd prefer to wait and see.
I cannot fathom the confusion that would lead to this question. Of course it's better for humanity to survive than to not survive. Of course it's better to go extinct in a million years than to go extinct now. The future is more wondrous and less scary than you imagine.
That only makes sense if you think life is always better than death. But that certainly isn't my view -- I think some possible futures are so bad that extinction would be preferable. In that case, the answer to the title question depends on the probabilities of such futures.
EDIT: For the record, I don't think we need to resort to pulling the plug on ourselves anytime soon.
I don't think life is always better than death according to my utility function.
I do, however, think that the most likely outcome, considering the priorities of the blind idiot god or perhaps even of self-described benevolent minds, is that the inhabitants of such spaces in the very long term are minds who are quite OK being there.
On "Benevolent" minds: If I knew beyond a doubt that something which I would consider hell exists and that everyone goes there after being resurrected on judgment day, and I also knew that it was very unlikely that I could stop everyone from ever being born or being resurrected I would opt for trying to change or create people that would enjoy living in that hell.
To decide to lose intentionally, I need to know how much it costs to try to win, what the odds of success are, and what the difference in utility is if I win.
I feel like people weigh those factors unconsciously and automatically (using bounded resources and rarely with perfect knowledge or accuracy).
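The three quantities the comment lists combine into a standard expected-value threshold: attempt the win exactly when the probability-weighted gain exceeds the cost of trying. A minimal sketch (the function name and the numbers are illustrative, not from the thread):

```python
def worth_trying(cost, win_probability, utility_if_win):
    """Expected-utility test: attempt the win iff the expected
    gain (probability times payoff) exceeds the cost of trying."""
    return win_probability * utility_if_win > cost

# Long odds can still be worth it when the payoff is large enough:
print(worth_trying(cost=5.0, win_probability=0.1, utility_if_win=100.0))   # True: 10 > 5
print(worth_trying(cost=5.0, win_probability=0.01, utility_if_win=100.0))  # False: 1 < 5
```

This is the calculation the next sentence suggests people run unconsciously, with fuzzy estimates standing in for the exact numbers.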
I think freedom should win when contemplating how to ethically shape the future. I don't have any direct evidence that posthumans in a post-Singularity universe will be "happy" throughout their lives in a way that we value, but you certainly don't have evidence to the contrary, either.
As long as neither of us knows the exact outcome, I think the sensible thing to do is to maximize freedom, by trying to change technology and culture to unburden us and make us more capable. Then the future can decide for itself, instead of relying on you or me to worry about these things.
Also, how many people do you know who could honestly claim, "I wish I had never been born?" Although there are certainly some, I don't think there are very many, and life down here on Earth isn't even that great.
And there is a nearly infinite space of minds that would look at all life today and consider it better to have never existed.
The minds most likely to live for long periods in a situation where we would judge them better off never having been born at all are either extremely unfree (no suicide) or already adapted to consider it perfectly tolerable, or perhaps even enjoyable.
I agree, that sounds very depressing. However, I don't understand the minds, emotions, or culture of the entities that will exist then, and as such, I don't think it's ethical for me to decide in advance how bad it is. We don't kill seniors with Alzheimer's, because it's not up to us to judge whether their life is worth living or not.
Plus, I just don't see the point in making a binding decision now about potential suffering in the far future, when we could make it N years from now. I don't see how suicide would be harder later, if it turns out to be actually rational (as long as we aim to maintain freedom.)
I don't really understand your greater argument. Inaction (e.g. sitting on Earth, not pursuing AI, not pursuing growth) is not morally neutral. By failing to act, we're risking suffering in various ways; insufficiency of resources on the planet, political and social problems, or a Singularity perpetrated by actors who are not acting in the interest of humanity's values. All of these could potentially result in the non-existence of all the future actors we're discussing. That's got to be first and foremost in any discussion of our moral responsibility toward them.
We can't opt out of shaping the universe, so we ought to do as good a job as we can as per our values. The more powerful humanity is, the more options are open to us, and the better placed our descendants are to re-evaluate our choices and further steer our future.
If we can't stop entropy, then we can't stop entropy, but I still don't see why our descendants should be less able to deal with this fact than we are. We appreciate living regardless, and so may they.
Surely posthuman entities living at the 10^20 year mark can figure out much more accurately than us whether it's ethical to continue to grow and/or have children at that point.
As far as I can tell, the single real doomsday scenario here is, what if posthumans are no longer free to commit suicide, but they nevertheless continue to breed; heat death is inevitable, and life in a world with ever-decreasing resources is a fate worse than death. That would be pretty bad, but the first and last seem to me unlikely enough, and all four conditions are inscrutable enough from our limited perspective that I don't see a present concern.
These are very important questions which deserve to be addressed, and I hope this post isn't downvoted severely. However, at least one subset of them has been addressed already:
See the Fun Theory Sequence for discussion.