Followup to: Should I believe what the SIAI claims? (Point 4: Is it worth it?)

It were much better that a sentient being should never have existed, than that it should have existed only to endure unmitigated misery. -- Percy Bysshe Shelley

Imagine that humanity succeeds: that we spread out into the galaxy and beyond. Trillions of entities...

Then, I wonder: what comes at the end? Suppose our dreams of a galactic civilization come true. Will we face unimaginable wars over resources, and torture, as all this beauty meets its inevitable annihilation while the universe approaches absolute zero?

What does this mean? Imagine how many more entities, of so much greater consciousness and intellect, will be alive in 10^20 years. If they are doomed to face that end or to commit suicide, how much better would it be to face extinction now? That is, would the amount of happiness until then balance the amount of suffering to be expected at the beginning of the end? If we succeed in seeding the universe, is the overall result ethically justifiable? Or might it be ethical to abandon the idea of reaching out to the stars?

The question is: is it worth it? Is it ethical? Should we worry about the possibility that we'll never make it to the stars? Or should we rather worry about what trillions of our distant descendants may face, namely unimaginable misery?

And while pondering the question of overall happiness, all things considered, how sure are we that on balance there won't be much more suffering in the endless years to come? Galaxy-spanning wars, real and simulated torture? Things we cannot even imagine now.

One should also consider that it is more likely than not that we'll see the rise of rogue intelligences. It is also possible that humanity succeeds in creating something close to a friendly AI, one which nevertheless fails to fully follow CEV (Coherent Extrapolated Volition). Ultimately this might lead not to our inevitable extinction but to even more suffering, on our side or on that of other entities out there.

Further, although less dramatic: what if we succeed in transcending ourselves, become posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to the age of 1000 and still have fun? What if, soon after the singularity, we discover that all that is left is endless repetition? If we've learnt all there is to learn, done all there is to do, played all games and dreamed all dreams, what if nothing new under the sky is to be found anymore? And don't we all experience this problem already? Haven't you ever thought and felt that you've already seen that movie, read that book or heard that song before, because they all feature the same plot, the same rhythm?

If it is our responsibility to die so that our children can live, for the greater public good; if we are in charge of the upcoming galactic civilization; if we bear a moral responsibility for those entities that will be alive, then why don't we face the same responsibility for the many more entities that will be alive but suffering? Is it the right thing to do, to live at any cost, to give birth at any price?

What if it is not about "winning" and "not winning", but about losing, or about gaining one possibility among millions that could go horribly wrong?

Isn't even the prospect of a slow torture to death enough to make us consider ending our journey here, a torture that spans a possible period from 10^20 years up to the Dark Era at 10^100 years and beyond? This might be a period of war, suffering and suicide. It might be the Era of Death, and it might be the lion's share of the future. I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time from 10^20 to 10^100 years where possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer and without any hope.

To exemplify this let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former self. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.
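
As an aside, the toy model above is easy to make concrete. Below is a minimal sketch in Python with entirely made-up numbers (the welfare function, the 0.99 decay rate and the 20% suffering threshold are illustrative assumptions, not physics), comparing the two policies the hypothetical FAI could follow once the resource budget starts shrinking: kill entities one at a time, or throttle everyone's mental capacity equally.

```python
# Toy model: 100 entities living off a shrinking resource budget.
# All numbers and the welfare function are illustrative assumptions only.

def welfare(capacity):
    """Assumed per-entity welfare: positive above a threshold, negative below it."""
    return capacity - 0.2  # entities below 20% capacity count as suffering

def run(policy, n=100, budget=100.0, decay=0.99, steps=2000):
    total = 0.0
    entities = n
    for _ in range(steps):
        budget *= decay                    # resources decline each step
        if entities == 0:
            break
        if policy == "cull":
            # keep survivors at full capacity, kill one whenever the budget is short
            while entities > 0 and budget / entities < 1.0:
                entities -= 1
            total += entities * welfare(1.0)
        elif policy == "throttle":
            # share the shortfall: everyone's mental capacity shrinks together
            capacity = min(1.0, budget / entities)
            total += entities * welfare(capacity)
    return total

for p in ("cull", "throttle"):
    print(p, round(run(p), 1))
```

Under these arbitrary parameters the culling policy ends with a positive sum while the throttling policy accumulates a large negative one; change the welfare function or the decay rate and the ranking can flip, which is part of what makes the question hard to settle from the armchair.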

So what if it is more likely that maximizing utility not only fails, but that the overall utility is instead minimized, i.e. that the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe but rather risk increasing it, what should we do about it? Might it be better to believe that winning is impossible than that it is likely, if the actual probability is very low?

I hereby ask the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas.



See also

The Fun Theory Sequence

"Should This Be the Last Generation?" By PETER SINGER (thanks timtyler)


Do not attempt a literal interpretation; rather, try to consider the gist of the matter, if possible.

I have a better idea. Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading.

[-][anonymous]110

You are right, I was being an idiot there.

[-]ata160

Let's reach the stars first and worry later about how many zillions of years of fun remain. If we eventually run out, then we can abort or wirehead. For now, it seems like the expected awesome of creating an intergalactic posthuman civilization is pretty high.

Even if we create unimaginably many posthumans having unimaginable posthuman fun, and then get into some bitter resource struggles as we approach the heat death of the universe, I think it will have been worth it.

To address its biggest flaw in reasoning: this post is centered on a false dichotomy. If we're at time t=0, and widespread misery occurs at time t=10^10, then solutions other than "discontinue reproducing at t=0" exist. Practical concerns aside (and without setting practical concerns aside there is no point in even talking about this), the appropriate solution would be to end reproduction at, say, t=10^9.6. This post arbitrarily says "act now, or never" when, practically, we can't really act now, so any later time is equally feasible and otherwise simply better.

0[anonymous]
It is not a matter of reproduction but of the fact that there will be trillions of entities at the point of fatal decay. That is, let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former self. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe. But I think practical considerations are also rather important. For one, no entity, not even an FAI, might be able to influence parts of the universe that are no longer causally connected, due to the accelerated expansion of the universe. There will be many island universes.
0Psychohistorian
The false dichotomy is when to do something about it. The solution to the above problem would be that those last 100 entities were never created. That does not require us to stop creating entities right now. If the entity is never created, its utility is undefined. That's why this is a false dichotomy: you say do something now or never do something, when we could wait until very near the ultimate point of badness to remedy the problem.
2[anonymous]
Look, I'm using the same argumentation as EY and others, that the existence of those beings depends on us, just in reverse. Why not their suffering too? I never said this is sound; I don't think it is. I argued before that all those problems only arise if you try to please imaginary entities.

Most of these questions are beyond our current ability to answer. We can speculate and counter-speculate, but we don't know. The immediate barrier to understanding is that we do not know what pleasure and pain, happiness and suffering are, the way that we think we know what a star or a galaxy is.

We have a concept of matter. We have a concept of computation. We have a concept of goal-directed computation. So we can imagine a galaxy of machines, acting according to shared or conflicting utility functions, and constrained by competition and the death of the universe. But we do not know how that would or could feel; we don't even know that it needs to feel like anything at all. If we imagine the galaxy populated with people, that raises another problem - the possibility of the known range of human experience, including its worst dimensions, being realized many times over. That is a conundrum in itself. But the biggest unknown concerns the forms of experience, and the quality of life, of "godlike AIs" and other such hypothetical entities.

The present reality of the world is that humanity is reaching out for technological power in a thousand ways and in a thousand places. That... (read more)

[-][anonymous]100

Updated the post to suit criticism.

Let's see, if I can't write a good post maybe I can tweak one to become good based on feedback.

I applaud this approach (and upvoted this comment), but I think any future posts would be better received if you did more tweaking prior to publishing them.

2John_Maxwell
How about a Less Wrong peer review system? This could be especially good for Less Wrongers who are non-native speakers. I'll volunteer to review a few posts--dreamalgebra on google's email service. (Or private message, but I somewhat prefer the structure of email since it's easier for me to see the messages I've written.)
2[anonymous]
I'll post open thread comments from now on. This was just something that has been on my mind for so long that it became too familiar to be identified as imprudent. I watched Star Trek: The Next Generation as a kid and still remember a subset of this problem being faced by the Q Continuum. I think Q said that most of its kind committed suicide because there was nothing new to be discovered out there. But the main point came to my mind when skimming over what some utilitarians had to say, and what people on LW say when considering the amount of happiness that a future galactic civilization may bear. Now if the universe were infinite, that would be absolutely true. But if most of that time is indeed a time of decay, especially for a once-thriving civilization, is the overall payoff still positive?
0MartinB
Fictional evidence: Q is allegedly the last born - but where are his parents? And what about the 'true Q' from her own episode? They fight a freaking war over Q's wish to reproduce, but do not allow one guy to commit suicide. Yet handing out powers or taking them away is easy as pie. Not particularly consistent. If the time comes to build a universe-wide civilization, then there will be many minds to ponder all these questions. We do not have to get that right now. (Current physics only allows for colonization of the local group anyhow.) If we put in enough effort to solve the GUT there might be some way around the limitations of the universe, or we will find another way to deal with them - as has been the case many times before. Now is a great time to build an amazing future, but not yet the time to end reproduction.
1[anonymous]
Oh my god! Are you really telling me about fictional evidence here? That is exactly what I criticize about this whole community. Why am I not allowed to use it? Anyway, my post is not based on fictional evidence but on physics and basic economics. It's a matter of not creating so many minds in the first place - minds that the universe is only able to sustain in the short run. Yes, most of the future will be unable to support all those minds.
3thomblake
That's often a good strategy.

These are very important questions which deserve to be addressed, and I hope this post isn't downvoted severely. However, at least one subset of them has been addressed already:

Further, although less dramatic: what if we succeed in transcending ourselves, become posthuman, and find out that the universe does not contain enough fun for entities with mental attributes far exceeding those of baseline humanity? What if there isn't even enough fun for normal human beings to live to the age of 150 and still have fun?

See the Fun Theory Sequence for discussion.

1[anonymous]
I knew about the sequence but forgot to mention it. It is an issue to be integrated into the overall question, and that is why I included it. Thanks for reminding me though.

It is probably premature to ask such questions now. We have no idea what the world will look like in 10^20 years. And when I write no idea, I don't mean that we have several theories from which we have to choose the right one but still can't do so. I mean that (if the human race doesn't go extinct soon and the future does not turn out to be boringly the same as the present or the recent history) we can't possibly imagine how the world would function, and even if told, we wouldn't understand. If there will be intelligent creatures in 10^20 years, they will certainly have... (read more)

-1timtyler
Darwin answered the question of why we care.
3Baughn
No, Darwin explained what actually happens. There is no should there; we invent those ourselves. Unless you meant that the consequences of evolution give us a better reason to care; but that would in itself be a personal judgement. I care, too, but there's no law of nature stating that all other humans must also care.
1timtyler
Darwin answered the question of: "why do we care...".
1Baughn
Ah. Point taken; though of course he didn't literally do so for humans, evolution definitely has a lot to do with it.

Caterpillars discussing the wisdom of flight.

We are looking at very long time scales here, so how wide should our scope be? If we use a very wide scope like this, we get issues, but if we widen it still further we might get even more. Suppose the extent of reality were unlimited, and that the scope of effect of an individual action were unlimited, so that if you do something it affects something, which affects something else, which affects something else, and so on, without limit. This doesn't necessarily need infinite time: We might imagine various cosmologies where the scope could be widened in oth... (read more)

0Emile
I don't understand your Puppies question. When you say: .... what do you mean by "based on your decision"? They decide the same as you did? The opposite? There's a relationship to your decision but you don't know which one. I am really quite confused, and don't see what moral dilemma there is supposed to be beyond "should I kill a puppy or not?" - which on the grand scale of things isn't a very hard Moral Dilemma :P
0PaulAlmond
"There's a relationship to your decision but you don't know which one". You won't see all the puppies being spared or all the puppies being blown up. You will see some of the puppies being spared and some of them being blown up, with no obvious pattern - however you know that your decision ultimately caused whatever sequence of sparing/blowing up the machine produced.

If this were the case, a true FAI would just kill us more painlessly than we could. And then go out and stop life that evolved in other places from causing such suffering.

[-]knb50

But this is nothing compared to the time from 10^20 to 10^100 years where possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse, much longer and without any hope.

A different (more likely?) scenario is that the god-like entities will not gradually reduce their resource usage--they'll store up energy reserves, then burn through them as efficiently as possible, then shut down. It will be really sad each time a god-like entity dies, but not ne... (read more)

Thanks for posting. Upvoted.

I have always had an uncomfortable feeling whenever I have been asked to include distant-future generations in my utilitarian moral considerations. Intuitively, I draw on my background in economics, and tell myself that the far-distant future should be discounted toward zero weight. But how do I justify the discounting morally? Let me try to sketch an argument.

I will claim that my primary moral responsibility is to the people around me. I also have a lesser responsibility to the next generation, and a responsibility lesser ... (read more)

5timtyler
In nature, the best way you can help your great grand-kids is to help your children. If there was a way to help your grandchildren at the expense of your children that ultimately benefitted the grandchildren, nature might favour it - but usually there is simply no easy way to do that. Grandparents do sometimes favour more distant offspring in their wills - if they think the direct offspring are compromised or irresponsible, for example. Such behaviour is right and natural. Temporal discounting is a reflection of your ignorance and impotence when it comes to the distant future. It is not really that you fundamentally care less about the far future - it is more that you don't know and can't help - so investing mental resources would be rather pointless.
0Unknowns
According to Robin Hanson, our behavior proves that we don't care about the far future.
0timtyler
Robin argues that few are prepared to invest now to prevent future destruction of the planet. The conclusion there seems to be that humans are not utilitarian agents. Robin seems to claim that humans do not invest in order to pass things on to future generations - whereas in fact they do just that whenever they invest in their own offspring. Obviously you don't invest in your great-grandchildren directly. You invest in your offspring - they can manage your funds better than you can do from your wheelchair or grave. Temporal discounting makes sense. Organisms do it because they can't see or control the far future as well as their direct descendants can. In those rare cases where that is not true, direct descendants can sometimes be bypassed. However, you wouldn't want to build temporal discounting into the utility function of a machine intelligence. It knows its own prediction capabilities better than you do - and can figure out such things for itself. Since that exact point was made in the Eliezer essay Robin's post was a reply to, it isn't clear that Robin understands that.
1Kingreaper
I don't think you need any discounting. Your effect on the year 2012 is somewhat predictable. It is possible to choose a course of action based on known effects on the year 2012. Your effect on the year 3000 is unpredictable. You can't even begin to predict what effect your actions will have on the human race in the year 3000. Thus, there is an automatic discounting effect. An act is only as valuable as its expected outcome. The expected outcome on the year 1,000,000 is almost always ~zero, unless there is some near-future extinction possibility, because the probability of you having a desired impact is essentially zero.
1Dagon
I tend to agree, in that I also have a steep discount across time and distance (though I tend to think of it as "empathetic distance", more about perceived self-similarity than measurable time or distance, and I tend to think of weightings in my utility function rather than using the term "moral responsibility"). That said, it's worth asking just how steep a discount is justifiable - WHY do you think you're more responsible to a neighbor than to four of her great-grandchildren, and do you think this is the correct discount to apply? And even if you do think it's correct, remember to shut up and multiply. It's quite possible for there to be more than 35x as much sentience in 10 generations as there is today.
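
As a side note, here is a hedged back-of-the-envelope sketch of the multiplication Dagon is gesturing at, with assumed numbers only: if sentience grows by a factor g per generation and each generation is discounted by a factor d, later generations keep dominating the sum until d drops below 1/g. The growth rate of 1.43 is simply the value that yields roughly 35x over ten generations, matching the figure in the comment; nothing here is a prediction.

```python
# Illustrative only: per-generation discounting versus growth in total sentience.
growth = 1.43  # ~1.43**10 ≈ 35x as much sentience after 10 generations

for discount in (0.9, 0.7, 0.5):
    # weight of generation k = (growth * discount)**k; sum over 10 generations
    weighted_total = sum((growth * discount) ** k for k in range(10))
    print(f"discount {discount}: weighted sentience across 10 generations ≈ {weighted_total:.1f}")
```

With a per-generation discount of 0.9 the later generations still dominate; only once the discount is steeper than 1/1.43 ≈ 0.7 does the present generation clearly outweigh them, which is the sense in which a steep discount has to be argued for rather than assumed.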

Is anyone going to propose this as an answer to (what some say is) the Fermi paradox?

4[anonymous]
I thought people would be too bored of it to mention. I've heard it proposed dozens of times as a possible explanation. I should probably spend less time with philosophy majors. Anyway, the strong version of the statement is much more interesting. Not only do naturally evolved intelligences all have values that, for some reason or another, lead them to rather let it all end than endure existence, it also means that they never spawn AIs with values sufficiently radical to disagree. The mind space that encompasses is mind-boggling. Either it's hard for a civ to build an AI with truly alien values, or they go extinct before they can build AIs (a different argument), or they decide to kill themselves before doing so (odd), or nearly all possible minds agree nonexistence is good. We may have very, very weird minds if the last option is the answer.
[-][anonymous]30

what if it is more likely that maximizing utility not only fails but rather it turns out that the overall utility is minimized

If it does turn out that the overwhelmingly likely future is one of extreme negative utility, voluntary extinction (given some set of assumptions) IS maximizing utility.

Also, if the example really is as tangential as you're implying, it should probably not account for 95% of the text (and the title, and the links) in your post.

I cannot fathom the confusion that would lead to this question. Of course it's better for humanity to survive than to not survive. Of course it's better to go extinct in a million years than to go extinct now. The future is more wondrous and less scary than you imagine.

6komponisto
That only makes sense if you think life is always better than death. But that certainly isn't my view -- I think some possible futures are so bad that extinction would be preferable. In that case, the answer to the title question depends on the probabilities of such futures. EDIT: For the record, I don't think we need to resort to pulling the plug on ourselves anytime soon.
1[anonymous]
I don't think life is always better than death according to my utility function. I do however think that the most likely outcome, considering the priorities of the blind idiot god or perhaps even of self-described benevolent minds, is that the inhabitants of such spaces in the very long term are minds who are quite OK being there. On "benevolent" minds: if I knew beyond a doubt that something which I would consider hell exists and that everyone goes there after being resurrected on judgment day, and I also knew that it was very unlikely that I could stop everyone from ever being born or being resurrected, I would opt for trying to change or create people that would enjoy living in that hell.
-2[anonymous]
My question was not meant to be interpreted literally but was rather instrumental in highlighting the idea: what if it is more likely that maximizing utility not only fails, but that the overall utility is instead minimized, i.e. that the amount of suffering increases? Instrumentally, isn't it better to believe that winning is impossible, rather than that it's likely, if the actual probability is very low?
1Jonathan_Graehl
To decide to lose intentionally, I need to know how much it costs to try to win, what the odds of success are, and what the difference in utility is if I win. I feel like people weigh those factors unconsciously and automatically (using bounded resources and rarely with perfect knowledge or accuracy).
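
For what it's worth, here is a minimal sketch of the weighing Jonathan_Graehl describes, with placeholder numbers chosen only for illustration: the question "should I act as if winning is impossible?" is a statement about expected value, not about the raw probability.

```python
# Placeholder numbers only: is it worth trying to "win"?
def ev_of_trying(p_win, u_win, u_lose, cost):
    """Expected value of trying = p*u_win + (1 - p)*u_lose - cost_of_trying."""
    return p_win * u_win + (1 - p_win) * u_lose - cost

# A tiny win probability can still dominate if the prize is large enough...
print(ev_of_trying(p_win=1e-6, u_win=1e9, u_lose=-10.0, cost=100.0))   # ≈ +890
# ...but not if failure itself is catastrophic rather than merely disappointing.
print(ev_of_trying(p_win=1e-6, u_win=1e9, u_lose=-1e5, cost=100.0))    # ≈ -99,100
```

Whether the first line or the second is the better model of the post's scenario is exactly the disputed empirical question.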
[-]cata30

I think freedom should win when contemplating how to ethically shape the future. I don't have any direct evidence that posthumans in a post-Singularity universe will be "happy" throughout their lives in a way that we value, but you certainly don't have evidence to the contrary, either.

As long as neither of us know the exact outcome, I think the sensible thing to do is to maximize freedom, by trying to change technology and culture to unburden us and make us more capable. Then the future can decide for itself, instead of relying on you or I to w... (read more)

1[anonymous]
And there is a nearly infinite space of minds that would look at all life today and consider it better to have never existed. The minds most likely to live for long periods in a situation where we would judge them to be better off never having been born at all are either extremely unfree (no suicide) or are already adapted to consider it perfectly tolerable or perhaps even enjoyable.
1[anonymous]
I personally know a few people who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the time between 10^20 and 10^100 years, where possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer's, just much worse and longer, without any hope.
1cata
I agree, that sounds very depressing. However, I don't understand the minds, emotions, or culture of the entities that will exist then, and as such, I don't think it's ethical for me to decide in advance how bad it is. We don't kill seniors with Alzheimer's, because it's not up to us to judge whether their life is worth living or not. Plus, I just don't see the point in making a binding decision now about potential suffering in the far future, when we could make it N years from now. I don't see how suicide would be harder later, if it turns out to be actually rational (as long as we aim to maintain freedom.)
1[anonymous]
To pull the plug later could (1) be impossible, or (2) result in more death than it would now. However, I agree with you. It was not my intention to suggest we should abort humanity, but rather to inquire about the similarities to the abortion of a fetus that is predicted to suffer from severe disabilities in its possible future life. Further, my intention was to inquire about the perception that it is our moral responsibility to minimize suffering. If we cannot minimize it by actively shaping the universe, but rather risk increasing it, what should we do about it?
7cata
I don't really understand your greater argument. Inaction (e.g. sitting on Earth, not pursuing AI, not pursuing growth) is not morally neutral. By failing to act, we're risking suffering in various ways: insufficiency of resources on the planet, political and social problems, or a Singularity perpetrated by actors who are not acting in the interest of humanity's values. All of these could potentially result in the non-existence of all the future actors we're discussing. That's got to be first and foremost in any discussion of our moral responsibility toward them. We can't opt out of shaping the universe, so we ought to do as good a job as we can as per our values. The more powerful humanity is, the more options are open to us, and the better placed our descendants are to re-evaluate our choices and further steer our future.
0[anonymous]
The argument is about action. We forbid inbreeding because it causes suffering in future generations. Now, if there is no way that the larger future could be desirable, i.e. if suffering is prevailing, then I ask: how many entities have to suffer before we forbid humanity to seed the universe? What is your expected number of entities, born after 10^20 years, who'll face an increasing lack of resources until the end at around 10^100 years? All of them are doomed to face a future that might be shocking and undesirable. This is not a small part of the future but most of it. And what is there that speaks for our future ability to stop entropy?
6cata
If we can't stop entropy, then we can't stop entropy, but I still don't see why our descendants should be less able to deal with this fact than we are. We appreciate living regardless, and so may they. Surely posthuman entities living at the 10^20 year mark can figure out much more accurately than us whether it's ethical to continue to grow and/or have children at that point. As far as I can tell, the single real doomsday scenario here is, what if posthumans are no longer free to commit suicide, but they nevertheless continue to breed; heat death is inevitable, and life in a world with ever-decreasing resources is a fate worse than death. That would be pretty bad, but the first and last seem to me unlikely enough, and all four conditions are inscrutable enough from our limited perspective that I don't see a present concern.
[-][anonymous]20

I added to the post:

To exemplify this let's assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either the ruling FAI (friendly AI) is going to kill one entity or reduce the mental capabilities of all 100. This will continue until all of them are either killed or reduced to a shadow of their former self. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.

2Kingreaper
Gradually reducing mental processing speed as one approaches the universe's heat death (i.e. at a point where nothing else of interest is occurring), and dying painlessly, are analogous. Neither of those options is, in any sense, torture. They're just death. So I'm really not sure what you're getting at.

What if there isn't even enough fun for normal human beings to live up until an age of 150 and still have fun?

Really?

my intention is to inquire about the perception that it is our moral responsibility to minimize suffering

It's okay for us to cause more entities to exist, for a greater sum of suffering, provided that it's one of the better possible outcomes.

While intervening in a way that (with or without consent) inflicts a certain amount of additional net suffering on others (such as causing them to be created) is to be avoided, all other things e... (read more)

2[anonymous]
I updated the unreasonable age of 150 to 1000 in the OP. I was thinking about myself and how movies seem to become less interesting the more of them I watch, as the number of unique plots and the amount of novel content they expose continues to decrease. Thanks for your insightful comment.
7[anonymous]
I have at least a century of interesting math waiting for me that I will never get to. I feel really bad every time I think about that.
3cousin_it
Seconded. And more new interesting math seems to get created all the time. It's like drinking from the firehose.
[-][anonymous]20

I added to the post. Please read the "To clarify" addendum. Thank you.

This is why Buddhism is dangerous.

What do y'all think about John Smart's thesis that an inward turn is more likely than the traditional script of galactic colonization?

http://www.accelerating.org/articles/answeringfermiparadox.html

Rather wild read, but perhaps worth a thought. Would that alternative trajectory affect your opinion of the prospect, XiXiDu?

[-][anonymous]10

Interesting thoughts. I also haven't finished the fun sequence, so this may be malformed. The way I see it is this: You can explore and modify your environment for fun and profit (socializing counts here too), and you can modify your goals to get more fun and profit without changing your knowledge.

Future minds may simply have a "wirehead suicide contingency" they choose to abide by, by which, upon very, very strong evidence that they can have no more fun with their current goals, they could simply wirehead themselves. Plan it so that the value of... (read more)

Relevant literature: "Should This Be the Last Generation?" by Peter Singer

1[anonymous]
Great, thank you.
1timtyler
It's OK. That article discusses a whole book about the topic - and there is also "The Voluntary Human Extinction Movement": http://www.vhemt.org/
3[anonymous]
If any movement is dysgenic, that surely must be it. Let's see: people who are altruistic and in control of their instincts and emotions enough not to have children, in order to alleviate very distant future suffering (which, to top it all off, is a very, very abstract argument to begin with) - yeah, those are the kind who should stop having children first. Great plan. I first wanted to write "self-defeating", but I soon realized they may actually get their wish - but only if they convince enough of the people whose kids should be working on friendly AI in twenty-something years to not have the second or even the first one. And it won't leave the Earth to "nature" as they seem to be hoping.
[-][anonymous]10

I'd like to ask those people who downvoted this post for their reasons. I thought this was a reasonable antiprediction to the claims made regarding the value of a future galactic civilisation. Based on economic and scientific evidence it is reasonable to assume that the better part of the future, namely the time from 10^20 to 10^100 years (and beyond), will be undesirable.

If you spend money and resources on the altruistic effort of trying to give birth to this imaginative galactic civilisation, why don't you take into account the more distant and much lar... (read more)

5Emile
I didn't downvote the post - it is thought-provoking, though I don't agree with it. But I had a negative reaction to the title (which seems borderline deliberately provocative to attract attention), and the disclaimer - as thomblake said, "Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading."
-1[anonymous]
It is the disclaimer. I was rather annoyed at all the comments on my other post. People claimed things that, to my understanding, I never said. And if what I said were analyzed, I'm sure nobody could show me how to arrive at such conclusions. As was obvious, not even EY read my post; he simply took something out of context and ran with it.
3Vladimir_Nesov
The future is the stuff you build goodness out of. The properties of the stuff don't matter; what matters is the quality and direction of the decisions made about arranging it properly. If you suggest a plan with obvious catastrophic problems, chances are it's not what will actually be chosen by rational agents (that, or your analysis is incorrect).
0[anonymous]
The analysis is incorrect? Well, ask the physicists.
-1Vladimir_Nesov
Moral analysis.
-3[anonymous]
Yes, I think so too. But I haven't seen any good arguments against Negative utilitarianism in the comments yet. (More here)
0Vladimir_Nesov
You lost the context. Try not to drift.
2Wei Dai
Is this really worth your time (or Carl Shulman's)? Surely you guys have better things to do?
1[anonymous]
If you tell me where my argumentation differs from arguments like this, I'll know if it is a waste or not. I can't figure it out.
0[anonymous]
Since XiXiDu's and multifoliaterose's posts have all been made during the Singularity Summit, when everyone at SIAI is otherwise occupied and so cannot respond, I thought someone familiar with the issues should engage rather than leave a misleading appearance of silence. And giving a bit of advice that I think has a good chance of improving XiXiDu's contributions seemed reasonable and not too costly.
1[anonymous]
There is not enough stuff to sustain a galactic civilization for very long (relative to the expected time over which the universe can sustain intelligence). There is no way to alter the quality or direction of the fundamental outcome to overcome this problem (given what we know right now). That's what I am inquiring about: is it rational, given that we adopt a strategy of minimizing suffering? Or are we going to create trillions to have fun for a relatively short period and then have them suffering, or committing suicide, for a much longer period?
2Dagon
It's a worthwhile question, but probably fits better on an open thread for the first round or two of comments, so you can refine the question to a specific proposal or core disagreement/question. My first response to what I think you're asking is that this question applies to you as an individual just as much as it does to humans (or human-like intelligences) as a group. There is a risk of sadness and torture in your future. Why keep living?
1Kingreaper
I don't believe that is a reasonable prediction. You're dealing with timescales so far beyond human lifespans that assuming they will never think of the things you think of is entirely implausible. In this horrendous future of yours, why do people keep reproducing? Why don't the last viable generation (knowing they're the last viable generation) cease reproduction? If you think that this future civilisation will be incapable of understanding the concepts you're trying to convey, what makes you think we will understand them?
1[anonymous]
It is not about reproduction but about the fact that at that time there'll already be many more entities than ever before. And they all will have to die. Now only a few will have to die or suffer. And it is not my future. It's much more based on evidence than the near-term future talked about on LW.
0Kingreaper
Ah, I get it now, you believe that all life is necessarily a net negative. That existing is less of a good than dying is of a bad. I disagree, and I suspect almost everyone else here does too. You'll have to provide some justification for that belief if you wish us to adopt it.
1Baughn
I'm not sure I disagree, but I'm also not sure that dying is a necessity. We don't understand physics yet, much less consciousness; it's too early to assume it as a certainty, which means I have a significantly nonzero confidence of life being an infinite good.
3ata
Doesn't that make most expected utility calculations make no sense?
0Baughn
A problem with the math, not with reality. There are all kinds of mathematical tricks to deal with infinite quantities. Renormalization is something you'd be familiar with from physics; from my own CS background, I've got asymptotic analysis (which can't see the fine details, but easily can handle large ones). Even something as simple as taking the derivative of your utility function would often be enough to tell which alternative is best. I've also got a significantly nonzero confidence of infinite negative utility, mind you. Life isn't all roses.
0[anonymous]
We already donate based on the assumption that superhuman AI is possible and that it is right to base our decisions on the extrapolated utility of it and of a possible galactic civilisation. Why are we not able to make decisions based on a more evidence-based economic and physical assumption of a universe that is unable to sustain a galactic civilisation for most of its lifespan, and on the extrapolated suffering that is a conclusion of this prediction?
1Baughn
Well, first off.. What kind of decisions were you planning to take? You surely wouldn't want to make a "friendly AI" that's hardcoded to wipe out humanity; you'd expect it to come to the conclusion that that's the best option by itself, based on CEV. I'd want it to explain its reasoning in detail, but I might even go along with that. My argument is that it's too early to take any decisions at all. We're still in the data collection phase, and the state of reality is such that I wouldn't trust anything but a superintelligence to be right about the consequences of our various options anyway. We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.
0[anonymous]
True, I have to read up on CEV and see if there is a possibility that a friendly AI could decide to kill us all to reduce suffering in the long term. The whole idea in the OP stems from the kind of negative utilitarianism that suggests it is not worth torturing 100 people infinitely to make billions happy. So I thought I would extrapolate this and ask: what if we figure out that in the long run most entities will be suffering?
0Baughn
Negative utilitarianism is... interesting, but I'm pretty sure it holds an immediate requirement to collectively commit suicide no matter what (short of continued existence, inevitably(?) ended by death, possibly being less bad than suicide, which seems unlikely) - am I wrong? That's not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.
0neq1
In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)
1[anonymous]
That I get bad karma here is completely biased, in my opinion. People just don't realize that I'm basing extrapolated conclusions on some shaky premises, just like LW does all the time when talking about the future galactic civilization and risks from AI. The difference is, my predictions are much more based on evidence. It's a mockery of all that is wrong with this community. I already thought I'd get bad karma for my other post but was surprised not to. I'll probably get really bad karma now that I say this. Oh well :-) To be clear, this is a thought experiment about asking what we can and should do if we are ultimately prone to cause more suffering than happiness. It's nothing more than that. People suspect that I'm making strong arguments, that it is my opinion, that I'm asking for action. Which is all wrong; I'm not the SIAI. I can argue for things I don't support and don't even think are sound.

Note that multifoliaterose's recent posts and comments have been highly upvoted: he's gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.

-1[anonymous]
You are one of the few people here whose opinion I'm actually taking seriously, after many insightful and polite comments. What is the bone of contention in the OP? I took a few different ingredients: Robin Hanson's argumentation about resource problems in the far future (the economic argument); questions based on negative utilitarianism (the ethical argument); the most probable fate of the universe given current data (the basic premise) -- then I extrapolated from there and created an antiprediction. That is, I said that it is too unlikely that the outcome will be good to believe that it is possible. Our responsibility is to prevent a lot of suffering over 10^100 years. I never said I support this conclusion or think that it is sound. But I think it is very similar to other arguments within this community.
8CarlShulman
On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which "reads" angry, and doesn't fit with norms of politeness and discourse here). Substantively, I'll consider the major pieces individually. The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar's arguments and made your points more explicit, but instead simply stated the conclusion. The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline was lacking in supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce's Hedonistic Imperative is relevant here: with access to self-modification capacities entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals. For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the "lifeboat ethics" scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer's doesn't work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cog
-1[anonymous]
I copied that sentence from here (last sentence). Thanks, I'll quit making top-level posts, as I doubt I'll ever be able to exhibit the attitude required for the level of thought and elaboration you demand. That was actually my opinion before making the last and the first post. But all this, in my opinion, laughable attitude around Roko's post made me sufficiently annoyed to signal my incredulity. ETA: The SIAI = What If?
5Kevin
I think you should probably read more of the Less Wrong sequences before you make more top level posts. Most of the highly upvoted posts are by people that have the knowledge background from the sequences.
1[anonymous]
I'm talking about these kinds of statements: http://www.vimeo.com/8586168 (5:45) "If you confront it rationally, full on, then you can't really justify trading off any part of galactic civilization for anything that you could get nowadays." So why, I ask you directly, am I not allowed to argue that we can't really justify balancing the happiness and utility of a galactic civilization against the MUCH longer time of decay? There is this whole argument about how we have to give rise to the galactic civilization and have to survive now. But I predict that suffering will prevail. That it is too unlikely that the outcome will be positive. What is wrong with that?
[-][anonymous]10

First a few minor things I would like to get out there:

We are, according to a consensus which I do not dispute (since it's well founded), slowly approaching heat death. If I recall correctly we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn through a few cycles of a simulated universe?

I don't quite see the difference between real and simulated torture... (read more)

2humpolec
Dyson's eternal intelligence. Unfortunately I know next to nothing about physics so I have no idea how this is related to what we know about the universe.
2Baughn
It runs into edge conditions we know little about; like, are protons stable or not. (The answer appears to be no, by the way.) At this point in time I would not expect to be able to do infinite computation in the future. The future has a way of surprising, though; I'd prefer to wait and see.
1[anonymous]
I tried to highlight the increased period of time you have to take into account. This allows for even more suffering than the already huge time span implies from a human perspective. Indeed, but I felt this additional post was required, as many people were questioning this point in the other post. Also, I came across a post by a physicist which triggered this one. I simply have my doubts that the sequence you mention has resolved this issue, but I will read it of course. Mine too. I would never recommend giving up. I want to see the last light shine. But I perceive many people here to be focused on the amount of possible suffering, so I thought I would inquire about what they would recommend if it is more likely that the overall suffering will increase. Would they rather pull the plug?

On balance I'm not too happy with the history of existence. As Douglas Adams wrote, "In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move." I'd rather not be here myself, so I find the creation of other sentients a morally questionable act. On the other hand, artificial intelligence offers a theoretical way out of this mess. Worries about ennui strike me as deeply misguided. Oppression, frailty, and stupidity make hanging out in this world unpleasant, not any lack of worth... (read more)