XiXiDu comments on Should humanity give birth to a galactic civilization? - Less Wrong

-6 [deleted] 17 August 2010 01:07PM


Comment deleted 17 August 2010 03:29:10PM [-]
Comment author: Emile 17 August 2010 03:55:44PM 4 points [-]

I didn't downvote the post - it is thought-provoking, though I don't agree with it.

But I had a negative reaction to the title (which seems borderline deliberately provocative to attract attention), and the disclaimer - as thomblake said, "Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading."

Comment author: Vladimir_Nesov 18 August 2010 12:41:38AM *  3 points [-]

Future is the stuff you build goodness out of. The properties of stuff don't matter, what matters is the quality and direction of decisions made about arranging it properly. If you suggest a plan with obvious catastrophic problems, chances are it's not what will be actually chosen by rational agents (that or your analysis is incorrect).

Comment deleted 18 August 2010 09:12:22AM [-]
Comment author: Vladimir_Nesov 18 August 2010 09:30:22AM 0 points [-]

Moral analysis.

Comment deleted 18 August 2010 09:51:09AM [-]
Comment author: Vladimir_Nesov 18 August 2010 10:05:33AM *  0 points [-]

You lost the context. Try not to drift.

Comment author: Wei_Dai 18 August 2010 10:14:41AM 2 points [-]

Is this really worth your time (or Carl Shulman's)? Surely you guys have better things to do?

Comment author: Dagon 17 August 2010 06:21:00PM 2 points [-]

It's a worthwhile question, but probably fits better on an open thread for the first round or two of comments, so you can refine the question to a specific proposal or core disagreement/question.

My first response to what I think you're asking is that this question applies to you as an individual just as much as it does to humans (or human-like intelligences) as a group. There is a risk of sadness and torture in your future. Why keep living?

Comment author: Kingreaper 18 August 2010 12:13:05AM *  1 point [-]

I thought this was a reasonable antiprediction to the claims made regarding the value of a future galactic civilisation. Based on economic and scientific evidence it is reasonable to assume that the better part of the future, namely the time from 10^20 to 10^100 years (and beyond), will be undesirable.

I don't believe that is a reasonable prediction. You're dealing with timescales so far beyond human lifespans that assuming they will never think of the things you think of is entirely implausible.

In this horrendous future of yours, why do people keep reproducing? Why don't the last viable generation (knowing they're the last viable generation) cease reproduction?

If you think that this future civilisation will be incapable of understanding the concepts you're trying to convey, what makes you think we will understand them?

Comment deleted 18 August 2010 09:00:58AM [-]
Comment author: Kingreaper 18 August 2010 11:15:10AM *  0 points [-]

Ah, I get it now, you believe that all life is necessarily a net negative. That existing is less of a good than dying is of a bad.

I disagree, and I suspect almost everyone else here does too. You'll have to provide some justification for that belief if you wish us to adopt it.

Comment author: Baughn 18 August 2010 03:48:12PM *  0 points [-]

I'm not sure I disagree, but I'm also not sure that dying is a necessity. We don't understand physics yet, much less consciousness; it's too early to assume it as a certainty, which means I have a significantly nonzero confidence of life being an infinite good.

Comment author: ata 18 August 2010 03:52:15PM *  2 points [-]

I have a significantly nonzero confidence of life being an infinite good.

Doesn't that make most expected utility calculations make no sense?

Comment author: Baughn 18 August 2010 04:02:45PM 0 points [-]

A problem with the math, not with reality.

There are all kinds of mathematical tricks for dealing with infinite quantities. Renormalization is something you'd be familiar with from physics; from my own CS background, I've got asymptotic analysis (which can't see the fine details, but can easily handle large ones). Even something as simple as taking the derivative of your utility function would often be enough to tell which alternative is best.
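The asymptotic-analysis idea above can be sketched concretely. The following is a minimal illustration, not anything from the thread: the utility streams and function names are hypothetical, and the point is only that two streams whose infinite totals are both divergent can still be ranked by comparing growth at a large finite horizon.

```python
# Hypothetical sketch: ranking "infinite" utility streams by asymptotic
# growth rather than by their (divergent) totals. All names and utility
# functions here are illustrative assumptions.

def partial_sum(utility_per_step, n):
    """Sum the first n steps of a per-step utility stream."""
    return sum(utility_per_step(t) for t in range(1, n + 1))

def asymptotically_better(u_a, u_b, horizon=10_000):
    """True if stream A dominates stream B at a large finite horizon.

    Both totals may diverge to infinity, so instead of comparing the
    (undefined) infinite sums, we compare how fast they grow.
    """
    return partial_sum(u_a, horizon) > partial_sum(u_b, horizon)

# Two streams whose infinite totals are both infinite:
linear_growth = lambda t: 1.0      # partial sums grow like n
log_growth = lambda t: 1.0 / t     # partial sums grow like ln(n), still divergent

print(asymptotically_better(linear_growth, log_growth))  # → True
```

At horizon 10,000 the linear stream has accumulated 10,000 units while the harmonic stream has accumulated only about 9.8, so the comparison is stable for any large horizon, even though naive "infinity vs. infinity" comparison tells you nothing.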

I've also got a significantly nonzero confidence of infinite negative utility, mind you. Life isn't all roses.

Comment deleted 18 August 2010 03:53:47PM *  [-]
Comment author: Baughn 18 August 2010 04:06:32PM *  1 point [-]

Well, first off...

What kind of decisions were you planning to take? You surely wouldn't want to make a "friendly AI" that's hardcoded to wipe out humanity; you'd expect it to come to the conclusion that that's the best option by itself, based on CEV. I'd want it to explain its reasoning in detail, but I might even go along with that.

My argument is that it's too early to take any decisions at all. We're still in the data collection phase, and the state of reality is such that I wouldn't trust anything but a superintelligence to be right about the consequences of our various options anyway.

We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.

Comment deleted 18 August 2010 04:11:38PM [-]
Comment author: Baughn 18 August 2010 04:17:28PM 0 points [-]

Negative utilitarianism is... interesting, but I'm pretty sure it entails an immediate requirement to collectively commit suicide no matter what (short of continued existence, inevitably(?) ended by death, possibly being less bad than suicide, which seems unlikely) -- am I wrong?

That's not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.

Comment author: neq1 18 August 2010 12:32:00AM 0 points [-]

In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)

Comment deleted 18 August 2010 09:08:14AM *  [-]
Comment author: CarlShulman 18 August 2010 09:34:39AM *  12 points [-]

Note that multifoliaterose's recent posts and comments have been highly upvoted: he's gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.

Comment deleted 18 August 2010 09:47:19AM [-]
Comment author: CarlShulman 18 August 2010 10:27:25AM *  6 points [-]

On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which "reads" angry, and doesn't fit with norms of politeness and discourse here).

Substantively, I'll consider the major pieces individually.

The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar's arguments and made your points more explicit, but instead simply stated the conclusion.

The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline was lacking in supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce's Hedonistic Imperative is relevant here: with access to self-modification capacities entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals.

For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the "lifeboat ethics" scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer's doesn't work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.

In several places throughout the post you use "what if" language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.

Edit: I misread the "likely" in this sentence and mistakenly objected to it.

Might it be better to believe that winning is impossible, than that it's likely, if the actual probability is very low?

Comment deleted 18 August 2010 10:41:31AM *  [-]
Comment author: Kevin 18 August 2010 09:15:56AM 3 points [-]

I think you should probably read more of the Less Wrong sequences before you make more top-level posts. Most of the highly upvoted posts are by people who have the knowledge background from the sequences.