neq1 comments on Should humanity give birth to a galactic civilization? - Less Wrong

-6 [deleted] 17 August 2010 01:07PM




Comment author: neq1 18 August 2010 12:32:00AM 0 points

In my opinion, the post doesn't warrant -90 karma points. That's pretty harsh. I think you have plenty to contribute to this site -- I hope the negative karma doesn't discourage you from participating, but instead encourages you to refine your arguments (perhaps by getting feedback in the open thread first?)

Comment deleted 18 August 2010 09:08:14AM *
Comment author: CarlShulman 18 August 2010 09:34:39AM * 12 points

Note that multifoliaterose's recent posts and comments have been highly upvoted: he's gained over 500 karma in a few days for criticizing SIAI. I think the reason is that they were well-written, well-informed, and polite while making strong criticisms through careful argument. If you raise the quality of your posts, I expect you will find the situation changing.

Comment deleted 18 August 2010 09:47:19AM
Comment author: CarlShulman 18 August 2010 10:27:25AM * 6 points

On a thematic/presentation level, I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one of them. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which "reads" as angry, and doesn't fit the norms of politeness and discourse here).

Substantively, I'll consider the major pieces individually.

The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar's arguments and made your points explicit, but instead you simply stated the conclusion.

The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline lacked supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce's Hedonistic Imperative is relevant here: with access to self-modification capacities, entities could remain at steadily high levels of happiness while staying motivated to improve their situations and realize their goals.

For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the "lifeboat ethics" scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer's doesn't work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.

In several places throughout the post you use "what if" language without taking the time to present sufficient arguments for plausibility, which is a rationalist faux pas.

Edit: I misread the "likely" in this sentence and mistakenly objected to it.

Might it be better to believe that winning is impossible, than that it's likely, if the actual probability is very low?

Comment deleted 18 August 2010 10:41:31AM *
Comment author: Kevin 18 August 2010 09:15:56AM 3 points

I think you should probably read more of the Less Wrong sequences before you make more top-level posts. Most of the highly upvoted posts are by people who have the knowledge background from the sequences.