Baughn comments on Should humanity give birth to a galactic civilization? - Less Wrong

-6 [deleted] 17 August 2010 01:07PM




Comment author: Baughn 18 August 2010 04:06:32PM *  1 point

Well, first off...

What kind of decisions were you planning to take? You surely wouldn't want to make a "friendly AI" that's hardcoded to wipe out humanity; you'd expect it to come to the conclusion that that's the best option by itself, based on CEV. I'd want it to explain its reasoning in detail, but I might even go along with that.

My argument is that it's too early to take any decisions at all. We're still in the data collection phase, and the state of reality is such that I wouldn't trust anything but a superintelligence to be right about the consequences of our various options anyway.

We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.

Comment deleted 18 August 2010 04:11:38PM
Comment author: Baughn 18 August 2010 04:17:28PM 0 points

Negative utilitarianism is... interesting, but I'm pretty sure it implies an immediate requirement to collectively commit suicide no matter what (unless continued existence, inevitably(?) ended by death, is somehow less bad than suicide, which seems unlikely) - am I wrong?

That's not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.