steven

Eliezer, "more AIs are in the hurting class than in the disassembling class" is a distinct claim from "more AIs are in the hurting class than in the successful class", which is the one I interpreted Yvain as attributing to you.

steven

Nick, I'm now sitting here being inappropriately amused at the idea of Hal Finney as Dark Lord of the Matrix.

Eliezer, thanks for responding to that. I'm never sure how much to bring up this sort of morbid stuff. I agree as to what the question is.

Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.

It was Vladimir who pointed that out; I just said it doesn't apply to egoists. I actually don't agree that it applies to altruists either; presumably almost anything that cared that much about torturing newly created people would also use cryonauts for raw materials. Also, maybe there are "people who are still alive" considerations.

steven

Does nobody want to address the "how do we know U(utopia) - U(oblivion) is of the same order of magnitude as U(oblivion) - U(dystopia)" argument? (I hesitate to bring this up in the context of cryonics, because it applies to a lot of other things and because people might be more emotionally motivated than usual to argue for the conclusion that supports their cryonics opinion, but you guys are better than that, right? right?)

Carl, I believe the point is that until I know of a specific argument why one is more likely than the other, I have no choice but to set the probability of Christianity equal to the probability of anti-Christianity, even though I don't doubt such arguments exist. (Both irrationality-punishers and immorality-punishers seem far less unlikely than non-Christianity-punishers, so it's moot as far as I can tell.)

Vladimir, your argument doesn't apply to moralities with an egoist component of some sort, which is surely what we were discussing even though I'd agree they can't be justified philosophically.

I stand by all the arguments I gave against Pascal's wager in the comments to Utilitarian's post, I think.

steven

Vladimir, hell is only one bit away from heaven (a minus sign in the utility function). I would hope, though, that any prospective heaven-instigators can find ways to somehow be intrinsically safe with respect to this problem.

steven

There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

Expected utility is the product of two things, probability and utility. Saying the probability is smaller is not a complete argument.
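As a purely illustrative sketch of that decomposition (the probabilities and utilities below are hypothetical placeholders, not anyone's estimates): a less probable outcome can still dominate the expected-utility sum if its utility magnitude is large enough.

```python
# Hypothetical outcomes: (label, probability, utility with oblivion taken as 0).
# The numbers are placeholders chosen only to illustrate the structure of the
# argument, not estimates of anything.
outcomes = [
    ("revived into utopia",   0.05,    1.0),
    ("revived into dystopia", 0.001, -100.0),  # far less probable, far worse
]

expected_utility = sum(p * u for _, p, u in outcomes)
for label, p, u in outcomes:
    print(f"{label:22s} contributes {p * u:+.3f}")
print(f"total expected utility:  {expected_utility:+.3f}")
```

With these placeholder numbers the rarer outcome contributes twice the magnitude of the likelier one, which is the sense in which "the probability is smaller" settles nothing until the utility gaps are compared as well.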

steven

The Superhappies can expand very quickly in principle, but it's not clear that they're doing so.

We (or "they" rather; I can't identify with your fanatically masochistic humans) should have made that part of the deal, then. Also, exponential growth quickly swamps any reasonable probability penalty (see the sketch below).

I'm probably missing something, but like others I don't get why the Superhappies implemented part of the Babyeater morality if negotiations failed.
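A toy illustration of the exponential-growth point above (all figures hypothetical): even a very steep probability penalty on the expanding-population scenario is overtaken after a modest number of doublings.

```python
# Hypothetical figures only: a population that doubles each period, weighted
# by a steep probability penalty, versus a large static population at full weight.
probability_penalty = 1e-6    # hypothetical discount on the exponential scenario
static_population = 1e12      # hypothetical static population, counted at full weight
growing_population = 1.0      # the exponentially expanding side starts tiny

for doublings in range(1, 101):
    growing_population *= 2
    if probability_penalty * growing_population > static_population:
        print(f"After {doublings} doublings the penalized exponential term dominates.")
        break
```

Under these placeholder numbers, sixty doublings suffice for the discounted exponential term to outweigh a trillion-strong static population, which is roughly what "exponential growth quickly swamps any reasonable probability penalty" cashes out to.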

steven

Shutting up and multiplying suggests that we should neglect all effects except those on the exponentially more powerful species.

steven

Peter, destroying Huygens isn't obviously the best way to defect, as in that scenario the Superhappies won't create art and humor or give us their tech.

steven

If they're going to play the game of Chicken, then symbolically speaking the Confessor should perhaps stun himself to help commit the ship to sufficient insanity to go through with destroying the solar system.

steven

Well... would you prefer a life entirely free of pain and sorrow, having sex all day long?

False dilemma.
