Strange7 comments on The Fundamental Question - Less Wrong

Post author: MBlume, 19 April 2010 04:09PM

Comment author: PeerInfinity 25 April 2010 08:20:17PM 3 points

(edit: The version of utilitarianism I'm talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don't bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)

I totally agree!!!

Astronomical waste is bad! (or at least, severely suboptimal)

Wild-animal suffering is bad! (no, there is nothing "sacred" or "beautiful" about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)

Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, "This way is more fun", or "This way would generate a wider variety of possible outcomes" are not acceptable answers, at least not according to utilitarianism.)

Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

I also agree with your concerns about CEV.

Though of course we're talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can't explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid.

Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it's a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn't agree with us anymore, even though some of his earlier writing implied that he did. (I still can't get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, and a few other minor aesthetic preferences. Yeah, I'm so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)

Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don't see how this could be possible, but maybe that's just a result of my own ignorance. And then there's the extreme difficulty of actually implementing CEV...

And no, I still don't claim to have a better plan. And I'm not at all comfortable with advocating the creation of a purely Utilitarian AI.

Your plan of trying to spread good memes before the CEV extrapolates everyone's volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should already be part of the CEV extrapolation. And if this process can't be incorporated into CEV, then I suspect any other possible strategy must involve cheating somehow.

Oh, I had another conversation recently on the topic of whether it's possible to convince a rational agent to change its core values through rational discussion alone. I may be misinterpreting this, but I think we actually agreed on the conclusion without noticing it at the time. The conclusion was that if an agent's core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent's core values are internally inconsistent, then neither agent can convince the other without cheating. There's also the option of trading utilons with the other agent, but that's not the same as changing the other agent's values.

Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)

Anyway, if this is the case, then the CEV algorithm will end up producing the outcome you wanted: an end to all suffering, and some form of utilitronium shockwave.

Oh, and I should point out that the utilitronium shockwave doesn't actually require the murder of everyone now living. Surely even we hardcore utilitarians can afford to leave one planet's worth of computronium for the people now living. Or one solar system's worth. Or one galaxy's worth. It's a big universe, after all.

Oh, and if it turns out that some people's value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend... then maybe we could even afford to leave their brains unmodified. Just so long as they don't force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they're allowed to create... is kinda complicated and controversial.

Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

Oh, and maybe there should also be rules against creating a mind that's forced to be wireheaded. There will be some complex and controversial issues involved in designing the optimally efficient form of utilitronium that doesn't involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, anyone who wants to retreat entirely into solipsism can run their own experiments on which experiences generate the most utility. There's no need to fill the whole universe with boring, uniform bricks of utilitronium containing minds that consist entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium anyway, so why not let actual people do the research? And why not let them do it on their own minds?

I've been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I'm no good at writing. Actually, that story I just linked to is an example of this scenario going bad...

Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I'm still not confident enough about this scenario to advocate it too seriously.

Comment author: Strange7 27 April 2010 02:48:43AM 1 point

> Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

You could also almost certainly convert a considerable percentage of the planet's mass to computronium without impacting the planet's ability to support life. A planet isn't a very mass-efficient habitat, and I doubt many people would even notice if most of the core was removed, provided it was replaced with something structurally and electrodynamically equivalent.

Comment author: NancyLebovitz 27 April 2010 09:04:53AM 2 points

You need the mass of the core to maintain the gravity. What sort of physics do you have in mind?

Comment author: Strange7 27 April 2010 08:07:14PM 2 points

If computronium has a density equal to or greater than iron's, physics wouldn't need to be changed. Remove the core, replace it with an equally massive, roughly spherical wad of perfected brain-matter, plus whatever structural supports are necessary to keep the crust in place, and Newton's Shell Theorem says surface gravity would be the same. Add some electromagnets for the poles, and channel waste heat from the mechanisms inside to simulate volcanism where appropriate.
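
A rough back-of-the-envelope sketch in Python (the Earth and core figures are approximate published values, and the computronium density is a purely hypothetical assumption) of why the swap leaves surface gravity untouched:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.97e24  # Earth's total mass, kg (approximate)
M_CORE = 1.9e24    # mass of Earth's core, kg (approximate)
R_CORE = 3.48e6    # radius of Earth's core, m (approximate)
R_EARTH = 6.371e6  # Earth's mean radius, m

# Shell theorem: outside a spherically symmetric mass distribution,
# gravity depends only on the total enclosed mass, g = G*M/r^2.
# Swapping the core for an EQUAL mass of computronium leaves the
# enclosed mass at the surface unchanged, so g is unchanged too.
def surface_gravity(enclosed_mass, radius):
    return G * enclosed_mass / radius**2

print(surface_gravity(M_EARTH, R_EARTH))  # ~9.8 m/s^2, before and after

# If computronium is denser than the core's average (~1.1e4 kg/m^3),
# the same mass fits in a smaller ball, leaving room for supports.
RHO_COMPUTRONIUM = 2.0e4  # hypothetical density, kg/m^3
r_ball = (3 * M_CORE / (4 * math.pi * RHO_COMPUTRONIUM)) ** (1 / 3)
print(r_ball / R_CORE)  # ~0.81 of the original core radius
```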

Even if computronium turns out to have a lower density than iron, and for whatever reason it's unacceptable to reduce surface gravity or transplant the luddites to an otherwise earthlike planet of correspondingly greater diameter, some of the core's mass could be converted and the remainder compressed into a black hole. Again, the shell theorem means there's no difference from the outside.
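
For scale, here's a similar sketch of how small that leftover black hole would be; the fifty-fifty split between converted and compressed mass is a made-up assumption, just for illustration:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

# Schwarzschild radius: the size a given mass has as a black hole.
def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

# Suppose (hypothetically) half the core's ~1.9e24 kg becomes
# computronium and the other half is compressed into a black hole
# to keep the total enclosed mass, and hence surface gravity, fixed:
leftover = 0.5 * 1.9e24
print(schwarzschild_radius(leftover))  # ~1.4e-3 m: a few millimetres
```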

Comment author: PeerInfinity 27 April 2010 04:50:46AM 1 point

Good point, thanks for mentioning that.

Heh, that's actually what I meant by leaving the planet "mostly intact", but I should have made that clearer.