thomblake comments on The Fundamental Question - Less Wrong

43 points · Post author: MBlume, 19 April 2010 04:09PM

Comment author: PeerInfinity 25 April 2010 08:20:17PM *  3 points

(edit: The version of utilitarianism I'm talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don't bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)

I totally agree!!!

Astronomical waste is bad! (or at least, severely suboptimal)

Wild-animal suffering is bad! (no, there is nothing "sacred" or "beautiful" about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)

Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, "This way is more fun", or "This way would generate a wider variety of possible outcomes" are not acceptable answers, at least not according to utilitarianism.)

Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

I also agree with your concerns about CEV.

Though of course we're talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can't explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid.

Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it's a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn't even agree with us anymore, even though some of his previous writing implied that he did. (I still can't get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, along with a few other minor aesthetic preferences. Yeah, I'm so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)

Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don't see how this could be possible, but maybe that's just a result of my own ignorance. And then there's the extreme difficulty of actually implementing CEV...

And no, I still don't claim to have a better plan. And I'm not at all comfortable with advocating the creation of a purely Utilitarian AI.

Your plan of trying to spread good memes before the CEV extrapolates everyone's volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation. I suspect that if you can't incorporate this process into CEV, then any other possible strategy must involve cheating somehow.

Oh, I had another conversation recently on the topic of whether it's possible to convince a rational agent to change its core values through rational discussion alone. I may be misinterpreting this, but I think the conversation was inconclusive. The other person believed that... er, wait, I think we actually agreed on the conclusion, but didn't notice at the time. The conclusion was that if an agent's core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent has internally inconsistent core values, then neither agent can convince the other, without cheating. There's also the option of trading utilons with the other agent, but that's not the same as changing the other agent's values.

Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)

Anyway, if this is the case, then the CEV algorithm will end up producing the outcome that you wanted. Specifically, an end to all suffering, and some form of utilitronium shockwave.

Oh, and I should point out that the utilitronium shockwave doesn't actually require the murder of everyone now living. Surely even we hardcore utilitarians should be able to afford to leave one planet's worth of computronium for the people now living. Or one solar system's worth. Or one galaxy's worth. It's a big universe, after all.

Oh, and if it turns out that some people's value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend... then maybe we could even afford to leave their brains unmodified. Just so long as they don't force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they're allowed to create... is kinda complicated and controversial.

Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

Oh, and maybe there should also be rules against creating a mind that's forced to be wireheaded. There will be some complex and controversial issues involved in designing the optimally efficient form of utilitronium that doesn't involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, let anyone who wants to retreat entirely into solipsism run their own experiments on what experiences generate the most utility. There's no need to fill the whole universe with boring, uniform bricks of utilitronium containing minds that consist entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium that you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium, so why not let actual people do the research? And why not let them do the research on their own minds?

I've been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I'm no good at writing. Actually, that story I just linked to is an example of this scenario going bad...

Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I'm still not confident enough about this scenario to advocate it too seriously.

Comment author: thomblake 27 April 2010 01:40:52PM 5 points

Your comments are tending to be a bit too long.

Comment author: PeerInfinity 27 April 2010 02:13:57PM *  1 point

Thanks for the feedback. I kinda suspected that my comments were too long.

So, um... what would you prefer for me to do instead?

  • split them into multiple comments?
  • post them somewhere else (the Transhumanist Wiki?) and link to them from here?
  • refrain from posting the long comments entirely?
  • find some way to cut them down?
  • stick to a single topic per comment, and create multiple comments if I want to discuss multiple topics?
  • wait longer between posting these comments?
  • something else I haven't thought of?

Comment author: thomblake 27 April 2010 02:29:20PM *  1 point

Yes, to various extents. (I should have been more helpful in the grandparent comment.)

I think the main problem is you seem to have a "stream of consciousness" style of writing. If you add an editing step afterward (I'm just assuming you're not doing much of this now), then you can figure out which points are most important to make and put them succinctly.

The advantage of this, from a utilitarian point of view, is that you can spend less time editing than it would otherwise take each reader to figure out what you're trying to say, and thus produce a net benefit across lots of people.

(ETA: note that the great-grandparent comment seems less subject to this particular criticism than some others)

Comment author: PeerInfinity 27 April 2010 04:02:25PM 3 points

Thanks again for the feedback.

As I was writing the following points, I noticed that I was just making excuses. Instead of deleting them, I left them in and commented on them, because they felt important and relevant.

  • I was already aware of the utilitarian argument that it's worth 1 minute of effort at rewriting in order to save 60 people one second each at reading, and I am making at least some attempt to do that. (correction: no, I didn't actually do the math. I should at least try to do the math; see the rough break-even sketch after this list.)

  • I already spend lots of time reviewing my comments before I post them. I don't post them until I scan through them once without noticing anything wrong. (correction: no, lately I've been posting them before I complete a full scan without finding any new issues, and I've been fixing some things by editing the comments after posting them. I should be more strict about following this rule. And as I mention below, I should add new issues to the list of things to scan for.)

  • Normally I have the opposite problem: I spend way too much time reviewing what I wrote, so other important things never get said because I never get around to writing the next thing. (correction: this will probably become less of an issue now that I've finished writing all of these "about me" comments.)

  • It usually feels like there's a sense of urgency, that if I take too long to write a reply, then everyone will have moved on to other topics, and no one will end up reading my comment. (correction: sometimes there is a reason to post stuff asap, other times there isn't. I need to learn how to tell the difference.)
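
(A rough break-even sketch for the editing tradeoff in the first bullet above, using only the figures quoted there: 1 minute of editing, 60 readers, 1 second saved per reader. These are that bullet's illustrative numbers, not measurements of anything.)

  time spent editing  <=  (number of readers) x (time saved per reader)
  60 seconds          <=  60 readers x 1 second  =  60 seconds

(So those particular numbers are exactly break-even; every additional second saved per reader, or every reader beyond the 60th, tips the balance further in favor of editing.)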

But these are just excuses. If I'm going to continue posting comments, then I had better learn how to improve the quality of my comments.

The stream-of-consciousness style comments were something I wanted feedback on, and now I got the feedback, thanks. The feedback says that stream-of-consciousness-style comments are not acceptable. I'll try to stop doing that.

And that means that in addition to the issues I'm already scanning for, I'll also scan for... the specific reasons why stream-of-consciousness-style writing is annoying to read:

  • I need to present the points in the order that would make the most sense to the reader, not just in whatever order I happen to think of them in.

  • I need to erase points that I discover make no sense, rather than leaving them in just because it feels like there may be some reason to document the mistake.

  • I need to cut out off-topic side-comments entirely.

  • I need to stop using phrases like "oh, by the way".

  • I need to cut out any meta-comments from inside my comments, unless for some reason they really are necessary.

  • I especially need to cut out any comments about things like "my brain's excuse-generator". I need to remove the offending text, rather than explaining what caused me to write it. Unless it happens to be specifically on-topic, like in this comment.

  • probably some more things I haven't thought of.

But so far that just answers what to do about the stream-of-consciousness-style writing. It doesn't answer what to do about the excessive length of the comments. This comment is also really long, but I'm posting it anyway, because it feels necessary.

Actually, I should ask what everyone else does. Or maybe I should ask just what you in particular do, Thom. Though this is already far off the original post's topic...

Comment author: NancyLebovitz 27 April 2010 04:17:41PM *  2 points

The "excuse generator" points at something I suspect is a very fast and active part of a lot of people's minds, but it's probably worth a post or at least an extended open thread comment of its own.

As far as I can tell, I write so as to make things clear to the state of mind I was in just before I thought of something I'm trying to get across.

Comment author: PeerInfinity 27 April 2010 04:32:57PM 0 points

Thanks for the feedback. That last sentence sounds like a good idea; I'll go ahead and try it.

There have probably already been lots of posts about the "excuse generator", though not specifically by that name. For example, Eliezer's post Against Devil's Advocacy, though that's not quite the same thing.

And then there's all the posts on rationalization.

Comment author: wallowinmaya 12 May 2011 11:45:20PM 1 point

  "The stream-of-consciousness style comments were something I wanted feedback on, and now I got the feedback, thanks."

This is probably too late, but I really love your writing style, especially your stream of consciousness.