
Gram_Stone comments on Rationality Reading Group: Part V: Value Theory

Post author: Gram_Stone 10 March 2016 01:11AM


Comment author: gjm 17 March 2016 03:26:53PM 1 point

not a good heuristic

OK, so I agree that that's part of what Eliezer is saying under "Say not 'complexity'". But let's be a bit more precise about it. He makes (at least) two separate claims.

The first is that "complexity should never be a goal in itself". I strongly agree with that, and I bet Gram_Stone does too and isn't proposing to chase after complexity for its own sake.

[EDITED to add: Oops, as SquirrelInHell points out later I actually mean not Gram_Stone but whatever other people Gram_Stone had in mind who hold that theories of ethics should not be very simple. Sorry, Gram_Stone!]

The second is that "saying 'complexity' doesn't concentrate your probability mass". This I think is almost right, but that "almost" is important sometimes. Eliezer's point is that there are vastly many "complex" things, which have nothing much in common besides not being very simple, so that "let's do something complex" doesn't give you any guidance to speak of. All of that is true. But suppose you're trying to solve a problem whose solution you have good reason to think is complex, and suppose that for whatever reason you (or others) have a strong temptation to look for solutions that you're pretty sure are simpler than the simplest actual solution. Then saying "no, that won't do; the solution will not be that simple" does concentrate your probability mass and does guide you -- by steering you away from something specific that won't work and that you'd otherwise have been inclined to try.
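(Concretely: here is a toy illustration of that last point, with made-up hypotheses and a made-up complexity cutoff, nothing from the post itself. Saying "be complex" picks out nothing, but ruling out the too-simple candidates and renormalizing really does concentrate the remaining probability mass.)

    # Toy hypothesis space: description length in bits, with a simplicity-weighted
    # prior (probability proportional to 2^-length). All numbers are invented.
    complexities = {"H1": 2, "H2": 3, "H3": 10, "H4": 12, "H5": 15}
    prior = {h: 2.0 ** -c for h, c in complexities.items()}
    total = sum(prior.values())
    prior = {h: p / total for h, p in prior.items()}

    # "The solution will not be that simple": rule out everything under 5 bits
    # and renormalize whatever survives.
    surviving = {h: prior[h] for h, c in complexities.items() if c > 5}
    z = sum(surviving.values())
    posterior = {h: p / z for h, p in surviving.items()}

    print("prior:    ", {h: round(p, 4) for h, p in prior.items()})
    print("posterior:", {h: round(p, 4) for h, p in posterior.items()})
    # The same total mass now sits on three candidates instead of five, so the
    # constraint genuinely narrows the search, even though "be complex" on its
    # own singles out nothing in particular.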

Again, this depends on your being right when you say "no, the solution will not be that simple". That's often not something you can have any confidence in. But if what you're trying to do is to model something formed by millions of years of arbitrary contingencies in a complicated environment -- like, e.g., human values -- I think you can be quite confident that no really simple model is very accurate. More so if lots of clever people have looked for simple answers and not found anything good enough.

Here's another of Eliezer's posts that maybe comes closer to agreeing explicitly with Gram_Stone: Value is Fragile. Central thesis: "Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth." Note that if our values could be adequately captured by a genuinely simple model, this would be false.

(I am citing things Eliezer has written not because there's anything wrong with disagreeing with Eliezer, but because your application here of what he wrote in "Say not 'complexity'" seems to lead to conclusions at variance with other things he's written, which suggests that you might be misapplying it.)

Comment author: Gram_Stone 19 March 2016 11:29:31PM 0 points

Sorry, Gram_Stone!

Heh, it's okay. I had no idea that the common ancestor comment had generated so much discussion.

Also, I agree: the complex approach isn't obviously wrong to me either, and until something comes along that makes it seem obviously wrong, we might as well let the two research paths thrive.