Rain comments on Shut Up and Divide? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Why do we have to decide between them? Long before I ever heard of "Shut Up and Multiply," I used a test that produced the same results, but worked equally well for "Shut Up and Divide." My general statement was, "Be consistent." I would put things in the appropriate context and apply similar value functions regardless of size or scope - or, perhaps better phrased, I made sure my consistently applied value function explicitly accounted for size and scope.
From where should we derive our values? Well, we've got the option of using what's already there (the value function implemented in the human brain), or the option of appealing to something else, or we can apply our reason and alter the function as needed. It seems to me that we don't really have access to that "something else," so I doubt we have a choice on this part. Our natural empathic hardwiring will shoot off all kinds of flares when we see suffering up close and personal, and will fail to activate when it should on the larger scale. We can still place arbitrary hacks into the value function to try to correct the scope insensitivity. The function was arbitrary in the first place, so there's no conflict other than ease of application.
How much of our values comes from hardwiring as opposed to reasoned thought? Probably however much we haven't put thought into. For most people, I expect this to be a large portion. However, once we've thought about it, and applied our function to our functions, we can label them good or bad, and work at adding more arbitrary hacks to the arbitrary, evolution-designed, hardwired values. I see it this way: one item on the list of human morality is "this list may change or update as needed," or, "this function is subject to revision based upon its output when run against itself." Again, the ease of doing this is the more interesting debate, in my opinion.
If by "essential" you mean, "someone without it would not be human," then I grant that it's possible. But if you mean, "we can't change it," then I would disagree. We can change our values, now and certainly in the future as we begin rewiring things on a more fundamental level. I see it as another question of definitions: if we change ourselves "for the better," are we "extincting the human race," or "continuing as human and more"? It seems that practical reality won't care either way.