TimS comments on Why Politics are Important to Less Wrong... - Less Wrong Discussion
I'm not sure this is true in any useful sense. Louis XIV probably agrees with me that "I don't want to be wire-headed, drugged into a stupor, victim of a nuclear winter, or see Earth turned into paperclips."
But I think it is pretty clear that the Sun King was not implementing my moral preferences, and I am not implementing his. Either one of us is not "weak friendly," or "weak friendly" is barely powerful enough to answer really easy moral questions like "should I commit mass murder for no reason at all?" (Hint: no).
If weak friendly morality is really that weak, then I have no confidence that a weak-FAI would be able to make a strong-FAI, or even would want to. In other words, I suspect that what most people mean by weak friendly is highly generalized applause lights that widely diverging values could agree with without any actual agreement on which actions are more moral.
I think a lower bound on weak friendliness is whether or not entities living within the society consider their lives worthwhile. Of course this opens up debate about house elves and such but it's a useful starting point.
That (along with this semi-recent exchange) reminds me of a stupid idea I had for a group decision process a while back.
Basically, formalized war, only done in the opposite way of the strawman version in A Taste of Armageddon; making actual killing more difficult rather than easier.
A few reasons it's stupid:
Actually, I think I'm now remembering a better (or better-sounding) idea that occurred to me later: rather than something as extreme as deletion, let people "vote" by agreeing to be deinstantiated, giving up the resources that would have been spent instantiating them. It might be essentially the same as death if they stayed that way until the end of the universe, but it wouldn't be as ugly. Maybe they could be periodically awakened if someone wants to try to persuade them to change or withdraw their vote.
That would hopefully keep people from voting selfishly or without thorough consideration. On the other hand, it might insulate them from the consequences of poor policies.
Also, how to count votes is still a problem; where would "the resources that would have been spent instantiating them" come from? Is this a socialist world where everyone is entitled to a certain income, and if so, what happens when population outstrips resources? Or, in a laissez-faire world where people can run out of money and be deinstantiated, the idea amounts to plain old selling of votes to the rich<strike>, like we have now</strike>.
Basically, both my ideas seem to require a eutopia already in place, or at least a genuine 100% monopoly on force. I think that might be my point. Or maybe it's that a simple-sounding, socially acceptable idea like "If someone would rather die than tolerate the status quo, that's bad, and the status quo should be changed" isn't socially acceptable once you actually go into details and/or strip away the human assumptions.
Can this be set up in a round robin fashion with sets of mutually exclusive values such that everyone who is willing to kill for their values kills each other?
Maybe if the winning side's values mandated their own deaths. But then it would be pointless for the sysop to respond to their threat of suicide to begin with, so I don't know. I'm not sure if there's something you're getting at that I'm not seeing.
"I'm not going to live there. There's no place for me there... any more than there is for you. Malcolm... I'm a monster. What I do is evil. I have no illusions about it, but it must be done."
I'm thinking if you do the matchups correctly you only wind up with one such person at the end, whom all the others secretly precommit to killing.
...maybe this shouldn't be discussed publicly.
I don't think the system works in the first place without a monopoly on lethal force. You could work within the system by "voting" for his death, but then his friends (if any) get a chance to join in the vote, and their friends, until you pretty much have a new war going. (That's another flaw in the system I could have mentioned.)
I think the vast majority of the population would agree that genocide and mass murder are bad, same as wireheading and turning the earth into paperclips. A single exception isn't terribly noteworthy - I'm sure there's at least a few pro-wireheading people out there, and I'm sure at least a few people have gotten enraged enough at humanity to think paperclips would be a better use of the space.
If you have a reason to suspect that "mass murder" is a common preference, that's another matter.
Mass murder is an easy question.
Is the Sun King (who doesn't particularly desire pointless mass murder) more moral than I am? Much harder, and your articulation of "weak Friendliness" seems incapable of even trying to answer. And that doesn't even get into actual moral problems society actually faces every day (e.g. what is the most moral taxation scheme?).
If weak-FAI can't solve those types of problems, or even suggest useful directions to look, why should we believe it is a step on the path to strong-FAI?
That's my point. I'm not sure where the confusion is, here. Why would you call it useless to prevent wireheading, UFAI, and nuclear winter, just because it can't also do your taxes?
If it's easier to solve the big problems first, wouldn't we want to do that? And then afterwards we can take our sweet time figuring out abortion and gay marriage and tax codes, because a failure there doesn't end the species.
For reasons related to Hidden Complexity of Wishes, I don't think weak-FAI actually is likely to prevent "wireheading, UFAI, and nuclear winter." At best, it prohibits the most obvious implementations of those problems. And it is terribly unlikely to be helpful in creating strong-FAI.
And your original claim was that common human preferences already implement weak-FAI preferences. I think that the more likely reason why we haven't had the disasters you reference is that for most of human history, we lacked the capacity to cause those problems. As actual society shows, the hidden complexity of wishes makes implementing social consensus hopeless, much less whatever smaller set of preferences constitutes weak-FAI preferences.
My basic point was that we shouldn't worry about politics, at least not yet, because politics is a wonderful example of all the hard questions in CEV, and we haven't even worked out the easy questions like how to prevent nuclear winter. My second point was that humans do seem to have a much clearer CEV when it comes to "prevent nuclear winter", even if it's still not unanimous.
Implicit in that should have been the idea that CEV is still ridiculously difficult. Just like intelligence, it's something humans seem to have and use despite being unable to program for it.
So, then, summarized, I'm saying that we should perhaps work out the easy problems first, before we go throwing ourselves against harder problems like politics.
There's not a clear dividing line between easy moral questions and hard ones. The Cold War, which massively increased the risk of nuclear winter, was a rational expression of Great Power relations between the two sides.
Until we have mutually acceptable ways of resolving disputes when both parties are rationally protecting their interests, we can't actually solve the easy problems either.
from you:
and from me:
So, um, we agree, huzzah? :)
Sure, genocide is bad. That's why the Greens — who are corrupting our precious Blue bodily fluids to exterminate pure-blooded Blues, and stealing Blue jobs so that Blues will die in poverty — must all be killed!