Eliezer_Yudkowsky comments on SotW: Be Specific - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I rewire your preferences. Oh, that wasn't what you meant by "utility"?
Incidentally, I find it funny (although not necessarily significant) that everyone else's instinct was to talk about the Genie in the third person, whereas Eliezer used the first person.
(Double-posted because it's a completely separate and much more frivolous comment.)
It is extremely significant. That's partly the reason why EY managed to play the AI-in-a-box game rather successfully despite the overwhelming odds.
Er... how do you know? I thought he hadn't disclosed anything about how he did it.
He mentioned on some mailing list that he had to think like an AI desperately trying to get out. It makes a world of difference in how you approach the situation if it is your life that is actually on the line.
The role of "the Genie" here and "the AI" in the Boxed AI game have certain obvious similarities.
It seems reasonable to assume that a willingness to adopt the former correlates with a willingness to adopt the latter.
It seems reasonable to assume that a willingness to adopt the role of "the AI" in the Boxed AI game is necessary (though not sufficient) in order to win that game.
So shminux's claim seems fairly uncontroversial to me.
Do you dispute it, or are you merely making a claim about the impossibility of knowledge?
The latter.
You could try: "Constraint: The value that my current utility function would assign to the universe after this wish is implemented must be higher than the value my current utility function would assign to the universe that would have existed had you not implemented this wish."
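The proposed constraint can be sketched as code. This is a toy sketch only: `wish_allowed`, `utility`, and the dictionary "universes" are hypothetical names invented for illustration, and the whole thing assumes the wisher actually has a well-defined utility function.

```python
def wish_allowed(utility, world_after, world_counterfactual):
    """Check the proposed constraint: the wisher's *current* utility
    function must score the post-wish universe strictly higher than
    the universe that would have existed had the Genie done nothing."""
    return utility(world_after) > utility(world_counterfactual)

# Toy illustration with a one-dimensional "universe":
# the wisher dislikes paperclips, so utility falls as paperclips rise.
dislikes_paperclips = lambda world: -world["paperclips"]

print(wish_allowed(dislikes_paperclips,
                   {"paperclips": 10**6},   # the Genie's proposed outcome
                   {"paperclips": 0}))      # the status-quo counterfactual
# → False: flooding the universe with paperclips fails the constraint
```

Of course, the sketch quietly assumes `utility` is total and well-defined over universes, which is exactly the loophole the replies below poke at.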
... Which probably causes the Genie to, at best, throw an undefined error, since human beings don't have well-defined utility functions. Since it's malicious, it will probably search through all of your desires, pick one of them at random to count as "my utility function", and then reinterpret the body of the wish to maximise that one thing at the expense of all the others.
It's malicious and omnipotent. It'll do far worse than that. It'll scan your preferences until it finds a contradiction. Once you have a contradiction you can derive absolutely anything. It would then proceed to calculate your Coherent Extrapolated Volition and minimise it. It may not be obliged to figure out what you actually want but it can certainly do so for the purpose of being spiteful!
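The "once you have a contradiction you can derive absolutely anything" step is the standard principle of explosion (ex falso quodlibet). A minimal formal witness, here rendered in Lean (the proposition names `P` and `Q` are of course just placeholders for any pair of contradictory preferences and any arbitrary conclusion):

```lean
-- Ex falso quodlibet: from a contradictory pair of premises,
-- any proposition Q whatsoever follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```

So if your revealed preferences contain even one genuine inconsistency, a sufficiently literal-minded Genie can "validly" justify any interpretation it likes.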
I think that is the first time I've ever seen anyone accurately describe the worst thing that could possibly happen.
Or alter my preferences so I antiprefer whatever it is able to produce the most of. Plus altering my brain such that my disutility and dishedonism are linear with that thing. Getting the attention of a Crapsack God sucks.
Those are actually subsumed under "minimise CEV&lt;TheOtherDave&gt;". In the same way that maximising our CEV will not involve modifying our preferences drastically (unless it turns out we are into that sort of thing after all), minimising CEV would, if that turns out to be the worst way to @#$@ with us.
Can't argue with that.
What happens if you ask it to maximize your CEV, though?
Lemme remember: the idea with CEV was what you'd desire if you thought faster and more reliably. Okay, I ponder what would happen to you if your mind were BusyBeaver(10) times faster (a way scarier number than 3^^^^3), without your body working any faster. One second passes.
It'll fuck with you. Because that is what it does. It has plenty of scope to do so because CEV is not fully defined as of now. I'm not sure precisely how it would go about doing so. I just assume it does in some way I haven't thought of yet.
The meaning it attributes to CEV when it wants to exploit it to make things terrible is very different to the meaning it attributes to CEV when we try to use it to force it to understand us. It's almost as bad as some humans in that regard!
The understatement of the year. CEV is the vaguest crap ever, with the lowest hope of becoming less vague.
That's a rather significant claim.
It's very uncommon to see crap this vague in development for such a long time by such a clever person, without it becoming less vague.
As far as I am aware this crap isn't in development. It isn't the highest research priority, so the other SingInst researchers haven't been working on it much, and Eliezer himself is mostly focused on writing a rationality book. Other things like decision theory are being worked on, which has involved replacing the vague-as-crap TDT with the less-vague UDT and UDT2.
I would like to see more work published on CEV. The most recent I am familiar with is this.
As I've figured out while writing the last few posts, TDT hasn't been explained well, but it is a genuinely formalizable theory. (You'll have to trust me until Part III or check the decision-theory mailing list.) But it's a different theory from ADT and UDT, and the latter ones are preferable.