MugaSofer comments on Failed Utopia #4-2 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (247)
That doesn't follow. There just isn't any reason that the former implies the latter. Either kind of caring is possible but they are not the same thing (and the second is likely more complex than the first).
This much is true. (Or at least it must be something that follows rules.)
This isn't required. It needs no oppositional interests/incentives at all beyond, once given a request, the desire to honour it. This isn't a genie trying to thwart someone in order to achieve some other goal, or subverting the intent of the request for some other purpose. It is a genie caring only about the request, and some jackass asking for something they don't want. (Rather than 'oppositional' it could be called 'obedient', where obedience turns out not to be what was desired.)
Presumably it got its wish-granting motives from whoever created it or otherwise constructed the notion of the wish-granter genie.
Actually, I think Will has a point here.
"Wishes" are just collections of coded sounds intended to help people deduce our desires. Many people (not necessarily you, IDK) seem to model the genie as attempting to attack us while maintaining plausible deniability that it simply misinterpreted our instructions, which, naturally, does occasionally happen because there's only so much information in words and we're only so smart.
In other words, it isn't trying to understand what we mean; it's trying to hurt us without dropping the pretense of trying to understand what we mean. And that's pretty anthropomorphic, isn't it?
Yes, that's the essence of it. People do it all the time. Pseudoscientific scammers of all sorts try to maintain an image of honest self-deception; in medical scams in particular, the crime is so heinous and utterly amoral (killing people for cash) that pretty much everyone involved goes well out of their way to be able to plead ignorance, self-deception, misinterpretation, carelessness, or enthusiasm. But why would some superhuman AI need plausible deniability?
If your genie is using your vocal emissions as information toward the deduction of your extrapolated volition, then I'd say your situation is good.
Your problems start if it works more by attempting to extract a predicate from your sentence, matching vocal signals against known syntax and dictionaries, and then outputting an action that maximises the probability of that predicate being true with respect to reality.
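The contrast between the two designs can be sketched as a toy example. Everything here is hypothetical (the action names, the hand-picked probabilities, and the classic "get grandma out of the burning building" wish); the point is only that a literal predicate-maximiser and a desire-inferring genie can rank the same actions oppositely.

```python
def literal_genie(predicate_prob, actions):
    # Pick the action that maximises P(parsed predicate is true),
    # with no regard for anything else the wisher cares about.
    return max(actions, key=predicate_prob)

def volition_genie(desire_score, actions):
    # Score actions against the wisher's inferred desires instead;
    # the wish is treated as evidence about those desires, not as
    # a predicate to be satisfied at any cost.
    return max(actions, key=desire_score)

actions = ["carry her out the door", "hurl her out the window"]

# P("grandma ends up outside the building" | action) -- the literal
# predicate. Both actions nearly satisfy it; the window is "surer".
p_outside = {"carry her out the door": 0.95,
             "hurl her out the window": 0.99}

# Inferred desire: grandma outside the building AND unharmed.
desire = {"carry her out the door": 0.94,
          "hurl her out the window": 0.10}

print(literal_genie(p_outside.get, actions))   # hurl her out the window
print(volition_genie(desire.get, actions))     # carry her out the door
```

The numbers are made up, but they show why a purely syntactic wish-granter is dangerous even with no malice anywhere in its goal system: the literal predicate is silent about everything the wisher forgot to say.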
To put it simply, I think that "understanding what we mean" is really a complicated notion that involves knowing what constitutes true desires (as opposed to, say, akrasia), and of course having a goal system that actually attempts to realize those desires.