Normal_Anomaly comments on Open thread, October 2011 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (308)
People are bothered by some words and phrases.
Recently, I learned that the original meaning of "tl;dr" has stuck in people's minds such that they don't consider it a polite response. That's good to know.
Some things that bother me are:
I'm not going to pretend that referring to women as "girls" inherently bothers me, but it bothers other people, so by extension it bothers me, and I wouldn't want it excluded from this discussion.
Some say not to say "complexity" or "emergence".
"Friendly" as I've seen it used on here means "an AI that creates a world we won't regret having created," or something like that. It might be good to link to an explanation every time the term is used for the benefit of new readers, but I don't think it's necessary. "Unfriendly" means "any AI that doesn't meet the definition of Friendly," or "an AI that we would regret creating (usually because it destroys the world)." I think these are good, consistent ways of using the term.
Most possible AIs have no particular desire either to kill humans or to avoid doing so. They are generally called "Unfriendly" because creating one would be A Bad Thing. Many possible AIs that want to avoid killing humans are also Unfriendly because they have no problem doing other things we don't want. The important thing, when classifying potential AIs, is whether it would be a very good idea or a very bad idea to create one. That's what the Friendly/Unfriendly distinction should mean.
I've found that saying "I don't think I understand what you mean by that" or "I don't see why you're saying so" is a useful tactic when somebody says something apparently nonsensical. The other person usually clarifies their position without being much offended, and one of two things happens: either they were saying something true which I misinterpreted, or they really did mean something I disagree with, at which point I can say so.
I think this is a good idea, because humans aren't expected utility maximizers. We have different desires at different times, we don't always want what we like, etc. An individual's CEV would be the coherent combination of all that person's inconsistent drives: what that person is like at reflective equilibrium.
These ones bother me too, and I support not doing them.
Yes, when you actually don't understand, saying that you don't understand is rarely a bad idea. It's when you understand but disagree that proclaiming an inability to comprehend the other's viewpoint is ill-advised.
I could be wrong, but this may be a terminology issue.
It would indeed appear that EY originally defined coherence that way. I think it's legitimate to extend the meaning of the term to "strong agreement among the different utility functions an individual maximizes in different situations." You don't necessarily agree, and that's fine, because this is partly a subjective issue. What, if anything, would you suggest instead of "CEV" to refer to a person's utility function at reflective equilibrium? Just "eir EV" could work, and I think I've seen that around here before.
Me too. I consider the difference in coherency issues between CEV(humanity) and CEV(pedanterrific) to be one of degree, not kind. I just thought that might be what lessdazed was objecting to, that's all.
Okay, so I'm not the only one. Lessdazed: is that your objection to "individual CEV" or were you talking about something else?