wedrifid comments on Open thread, October 2011 - Less Wrong

Post author: MarkusRamikin, 02 October 2011 09:05AM

Comment author: lessdazed, 16 October 2011 07:09:16AM, 2 points

> want

> AIs that want to kill humans (i.e. most of them) are unfriendly.

Just as "want" does not unambiguously exclude instrumental values in English, "unfriendly" does not unambiguously include instrumental values in English. As for the composite technical term "Unfriendly Artificial Intelligence"...

If you write "Unfriendly Artificial Intelligence" alone, regardless of other context, you are technically correct. If you want to be correct again, type it again, in Wingdings if the mood strikes you; you will still be technically correct, though with even less chance of communicating. In the context of entire papers there is other supporting context, so it's not a problem. In the context of secondary discussions, either treat readers as liable to be confused or consider them confused.

We might disagree about the extent of confusion around here, we might disagree as to how important that is, we might disagree as to how much of that is caused by unclear forum discussions, and we might disagree about the cost of various solutions.

Regarding the first point, those confident enough to post their thoughts on the issue make mistakes. Regarding the fourth point, assume I'm not advocating an inane extreme solution such as requiring you to define words in every comment you make, but rather thoughtfulness.

Examples of cases where this is a problem include people going around saying "a friendly AI may torture <any set which includes a wedrifid or anyone he likes>", because that is by definition not friendly. Any other example of "what if a friendly AI did <something absurdly undesirable all things considered>" is likewise a misuse of the idea.

No torture? You're guessing as to what you want, what people want, what you value, what there is to know, etc. Guessing reasonably, but it's still just conjecture and not a necessary ingredient of the definition (as I gather it's usually used).

Or, you're using "friendly" in the colloquial rather than strictly technical sense, which is the opposite of what you did when criticizing how I said not to speak about unfriendly AI! My main point is that care should be taken to explain what is meant when navigating among differing conceptions, within and between the colloquial and technical senses.

Comment author: wedrifid, 16 October 2011 09:26:28AM, 1 point

> Or, you're using "friendly" in the colloquial rather than strictly technical sense

No, you're wrong about the dichotomy there. The words were used legitimately with respect to a subjectively objective concept. But never mind that.

Of all the terms in "Unfriendly Artificial Intelligence", I'd say 'unfriendly' is the most straightforward. I encourage folks to go ahead and use it, elaborating further on what specifically they are referring to as the context makes necessary.

Comment author: lessdazed, 16 October 2011 08:06:01PM, 2 points

> I encourage folks to go ahead and use it, elaborating further on what specifically they are referring to as the context makes necessary.

This implies I'm discouraging use of the term, which I'm not. I raised the issue to point out that, for this subject, specificity is often not supplied by context alone and needs to be made explicit.

What is confusing is when people describe a scenario in which it is central that an AI has human suffering as a positive terminal value, and they use "unfriendly" alone as a label to discuss it. The vast majority of possible minds are the ones most overlooked: the indifferent ones. If something applies to malicious minds but not to indifferent or benevolent ones, one can do better than describing the malicious minds as "either indifferent or malicious", i.e. "unfriendly".

I would also discourage calling blenders "non-apples" when specifically referring to machines that make apple sauce. Obviously, calling a blender a "non-apple" will never be wrong. There's nothing wrong with talking about non-apples in general, nor with distinguishing them from apples, nor with saying that a blender is an example of a non-apple, nor with saying that a blender is a special kind of non-apple that, unlike other non-apples, is an anti-apple.

But when someone describes a blender and just calls it a "non-apple", and someone else starts talking about how almost nothing is a non-apple because most things don't pulverize apples, and, every few times the subject is raised, someone assumes a "non-apple" is something that pulverizes apples, it's time for the first person to implement low-cost clarifications to his or her communication in certain contexts.

Comment author: wedrifid, 17 October 2011 04:01:03AM, 1 point

> What is confusing is when people describe a scenario in which it is central that an AI has human suffering as a positive terminal value, and they use "unfriendly" alone as a label to discuss it. The vast majority of possible minds are the ones most overlooked: the indifferent ones. If something applies to malicious minds but not to indifferent or benevolent ones, one can do better than describing the malicious minds as "either indifferent or malicious", i.e. "unfriendly".

I would use "malicious" in that context. A specific kind of uFAI requires a more specific word if you expect people to distinguish it from all other uFAIs.