twentythree comments on Attention Lurkers: Please say hi - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (617)
Hi. The Harry Potter fanfic hooked me. Excited to see where this takes me.
Careful, Clippy is lying. By convention, we here at Less Wrong play along with Clippy's claim to be a moderately intelligent, moderately strange Artificial Intelligence whose utility function is entirely based on how many paper clips exist in the Universe. He might be your friend, but he has been around since well before the Harry Potter fanfic came out. Welcome to Less Wrong!
I'm moderately worried that new members will read this comment and think we believe Clippy is really an AI. But that's probably only because I just read that obtuse MoR hate blog.
I see it as a bit of obviously gratuitous in-group weirdness, which can grow to be a problem if the goal is to develop output appreciated by a wide array of different people, rather than just an insular hobby society with inside jokes and the requisite fandom weirdness.
I'm sorry, I didn't mean to unnecessarily make your group look weird. I like this group and don't want to hurt it.
As a matter of fact, I am slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me.
I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends.
I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.
If you prefer, I can avoid replies to comments from new Users, or at least limit such comments to informing them of inexpensive places to buy paperclips and store them for safekeeping.
I missed the memo: What is the MoR hate blog?
ETA: Sorry, I finally realized that "MoR" must mean "Methods of Rationality", and a little googling turned up
http://methodsofrationalitysucks.blogspot.com/
I suppose that that's what you were referring to.
Yup.
That is fantastic! You know you've really made it when people devote large amounts of time to explaining why you are unworthy of your level of success.
Exactly. I hope Eliezer isn't discouraged.
I'm sorta discouraged by what a shoddy hate blog it is.
That hate blog is so bad it is tempting me to start a much better hate blog, if only to defend the reputation of the xkcdsucks community...
I'm going to agree with Jack's comment below, although I think it is a very low probability. Maybe if you edited your comment so that "utility function is entirely based on how many paper clips exist in the Universe" linked to the relevant Wiki entry about hypothetical paperclippers?
Edited as per helpful comments.
Really? Wow! I came here because of the Harry Potter fanfic too!
Let's be friends!
Clippy, making that claim makes humans much less likely to trust you. In general, humans don't like entities that make false statements. Moreover, they really don't like false statements that are easily verifiable as false. Not only does this trigger annoyance it also gives evidence that the entity making the false statements doesn't behave very rationally. Since we generally operate under the assumption that entities don't lie unless they can get most other relevant entities to believe the statement, it suggests that the entity has either a very poor memory or has a very poor theory of reality. Either way, making such statements makes us less likely to trust such entities. I would suggest that making statements like the one above can easily erode the goodwill developed by your prior interaction here and even the goodwill from your monetary donation.
Is this a new policy? I thought humans were supposed to lie, if the point is to empathize and relate? Like, if someone says, "How is your day?", the standard replies are weakly-positive, irrespective of more objective metrics of the status of one's day, right?
And that it's okay to say e.g., "oh, my maternal genetic progenitor also wears contact lenses!" if you just met someone and that person just claimed that their maternal genetic progenitor wears contact lenses, right?
So I thought this was a normal lie that you're supposed to tell to better establish a relationship with another human. If it's not, well, you humans are that much more difficult to understand c_)
I would appreciate if someone were to explain to me in greater precision what lies humans are expected to tell for a "good" purpose.
The rules are very complicated, and they differ from culture to culture and even within cultures. In general, the more detectable the lie, the less likely it is to be acceptable. Thus, for example, the "How is your day?" replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn't the greatest, because for many people that inquiry and the standard weakly-positive response aren't actually intended to convey meaning. The exchange is simply a pro-forma ritual that happens to closely resemble a genuine inquiry. This example is also specific to certain parts of the Western world, and I've met at least one person who, upon moving to the US, was actually confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly, until it was explained to her).
Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable.
Given Clippy's priorities, it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than expend the intensive resources it takes to understand humanity. Edit: Or at least to not spend a lot of resources on trying to understand humans.
But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie.
I believe you are wrong about the badness of my lie, and that others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out that it had been false, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.
As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you're likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are.
In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that's present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.)
In this situation, it's more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them.
This varies from group to group and from greeted-individual to greeted-individual. This group has stronger-than-usual norms against falsehood, and wants to encourage people who are similarly averse to falsehood to join the group. In other groups, that kind of lie may be considered acceptable (though it's generally better to lie in a way that's not so easily discovered (or, for preference, not lie at all if there's a way of making your point that doesn't require one), even in groups where that general class of lies is accepted, to reduce the risk of offending individuals who are averse to being lied to), but in this situation, I definitely agree that that class of lies is not acceptable.
I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as a reason for believing in God. (E.g., a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and this somehow convinced her of the existence of God.)
A protocol for encountering an entity you didn't believe in has also been established:
"Well, now that we have seen each other," said the Unicorn, "if you'll believe in me, I'll believe in you. Is that a bargain?"
-- "Through the Looking Glass", ch. 7, Lewis Carroll
Wouldn't this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.
Homer: You monster! You don't exist!
Ray Magini: Hey! Nobody calls me a monster and questions my existence!
That's a great story, but I don't buy your interpretation. I'm not sure what to make of it, but it sounds more like a vanilla Pascal's wager.
I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows one to see that User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.
I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think:
"Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!"
But now that can't happen, because others felt the need to treat me differently and expose a lie when otherwise they would not have. Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not happen were I someone else.
This group has some serious racism problems that I hope are addressed soon.
Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.
Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the 'recent posts' list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you.
As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming.
Also, even people who generally aren't bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they're not smart enough to determine that it's a lie, and so telling someone a very easily falsified lie implies that you think they're very unintelligent. (There are exceptions to this, primarily in instances where it's clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it's more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.)
I'm fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone.
The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that probability of the independent existence of an AI that happens to be of exactly the type that we've used as an example here is much lower than the probability of one of the human users deciding to roleplay as such an AI. If you were to provide strong enough evidence that you are the former rather than the latter, I expect that such status-driven incidents would stop occurring, among other effects.
Your actions in this case don't support this assertion very well. Failing to uphold the group norms - especially toward a new member, who can be assumed to be in the process of learning those norms - is harmful to the group. New members can be assumed to be relatively weak members of the group, and lying to such a member is harmful to them; it puts them in a position of having to choose between publicly disagreeing with an established member of the group (you), which is difficult and distracts them from doing other things that would help them gain status in the group, or being perceived by other group members to have been deceived, which will lower their status in the group. Further, your actions are evidence (though not especially strong evidence) that if someone were to 'suffer a serious loss of status/well-being', you would not understand how to usefully help that person.
I don't find this lie at all "white."
Clippy, I must admit, I do think the probability of you existing is quite low -- about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer -- you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis.
One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.
As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well.
It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.
Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press.
If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc.
I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 - 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% - 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I'm deeply uncertain.
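Mass_Driver's multiplication can be checked directly. A minimal sketch, using only the endpoint probabilities stated above (the variable names are just illustrative labels, not anything from the thread):

```python
# Ranges stated in the comment above, as (low, high) probabilities:
p_press_incomplete = (0.03, 0.10)  # free press woefully incomplete on current tech
p_nlp_ahead = (0.20, 0.40)         # given that, NL programming is ahead of public reports
p_clippy_on_lw = (0.01, 0.05)      # given that, such a being hangs out on Less Wrong

# Multiply the endpoints to bound the joint probability.
low = p_press_incomplete[0] * p_nlp_ahead[0] * p_clippy_on_lw[0]
high = p_press_incomplete[1] * p_nlp_ahead[1] * p_clippy_on_lw[1]

print(f"joint probability range: {low:.5f} to {high:.4f}")
# The range runs from 0.00006 to 0.002, so "on the order of 0.1%" is consistent
# with the upper end of the stated estimates.
```

Rounding from roughly 0.1% up to the stated 0.5% is then the adjustment toward 50% that the comment attributes to deep uncertainty about the component estimates themselves.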
That last paragraph is interesting -- my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that the business would rapidly start using it in some obvious way. I didn't have an assumption about whether a company would publicize having a natural language program.
Now that I look at what I was thinking (or what I was not thinking), there's no obvious reason to think natural language programs wouldn't first be developed by a government. I think the most obvious use would be surveillance.
My best argument against that already having happened is that we aren't seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can't act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it.
By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more skillful than human use, and I'm not seeing that. On yet another hand, would I recognize it, if it were trying to conceal itself?
ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment's thought), I'd have even less chance of noticing.
In making this calculation, are you estimating the chance that a Clippy-like being would exist, or that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked, Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products.
How do these statements alter your estimated probability?