The_Jaded_One comments on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? - Less Wrong

Post author: InquilineKea 26 March 2016 03:41AM


Comments (61)


Comment author: The_Jaded_One 26 March 2016 04:49:53AM 4 points [-]

I thought a bit about it, but I think Tay is basically a software version of a parrot that repeats back what it hears - I don't think it has any commonsense knowledge or makes any serious attempt to understand that tweets are about a world that exists outside of Twitter. I.e. it has no semantics; it's just a syntax manipulator that uses some kind of probabilistic language model to generate grammatically correct sentences, plus a machine learning model that tries to learn which kinds of sentences will get the most retweets or will most closely resemble other things people are tweeting about. Tay doesn't know what a "Nazi" actually is. I haven't looked into it in any detail, but I know enough to guess that that's how it works.
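The "parrot" behavior described above can be sketched as a toy bigram (Markov-chain) text generator. This is purely illustrative - it is not Tay's actual architecture, which Microsoft never published - but it shows how a model can emit grammatical-looking text from word co-occurrence statistics alone, with no semantics whatsoever:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record which word follows which; pure co-occurrence, no meaning."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, max_words=10, rng=None):
    """Chain words together by sampling the learned transitions."""
    rng = rng or random.Random(0)
    out = [start]
    while len(out) < max_words and out[-1] in model:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

A model like this will happily reproduce whatever patterns dominate its training data - which is exactly what happened when Twitter users fed Tay coordinated inflammatory input.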

As such, the failure of Tay doesn't particularly tell us much about Friendliness, because friendliness research pertains to superintelligent AIs which would definitely have a correct ontology/semantics and understand the world.

However, it does tell us that a sufficiently stupid, amateurish attempt to harvest human values using an infrahuman intelligence wouldn't reliably work. This is obvious to anyone who has been "in the trade" for a while, however it does seem to surprise the mainstream media.

It's probably useful as a rude slap-in-the-face to people who are so ignorant of how software and machine learning work that they think friendliness is a non-issue.

Comment deleted 26 March 2016 07:52:33PM [-]
Comment author: The_Jaded_One 27 March 2016 08:12:34PM 0 points [-]

Yes, you are correct. And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa), then I would still say that it doesn't really give us much new information about unfriendliness in superintelligent AGIs.

Comment deleted 28 March 2016 03:50:53AM [-]
Comment author: The_Jaded_One 28 March 2016 05:06:20PM 1 point [-]

Sure, but the point stands: failures of narrow AI systems aren't informative about likely failures of superintelligent AGIs.

Comment author: dlarge 29 March 2016 05:54:00PM 5 points [-]

They are informative, but not because narrow AI systems are comparable to superintelligent AGIs. It's because the developers, researchers, promoters, and funders of narrow AI systems are comparable to those of putative superintelligent AGIs. The details of Tay's technology aren't the most interesting thing here, but rather the group that manages it and the group(s) that will likely be involved in AGI development.

Comment author: The_Jaded_One 29 March 2016 08:28:14PM *  1 point [-]

That's a very good point.

Though one would hope that the level of effort put into AGI safety will be significantly more than what they put into twitter bot safety...

Comment author: dlarge 30 March 2016 05:42:11PM 1 point [-]

One would hope! Maybe the Tay episode can serve as a cautionary example, in that respect.

Comment author: Lumifer 30 March 2016 05:57:52PM 0 points [-]
Comment author: Lamp2 08 April 2016 02:08:15AM 0 points [-]

And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa)

The fact that this kind of mistake is considered more "unethical" than other types of mistakes tells us more about the quirks of the early-21st-century Americans doing the considering than about AI safety.

Comment author: Lamp2 08 April 2016 01:58:35AM 0 points [-]

I thought a bit about it, but I think Tay is basically a software version of a parrot that repeats back what it hears - I don't think it has any commonsense knowledge or serious attempt to understand that tweets are about a world that exists outside of twitter. I.e. it has no semantics

Well neither does image recognition software. Neither does Google's search algorithm.

Comment author: Lumifer 28 March 2016 01:04:10AM 0 points [-]

it does tell us that a sufficiently stupid, amateurish attempt to harvest human values using an infrahuman intelligence wouldn't reliably work.

You probably mean "reliably wouldn't work" :-)

However, I have to question whether the Tay project was an attempt to harvest human values. As you mentioned, Tay lacks understanding of what she hears or says, so whatever it "learned" about humanity by listening to Twitter could equally have been learned by straightforward statistical analysis of the corpus of text from Twitter.

Comment author: Rangi 30 March 2016 06:21:50AM 0 points [-]

Tay doesn't tell us much about deliberate Un-Friendliness. But Tay does tell us that a well-intentioned effort to make an innocent, harmless AI can go wrong for unexpected reasons. Even for reasons that, in hindsight, are obvious.

Are you sure that superintelligent AIs would have a "correct ontology/semantics"? They would have to have a useful one, in order to achieve their goals, but both philosophers and scientists have had incorrect conceptualizations that nevertheless matched the real world closely enough to be productive. And for an un-Friendly AI, "productive" translates to "using your atoms for its own purposes."

Comment author: The_Jaded_One 30 March 2016 07:31:55AM 0 points [-]

Are you sure that superintelligent AIs would have a "correct ontology/semantics"?

It's hard to imagine a superintelligent AGI that didn't know basic facts about the world like "trees have roots underground" or "most human beings sleep at night".

They would have to have a useful one, in order to achieve their goals

Useful models of reality (useful in the sense of achieving goals) tend to be ones that are accurate. This is especially true of a single agent that isn't subject to the weird foibles of human psychology and isn't mainly achieving things via signalling like many humans do.

The reason I made the point about having a correct understanding of the world, for example knowing what the term "Nazi" actually means, is that Tay has not achieved the status of being "unfriendly", because it doesn't actually have anything that could reasonably be called goals pertaining to the world. Tay is not even an unfriendly infra-intelligence. Though I'd be very interested if someone managed to make one.