Comment author: Lamp2 08 April 2016 01:59:06AM 0 points

It might help to take an outside view here:

Picture a hypothetical set of highly religious AI researchers who make an AI chatbot, only to find that the bot has learned to say blasphemous things. What lessons should they learn from the experience?

Original thread here.

Comment author: The_Jaded_One 26 March 2016 04:49:53AM 4 points

I thought a bit about it, but I think Tay is basically a software version of a parrot that repeats back what it hears. I don't think it has any commonsense knowledge or any serious attempt to understand that tweets are about a world that exists outside of Twitter. I.e., it has no semantics; it's just a syntax manipulator that uses some kind of probabilistic language model to generate grammatically correct sentences, plus a machine learning model that tries to learn which kinds of sentences will get the most retweets or will most closely resemble other things people are tweeting about. Tay doesn't know what a "Nazi" actually is. I haven't looked into it in any detail, but I know enough to guess that that's how it works.
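The "statistical parrot" mechanism described above can be illustrated with a toy Markov-chain text generator. This is a minimal sketch for intuition only — Tay's actual architecture was never published, and it was certainly more elaborate than this — but it shows how a system can emit grammatical-looking text purely from local word statistics, with no semantics at all:

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Map each n-gram of words to the list of words observed after it."""
    model = defaultdict(list)
    words = corpus.split()
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Emit text by repeatedly sampling a successor of the current n-gram.
    Note there is no understanding here: the generator only reproduces
    word-adjacency statistics from whatever text it was fed."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    out = list(state)
    for _ in range(length):
        successors = model.get(state)
        if not successors:  # dead end: this n-gram only occurred at the end
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)
```

A generator like this "learns to say" whatever its input corpus says: feed it offensive tweets and it will emit offensive tweets, which is the failure mode Tay exhibited.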

As such, the failure of Tay doesn't tell us much about Friendliness, because Friendliness research pertains to superintelligent AIs, which would definitely have a correct ontology/semantics and understand the world.

However, it does tell us that a sufficiently stupid, amateurish attempt to harvest human values using an infrahuman intelligence won't reliably work. This is obvious to anyone who has been "in the trade" for a while; however, it does seem to surprise the mainstream media.

It's probably useful as a rude slap-in-the-face to people who are so ignorant of how software and machine learning work that they think friendliness is a non-issue.

Comment author: Lamp2 08 April 2016 01:58:35AM 0 points

I thought a bit about it, but I think Tay is basically a software version of a parrot that repeats back what it hears. I don't think it has any commonsense knowledge or any serious attempt to understand that tweets are about a world that exists outside of Twitter. I.e., it has no semantics

Well, neither does image recognition software. Neither does Google's search algorithm.

Comment author: Lamp2 08 April 2016 01:57:04AM 0 points

Well, in the Hillary case, my reason for favoring the "bribe" explanation is that, presumably, the person who first made the accusation was more familiar with the specifics of the situation than I am.

In the Senators case, anti-insider-trading laws are written in such a way that they don't apply to Congressmen and their staff. So that makes that explanation more likely.

Comment author: Lamp2 07 April 2016 10:11:55AM 1 point

BTW, the Twitter account is here if you want to see the things the AI said for yourself.

Original thread here.

Comment author: ChristianKl 27 March 2016 04:38:35PM 1 point

Did they delete posts?

Comment author: Lamp2 07 April 2016 10:11:00AM 1 point

Probably; they said something about that in the Wired article. One can still get an idea of its level of intelligence.

Comment author: The_Jaded_One 27 March 2016 08:12:34PM 0 points

Yes, you are correct. And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as humans of African ethnicity, or vice versa), then I would still say that it doesn't really give us much new information about unfriendliness in superintelligent AGIs.

Comment author: Lamp2 07 April 2016 10:10:36AM 1 point

And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as humans of African ethnicity, or vice versa)

The fact that this kind of mistake is considered more "unethical" than other types of mistakes tells us more about the quirks of the early-21st-century Americans doing the considering than about AI safety.

Comment author: Lumifer 30 March 2016 12:38:01AM 3 points

There was originally an argument from Mainline Protestantism that somewhat bridged the gap.

Interesting. Do you think it was a sort of convergent evolution, or is there actually a traceable line of descent from this bit of Protestant theology to SJWs?

SJWs do commonly carry forward, as assumptions, ideas like: “true” desires aren’t going to be contradictory and therefore don’t need to be put in a hierarchy

Yes, but I read it as, basically, a refusal to consider the consequences. It's like making a list of what you want while paying no attention to the costs, the likelihood, or even whether the things you like are compatible with each other.

SJWs certainly do have utopian tendencies and utopias rarely tolerate close scrutiny.

the cultural value (inherited from Christianity) that we want to increase people’s happiness.

I think this value is deeper and more ancient than Christianity -- it's a consequence of being social animals. If I scratch your back, I make it more probable that you'll scratch mine when I need it. As long as the cost is not high, sure, whatever makes you happy.

But I don't think that's what SJWs are all about.

Comment author: Lamp2 07 April 2016 10:10:09AM 1 point

Do you think it was a sort of convergent evolution, or is there actually a traceable line of descent from this bit of Protestant theology to SJWs?

There is. It's most evident in the Unitarian Universalists, who gradually started focusing on that particular aspect of Mainline Protestantism to the exclusion of everything else, to the point where they wound up forgetting about Christ, and Christianity specifically, and basically made Social Justice their religion.

Of course when they first did this Social Justice wasn't quite as insane as it is today, largely because it was tempered by other aspects of their religion, which they failed to pass on to their children.

Other religious denominations took other paths. For example, during the mid-20th century a large number of nominal Catholics in the US were more accurately described as "Practicing Democrats". They would, among other things, hang pictures of St. Franklin and St. John (that's Kennedy) in place of the more traditional Catholic saints.
