The_Jaded_One comments on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? - Less Wrong Discussion

Post author: InquilineKea 26 March 2016 03:41AM

Comments (61)


Comment deleted 26 March 2016 07:52:33PM [-]
Comment author: The_Jaded_One 27 March 2016 08:12:34PM 0 points [-]

Yes, you are correct. And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa), then I would still say that it doesn't really give us much new information about unfriendliness in superintelligent AGIs.

Comment deleted 28 March 2016 03:50:53AM [-]
Comment author: The_Jaded_One 28 March 2016 05:06:20PM 1 point [-]

Sure, but the point stands: failures of narrow AI systems aren't informative about likely failures of superintelligent AGIs.

Comment author: dlarge 29 March 2016 05:54:00PM 5 points [-]

They are informative, but not because narrow AI systems are comparable to superintelligent AGIs. It's because the developers, researchers, promoters, and funders of narrow AI systems are comparable to those of putative superintelligent AGIs. The details of Tay's technology aren't the most interesting thing here, but rather the group that manages it and the group(s) that will likely be involved in AGI development.

Comment author: The_Jaded_One 29 March 2016 08:28:14PM *  1 point [-]

That's a very good point.

Though one would hope that the level of effort put into AGI safety will be significantly greater than what they put into Twitter bot safety...

Comment author: dlarge 30 March 2016 05:42:11PM 1 point [-]

One would hope! Maybe the Tay episode can serve as a cautionary example, in that respect.

Comment author: Lumifer 30 March 2016 05:57:52PM 0 points [-]
Comment author: Lamp2 08 April 2016 02:08:15AM 0 points [-]

And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa)

The fact that this kind of mistake is considered more "unethical" than other types of mistakes tells us more about the quirks of the early 21st-century Americans doing the considering than about AI safety.