JoshuaZ comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky | 11 May 2012 04:31AM | 256 points


Comment author: JoshuaZ 17 May 2012 08:35:42PM 3 points

There are lots of tipoffs to what is fictional and what is real. It might notice, for example, that the Wikipedia article on fiction describes exactly what fiction is, and then note that Wikipedia describes the One Ring as fiction while early warning systems are not. I'm not claiming that it will necessarily have an easy time with this. But the point is that there are not that many steps here, and no single step by itself looks extremely unlikely once one has a smart entity (which, frankly, is the main issue to my mind: I consider recursive self-improvement to be unlikely).

Comment author: kalla724 17 May 2012 09:40:19PM 1 point

We are trapped in an endless chain here. The computer would still somehow have to deduce that the Wikipedia entry that describes the One Ring is real, while the One Ring itself is not.

Comment author: jacob_cannell 17 May 2012 11:06:27PM 0 points

We observe that Wikipedia is mainly truthful. From that we infer that the entry describing the One Ring is real. From the use of the terms fiction/story in that entry, we infer that the One Ring itself is not real.

Somehow you learned that Wikipedia is mainly truthful/nonfictional and that the One Ring is fictional. So your question/objection/doubt is really just the typical boring doubt of AGI feasibility in general.
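A minimal sketch of that inference chain in Python, assuming the public MediaWiki query API; the "fiction"-substring test on category names and the example titles are illustrative assumptions (a crude stand-in for real inference), not a serious classifier:

```python
# Naive sketch: treat Wikipedia's category metadata as a (fallible) signal
# for whether a topic is fictional, using the standard MediaWiki query API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def looks_fictional(title: str) -> bool:
    """Return True if any category on the article mentions 'fiction'.

    This is a deliberately crude heuristic: it assumes the article exists
    and that fiction-related categories contain the word 'fiction'.
    """
    params = {
        "action": "query",
        "titles": title,
        "prop": "categories",
        "cllimit": "max",
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    for page in pages.values():
        # Missing pages have no "categories" key; treat them as non-fiction.
        for cat in page.get("categories", []):
            if "fiction" in cat["title"].lower():
                return True
    return False

# Illustrative usage (hypothetical outputs, depending on current categories):
# looks_fictional("One Ring")            -> likely True
# looks_fictional("Early warning system") -> likely False
```

The point of the sketch is that the chain is short: fetch metadata, apply a learned regularity ("Wikipedia is mainly truthful"), and read off the fiction label. Each step is mundane once you grant a competent agent.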

Comment author: JoshuaZ 17 May 2012 11:13:14PM 1 point

But even humans have trouble with this sometimes. I was recently reading the Wikipedia article on Hornblower and the Crisis, which contains a link to the article on Francisco de Miranda. When I clicked through, it took me some time and several cues to realize that de Miranda was a historical figure rather than a fictional character.

> So your question/objection/doubt is really just the typical boring doubt of AGI feasibility in general.

Isn't Kalla's objection more the claim that fast takeovers won't happen because, even with all this data, the problem of understanding humans and our basic cultural norms will take the AI a long time to solve, and that in the meantime we'll develop a detailed understanding of it, while if it is hostile it is likely to make obvious mistakes?

Comment author: Strange7 22 May 2012 11:49:34PM -1 points

Why would the AI be mucking around on Wikipedia to sort truth from falsehood, when Wikipedia itself has been criticized for various errors and is fundamentally vulnerable to vandalism? Primary sources are where it's at. And look through the text of The Hobbit and The Lord of the Rings: it's presented as a historical account, translated by a respected professor, with extensive footnotes. There's a lot of cultural context necessary to tell the difference.