Kawoomba comments on Open Thread March 7 - March 13, 2016 - Less Wrong Discussion

4 Post author: Elo 07 March 2016 03:24AM

Comment author: Kawoomba 09 March 2016 10:09:06AM 0 points

I wonder if/how that win will affect estimates of the advent of AGI within the AI community.

Comment author: Vaniver 09 March 2016 02:31:33PM 1 point

I've already seen some goalpost-moving at Hacker News. I do hope this convinces some people, though.

Comment author: dxu 09 March 2016 05:45:37PM 1 point

People who engage in such goalpost-moving have already written down their bottom line, most likely because AI risk pattern-matches to the literary genre of science fiction. I wouldn't expect such people to be swayed by any sort of empirical evidence short of the development of strong AGI itself. Any arguments they offer against strong AGI amount to little more than rationalization. (Of course, that says nothing about the strengths of the arguments themselves, which must be evaluated on their own merits.)

Comment author: [deleted] 09 March 2016 11:40:15PM *  1 point

It is entirely possible to firmly believe in the inevitability of near-term AGI without subscribing to AI risk fears. I wouldn't conflate the two.

Comment author: dxu 11 March 2016 05:46:31PM 1 point

Most of the arguments against AI risk I've seen (in popular media, that is) take the form of arguments against AGI, full stop. Naturally there exist more nuanced arguments (though personally I've yet to see any I find convincing), but I was referring to the arguments made by a specific part of the population, i.e. "people who engage in such goalpost-moving"--and in my (admittedly limited) experience, those sorts of people don't usually put forth very deep arguments.

Comment author: [deleted] 11 March 2016 09:21:44PM *  1 point

Here are some arguments against AI x-risk positions from expert sources rather than the popular media:

http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials

http://time.com/3641921/dont-fear-artificial-intelligence/

In any case, I think you have unnecessarily limited yourself to considering viewpoints expressed in media that tend to act as echo chambers. What a bunch of talking heads say about a technical question is neither very interesting nor very relevant.

Comment author: Furcas 12 March 2016 09:26:16AM 0 points

The Time article doesn't say anything interesting.

Goertzel's article (the first link you posted) is worth reading, although about half of it doesn't actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article, I would enjoy discussing the details, particularly his conception of AI architectures that aren't goal-driven.

Comment author: [deleted] 12 March 2016 05:50:30PM *  0 points

I updated my earlier comment to say "against AI x-risk positions," which I think is a more accurate description of the arguments. There are others as well, e.g. Andrew Ng, but I think Goertzel does the best job of explaining why the AI x-risk arguments themselves are possibly flawed: they are simplistic in how they model AGIs, and therefore draw simple conclusions that don't hold up in the real world.

And yes, I think more LW'ers and AI x-risk people should read and respond to Goertzel's superintelligence article. I don't agree with it 100%, but there are some valid points in there. And one doesn't become effective by reading only viewpoints one already agrees with...