
G0W51 comments on Open thread, Aug. 03 - Aug. 09, 2015 - Less Wrong Discussion

Post author: MrMind 03 August 2015 07:05AM



Comment author: G0W51 04 August 2015 03:58:20AM 1 point

Why don't people (outside small groups like LW) advocate the creation of superintelligence much? If it is Friendly, it would have tremendous benefits. If superintelligence's creation isn't being advocated out of fears of it being unFriendly, then why don't more people advocate FAI research? Is it just too long-term for people to really care about? Do people not think managing the risks is tractable?

Comment author: MrMind 04 August 2015 07:08:51AM 3 points

One answer could be that people don't really think that a superintelligence is possible. It doesn't even enter in their model of the world.

Comment author: [deleted] 04 August 2015 09:03:20AM 1 point
Comment author: G0W51 04 August 2015 12:20:26PM 0 points

I think something else is going on. The responses to this question about the feasibility of strong AI mostly stated that it was possible, though selection bias is probably largely at play: knowledgeable people would be more likely to answer than ignorant ones.

Comment author: MrMind 05 August 2015 07:11:08AM 1 point

AI is certainly a concept that is more and more present in Western culture, but only as fiction, as far as I can tell.
The man in the street doesn't take it seriously, as in "it's really starting to happen." The media may be paving the way for a change there, as the recent surge of AI-related movies suggests, but I would bet the idea is still very far from most people's realm of possibilities. Also, even once the reality of AI was established, it would still be a jump to believe in the possibility of an intelligence superior to a human's, a leap that for me is tiny but that I suspect would not be so small for many (self-importance and all that).

Comment author: G0W51 06 August 2015 09:18:38AM 0 points

But other than self-importance, why don't people take it seriously? Is it otherwise just due to the absurdity and availability heuristics?

Comment author: FrameBenignly 04 August 2015 10:58:57PM 2 points

If you're not reading about futurism, it's unlikely to come up. There aren't any former presidential candidates giving lectures about it, so most people have never heard of it. Politics isn't about policy, as Robin Hanson likes to say.

Comment author: IffThen 07 August 2015 03:50:55AM 1 point

FWIW, I have been a long-time reader of SF, have long believed in strong AI, and am familiar with Friendly and unFriendly AIs and the idea of the singularity, but I hadn't heard much serious discussion of the development of superintelligence. My experience and beliefs are probably not entirely normal, but they arose from a context close to normal.

My thought process, until I started reading Less Wrong and related sites, was basically split between "scientists are developing bigger and bigger supercomputers, but they are all assigned to narrow tasks -- playing chess, obscure math problems, managing complicated data traffic" and "intelligence is a difficult task akin to teaching a computer to walk bipedally or recognize complex visual images, which will take forever, with lots of dead ends." Most of what I had read about spontaneous AI was either fairly silly SF premises (lost packets on the internet become sentient!) or set in the far future, after many decades of work on AI finally resulting in a super-AI.

I also believe that science reporting downplays the AI aspects of computing advances. Siri, self-driving cars, etc. are no longer referred to as AI the way they would have been when I was growing up; AI is by definition something that is science fiction or well off in the future. Anything we have now is framed as just an interesting program, not an "intelligence" of any sort.