Houshalter comments on Summoning the Least Powerful Genie - Less Wrong Discussion

-1 Post author: Houshalter 16 September 2015 05:10AM

Comment author: gurugeorge 18 September 2015 07:48:19AM 0 points [-]

I can't remember where I first came across the idea (maybe Daniel Dennett), but the main argument against building AI is that it's simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all the world's relevant resources and scientists onto it. But who would pay for such a thing?

What's the ROI for a super-intelligent, self-aware machine? Not very much, I should think - especially considering the potential dangers.

So yeah, we'll certainly produce machines like the robots in Interstellar - clever expert systems with a simulacrum of self-awareness. Because there's money in it.

But the real thing? Not likely. The only way it will be likely is much further down the line when it becomes cheap enough to do so for fun. And I think by that time, experience with less powerful genies will have given us enough feedback to be able to do so safely.

Comment author: Houshalter 19 September 2015 05:09:02AM *  0 points [-]

I don't think AI will be incredibly expensive. There is a tendency to believe that hard problems require expensive and laborious solutions.

Building a flying machine was a hard problem. An impossible problem, some said. But two guys from a bicycle shop built the first airplane on their own. A lot of hard math problems are solved by lone geniuses, or by the iterative work of many lone geniuses building on each other's results. But rarely by large organized projects.

And there is a ton of gain in building smarter and smarter AIs. You can use them to automate more and more jobs, or to do things even humans can't do.

The robots in Interstellar were AGI. They could fully understand English and work in unrestricted environments. They were already at, or very close to, human-level AI. But there's no reason advancement has to stop at human level. People will continue to tweak it, run it on bigger and faster computers, and eventually have it work on its own code.