ChrisHallquist comments on Lone Genius Bias and Returns on Additional Researchers - LessWrong

Post author: ChrisHallquist 01 November 2013 12:38AM




Comment author: ChrisHallquist 01 November 2013 03:36:31AM 2 points

I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do. I don't expect it to work this time, but if China or the NSA or Google or Goldman Sachs tries to do it with the computing power and AI researchers we'll have 35 years from now, they very well might succeed, even without any deep philosophical insights. After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power. The problem is that this approach is very unlikely to yield something capable of Friendliness, and yet there are massive nearer-term incentives for China and the NSA and everyone else to race towards it.

Ah, yes, you expressed better than I could my other reason for thinking AI is most likely to be built by a big organization. I'd really been struggling with how to say that.

One thought I have, building on this comment of yours: while making kludge AI safe may look impossible, given that sometimes you have to shut up and do the impossible, I wonder if making kludge AI safe might be the less-impossible option here.

EDIT: I'm also really curious to know how Eliezer would respond to the paragraph I quoted above.

Comment author: lukeprog 01 November 2013 04:36:22AM 6 points

I wonder if making kludge AI safe might be the less-impossible option here.

Yeah, that's possible. But as I said here, I suspect that learning whether that's true mostly comes from doing FAI research (and from watching closely as the rest of the world inevitably builds toward Kludge AI). Also: if making Kludge AI safe is the less-impossible option, then at least some FAI research probably works just as well for that scenario, especially the value-loading problem stuff. MIRI hasn't focused on that lately, but that's a local anomaly: some of the next several open problems on Eliezer's to-explain list fall under the value-loading problem.

Comment author: [deleted] 04 November 2013 11:20:03PM 0 points

I'm not sure how value-loading would apply to that situation, since you're implicitly assuming a non-steadfast goal system as the default case of a kludge AI. Wouldn't boxing be more applicable?

Comment author: lukeprog 05 November 2013 04:27:20AM 0 points

Well, there are many ways it could turn out that making Kludge AI safe is the less-impossible option. The way I had in mind was that maybe goal stability and value-loading turn out to be surprisingly feasible with Kludge AI, and you really can just "bolt on" Friendliness. I suppose another way making Kludge AI safe could be the less-impossible option is if it turns out to be possible to keep superintelligences boxed indefinitely but also use them to keep non-boxed superintelligences from being built, or something. In which case boxing research would be more relevant.