lukeprog comments on LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' - Less Wrong

Post author: shminux 17 May 2013 07:45PM


Comment author: shminux 17 May 2013 10:33:09PM 1 point

Right. I should have said "wants", not "does". In any case, I'm wondering how concerned you are, given the budget discrepancy and the quality and quantity of Google's R&D brains.

Comment author: lukeprog 17 May 2013 10:50:41PM * 6 points

In the long term, very concerned.

In the short term, not so much. It's very unlikely Google or anyone else will develop HLAI in the next 15 years.

Comment author: Eliezer_Yudkowsky 18 May 2013 05:13:41AM 3 points

Fifteen years, plus (more importantly) everyone besides Google, is too much possibility width to use the term "very unlikely".

Comment author: lukeprog 18 May 2013 08:33:48PM 3 points

I think I'd put something like 5% on AI in the next 15 years. Your estimate is higher, I imagine.

Comment author: Eliezer_Yudkowsky 19 May 2013 12:29:16PM 4 points

EDIT: On further reflection, my "Huh?" doesn't square with the higher probabilities I've been giving lately of global vs. basement default-FOOMS, since that's a substantial chunk of probability mass and you can see more globalish FOOMs coming from further off. 15/5% would make sense given a 1/4 chance of a not-seen-coming-15-years-off basement FOOM, sometime in the next 75 years. Still seems a bit low relative to my own estimate, which might be more like 40% for a FOOM sometime in the next 75 years that we can't see coming any better than this from say 15 years off, so... but actually 1/2 of the next 15 years are only 7.5 years off. Okay, this number makes more sense now that I've thought about it further. I still think I'd go higher than 5% but anything within a factor of 2 is pretty good agreement for asspull numbers.
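The back-of-envelope arithmetic in the comment above can be sketched as follows. This is a minimal illustration, assuming the simplest model implied by the comment: FOOM timing distributed uniformly over a 75-year window, so the probability for the next 15 years is just the 75-year probability scaled by 15/75. The function name is made up for the sketch.

```python
def p_next_15_years(p_foom_75yr, window=75.0, horizon=15.0):
    """P(unforeseen FOOM in the next `horizon` years), given a
    `p_foom_75yr` chance of one sometime in a `window`-year span,
    with timing assumed uniform over that span."""
    return p_foom_75yr * horizon / window

# Luke's 5% matches a 1/4 chance over 75 years:
print(p_next_15_years(0.25))  # 0.05
# Eliezer's ~40% over 75 years gives 8%, within a factor of 2 of 5%:
print(p_next_15_years(0.40))  # 0.08
```

The uniform-timing assumption is the crude part; the comment's caveat that half of the next 15 years lie only 7.5 years off is exactly the kind of non-uniformity this sketch ignores.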

Comment author: lukeprog 19 May 2013 07:31:26PM 3 points

anything within a factor of 2 is pretty good agreement for asspull numbers

This made me LOL. I hadn't heard that term before.

Comment author: Eliezer_Yudkowsky 19 May 2013 08:21:26AM 1 point

I don't understand where you're getting that from. It obviously isn't an even distribution over AI at any point in the next 300 years. This implies your probability distribution is much more concentrated than mine, i.e., compared to me you think we have much better data about the absence of AI over the next 15 years specifically, compared to the 15 years after that. Why is that?

Comment author: ciphergoth 19 May 2013 09:38:56AM 4 points

You guys have had a discussion like this here on LW before, and you mention your disagreement with Carl Shulman in your foom economics paper. This is a complex subject and I don't expect you all to come to agreement, or even to a perfect understanding of each other's positions, in a short period of time, but it seems like you know surprisingly little about these other positions. Given its importance to your mission, I'm surprised you haven't set aside a day for the three of you and whoever else you think might be needed to at least come to understand each other's estimates on when foom might happen.

Comment author: Eliezer_Yudkowsky 19 May 2013 10:53:53AM 3 points

We spent quite a while on this once, but that was a couple of years ago and apparently things got out of date since then (also I think this was pre-Luke). It does seem like we need to all get together again and redo this, though I find that sort of thing very difficult and indeed outright painful when there's not an immediate policy question in play to ground everything.

Comment author: Halfwit 18 May 2013 11:58:50PM 1 point

5% is pretty high considering the purported stakes.

Comment author: Alsadius 19 May 2013 02:12:31AM -1 points

Not necessarily. If it takes us 15 years to kludge something together that's twice as smart as a single human, I don't think it'll be capable of an intelligence explosion on any time scale that could outmaneuver us. Even if the human-level AI can make something better in a tenth the time, we still have more than a year to react before even worrying about superhuman AI, never mind the sort of AI that's so far superhuman that it actually poses a threat to the established order. An AI explosion would have to happen in hardware, and hardware can't explode in capability so fast that it outstrips humans' ability to notice it happening.

One machine that's about as smart as a human and takes millions of dollars worth of hardware to produce is not high stakes. It'll bugger up the legal system something fierce as we try to figure out what to do about it, but it's lower stakes than any of a hundred ordinary problems of politics. It requires an AI that is significantly smarter than a human, and that has the capability of upgrading itself quickly, to pose a threat that we can't easily handle. I suspect at least 4.9 of that 5% is similar low-risk AI. Just because the laws of physics allow for something doesn't mean we're on the cusp of doing it in the real world.

Comment author: elharo 22 May 2013 12:44:13AM 1 point

You substantially overrate the legal system's concern with simple sentient rights and basic dignity. The legal system will have no problem determining what to do with such a machine. It will be the property of whoever happens to own it under the same rules as any other computer hardware and software.

Now mind you, I'm not saying that's the right answer (for more than one definition of right) but it is the answer the legal system will give.

Comment author: Alsadius 22 May 2013 04:42:58AM 0 points

It'll be the default, certainly. But I suspect there will be enough room for lawyers to play that it'll stay wrapped up in red tape for many years. (Interestingly, I think that might actually make it more dangerous in some ways: if the AI really does leapfrog humans on intelligence, giving it years while we wait on lawyers might be a dangerous thing to do. On the other hand, truckloads of silicon chips generally don't get shipped into the middle of a legal dispute like that, so the red tape might slow things down too.)

Comment author: lukeprog 19 May 2013 12:04:01AM 1 point

No doubt!