Eliezer_Yudkowsky comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM




Comment author: Eliezer_Yudkowsky 21 August 2010 12:46:41AM 8 points

Imagine an AI as intelligent and well informed as an FAI, but one without much power - as a result of physical safeguards, say

There's some part of my brain that just processes "the Internet" as a single person and wants to scream "But I told you this a thousand times already!"

http://yudkowsky.net/singularity/aibox

Comment author: steven0461 21 August 2010 01:03:39AM 2 points

Surely it's possible to imagine a successfully boxed AI.

Comment author: wedrifid 21 August 2010 01:23:11AM 3 points

I could imagine successfully beating Rybka at chess too, but it would be foolish of me to take any action that treated that as a serious possibility. If motivated humans cannot be counted on to box an Eliezer, then expecting a motivated, overconfident, prestige-seeking AI creator to successfully box his AI creation is reckless in the extreme.

Comment author: steven0461 21 August 2010 01:30:46AM * 2 points

What Eliezer seemed to be objecting to was someone proposing a successfully boxed AI as an example of why "able to destroy humanity" can't be a part of the definition of "AI" (or more charitably, "artificial superintelligence"). For boxed AI to be such an example (as opposed to a good idea to actually strive toward), it only has to be not knowably impossible.

Comment author: ata 21 August 2010 01:56:43AM * 1 point

I see your point there. But I think this discussion went off in an irrelevant direction, though that's probably my fault for not being clear enough. When I put "powerful enough to destroy humanity" in that criterion, I mainly meant "powerful" as in "really powerful optimization process" (mathematical optimization power), not "power" as in direct influence over the world. We're inferring that the former will usually lead fairly easily to the latter, but they are not identical. So "powerful enough to destroy humanity" would mean something like "powerful enough to figure out a good subjunctive plan to do so given enough information about the world, even if it has no output streams and is kept in an airtight safe at the bottom of the ocean".

Comment author: wedrifid 21 August 2010 01:39:51AM * 0 points

Reading back further into the context, I see your point. Imagining such an AI is sufficient, and Eliezer does seem to be confusing a priori with obvious. I expect he just completed a pattern based on "AI box" and so didn't really understand the point being made; he should have replied with a "Yes, but". (I, of course, made a similar mistake, in that I wasn't immediately prompted to click back up the tree beyond Eliezer's comment.)

Comment author: dclayh 21 August 2010 02:21:27AM * 2 points

Eliezer, while you're defending yourself against charges of self-aggrandizement, it troubles me a little that the AI Box page states your record is 2 for 2, and not 3 for 5.

Comment author: Eliezer_Yudkowsky 21 August 2010 07:10:05AM 4 points

Obviously I'm not trying to keep it a secret; I just haven't gotten around to editing the page.

Comment author: dclayh 21 August 2010 07:46:00PM 1 point

I'm sure that's the case; I'm just saying it looks bad. Presumably you'd like to be Caesar's wife?

Comment author: Oscar_Cunningham 21 August 2010 05:16:41PM * 0 points

Move it up your to-do list; it's been incorrect long enough to look suspicious to others. Just add a footnote if you don't have time to give all the details.

Comment author: Perplexed 21 August 2010 12:58:17AM 0 points

Thanks for the link. If I had already known about it, I would have asked for it by name. :)

Eliezer, you have written a lot. Some people have read only some of it. Some people have read much of it, but forgotten some. Keep your cool. This situation really ought not to be frustrating to you.

Comment author: Eliezer_Yudkowsky 21 August 2010 01:02:48AM 5 points

Oh, I know it's not your fault, but seriously, have "the Internet" ask you the same question 153 times in a row and see if you don't get slightly frustrated with "the Internet".

Comment author: Perplexed 21 August 2010 01:16:08AM 2 points

Yeah, after reading your "some part of my brain" comment a second time, I realized I had misinterpreted it. Though I will point out that my question was not directed to you. You should learn to delegate the task of becoming frustrated with the Internet.

I read the article (though not yet any of the transcripts). Very interesting. I hope that some tests using a gatekeeper committee are tried someday.

Comment author: timtyler 21 August 2010 06:49:38AM * 0 points

Computer programmers do not normally test their programs by having a committee of humans hold the program down; the restraints themselves are mostly technological. We will be able to enlist the assistance of technological gatekeepers too, if necessary.

Today's prisons have pretty configurable security levels. The real issue will probably be how much people want to pay for such security. If an agent does escape, will it cause lots of damage? Can we simply disable it before it has a chance to do anything undesirable? Will it simply be crushed by the numerous powerful agents that have already been tested?