John_Maxwell_IV comments on Will AGI surprise the world? - Less Wrong

Post author: lukeprog 21 June 2014 10:27PM


Comment author: [deleted] 22 June 2014 06:33:02AM 2 points [-]

That's a reasonable expectation. But inasmuch as one can expect AI researchers to have gone through this exercise in the past (this is where the problem lies, I think), the data are apparently not predictive. Kaj Sotala and Stuart Armstrong looked at this in some detail, with MIRI funding. Some highlights:

"There is little difference between experts and non-experts" "There is little difference between current predictions, and those known to have been wrong previously" "It is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors"

http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/

https://intelligence.org/files/PredictingAI.pdf

In other words, asking AI experts is about as useless as it can get when it comes to making predictions about future AI developments. This includes myself, objectively. What I advocate people do instead is what I did: investigate the matter yourself and make your own evaluation.

Comment author: John_Maxwell_IV 22 June 2014 08:37:33AM 6 points [-]

It sounds to me as though you are aware that your estimate for when AI will arrive is earlier than most estimates, but you're also aware that the reference class of which your estimate is a part is not especially reliable. So instead of pushing your estimate as the one true estimate, you're encouraging others to investigate in case they discover what you discovered (because if your estimate is accurate, that would be important information). That seems pretty reasonable. Another thing you could do is create a discussion post where you lay out in detail the specific steps you took to come to the conclusion that AI will come relatively early, and get others to check your work directly that way. It could be especially persuasive if you were to contrast your procedure with the one you think was used to generate other estimates, and explain why you think that procedure was flawed.

Comment author: [deleted] 22 June 2014 08:55:20AM *  2 points [-]

"What I discovered" was that all the pieces for a seed AGI exist, are demonstrated to work as advertised, and could be assembled together rather quickly if adequate resources were available to do so. Really all that is required is rolling up our sleeves and doing some major integrative work in putting the pieces together.

With designs that are public knowledge (albeit not contained in one place), this could be done as a well-funded project on the order of 5 years -- an assessment that concurs with what the leaders of the project I have in mind say as well.

My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.

But here's the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that, without walking you through the steps involved in creating a UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.

So what can I do, except encourage people to walk the same path I did, and see if they come to the same conclusions?

Comment author: brazil84 23 June 2014 03:49:16PM *  2 points [-]

But here's the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that, without walking you through the steps involved in creating a UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.

That's assuming people take you seriously. Even if your plan is solid, probably most people will write you off as another Crackpot Who Thinks He's Solved an Important Problem.

But I do agree it's a bit of a conundrum. If you have what you think is an important idea, it's natural to worry that people will either (1) steal your idea or (2) criticize it not on its merits but because they want to feel superior.

Comment author: [deleted] 23 June 2014 05:04:33PM 0 points [-]

But I do agree it's a bit of a conundrum. If you have what you think is an important idea, it's natural to worry that people will either (1) steal your idea or (2) criticize it not on its merits but because they want to feel superior.

I think you entirely missed the point.

Comment author: brazil84 23 June 2014 06:09:30PM 1 point [-]

I think you entirely missed the point.

I would agree with this in the sense that my stated reasons for the "conundrum" are a bit different from yours.

Comment author: [deleted] 23 June 2014 06:26:50PM 0 points [-]

Well perhaps instead of insinuating motives, you could share your thoughts about the actual stated reason? At what point does one have a moral obligation not to share information about a dangerous idea on a public forum?

Comment author: brazil84 24 June 2014 12:49:27AM 1 point [-]

Well perhaps instead of insinuating motives,

I was thinking of my own motives in similar situations; sorry if you took it as a characterization of yours. I do see it could have been read that way.

you could share your thoughts about the actual stated reason?

I would suggest you e-mail your blueprint to a few of the posters here with the understanding they keep it to themselves. If even one long-term poster says "I've read Friedenbach's arguments and while they are confidential, I now agree that his estimate of the time to AI is actually pretty good," then I think your argument is starting to become persuasive.

Comment author: [deleted] 24 June 2014 12:51:28AM *  0 points [-]

Sorry I didn't mean to come off so abrasively either. I was just being unduly snarky. The internet is not good for conveying emotional state :\

Comment author: [deleted] 25 June 2014 09:04:49AM 0 points [-]

My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.

If you've solved stable self-improvement issues, that's FAI work, and you should damn well share that component.

Comment author: Crux 24 June 2014 04:07:31PM *  -1 points [-]

[retracted]

Comment author: [deleted] 24 June 2014 04:33:19PM 0 points [-]

Read the OP; I didn't make any grandiose claims. I simply said UFAI is 2-5 years away with focused effort, and 10-20 years away otherwise. I therefore believe it important that FAI research be refocused on near-term solutions. I state so publicly in order to counter the entrenched meme that seems to have infected everyone here, saying that AI is X years away, where X is some arbitrary number that by golly seems like a lot, in the hope that some people who encounter the post consider refocusing on near-term work. What's wrong with that?

Comment author: Crux 24 June 2014 04:38:09PM 0 points [-]

Disregard my reply. I really shouldn't be posting from my phone at 2 AM. Such a venture rarely ends well.

Comment author: [deleted] 24 June 2014 04:58:59PM 1 point [-]

Yeah, I've been there before. No worries ;)