John_Maxwell_IV comments on Will AGI surprise the world? - Less Wrong
This is my adopted long-term field -- though professionally I work as a bitcoin developer right now -- and those estimates are my own. The 1-2 decade figure is based on existing AGI work such as OpenCog, and on what is known about the generalizations of narrow AI being pursued by Google and a few smaller startups. These are reasonable extrapolations from published project plans, the authors' stated opinions, and, in the case of OpenCog, my own evaluation of the code. 5 years is what it would take if money were not a concern. 2 years is based on my own unpublished simplification of the CogPrime architecture, meant as a blitz to seed-stage oracle AGI, under the same money-is-no-concern conditions.
The only extrapolations I've seen around here, e.g. by lukeprog, involve statistically sampling AI researchers' opinions. Stuart Armstrong showed a year or two ago just how inaccurate this method has been historically, and gave concrete reasons why such statistical methods are useless in this case.
You rate your ability to predict AI above that of AI researchers? It seems to me that, at best, I as an independent observer should give your opinion about as much weight as any other AI researcher's. Any concerns with the predictions of AI researchers in general should also apply to your estimate. (With all due respect.)
This is required reading for anyone wanting to extrapolate AI researcher predictions:
https://intelligence.org/files/PredictingAI.pdf
In short, asking AI researchers (including myself) their opinions is probably the worst way to get an answer here. What you need to do instead is learn the field, try your hand at it yourself, ask AI researchers what they feel are the remaining unsolved problems, investigate those answers, and most critically form your own opinion. That's what I did, and where my numbers came from.
If several people follow this procedure, I would expect to get a better estimate from averaging their results than from trying it out for myself.
That's a reasonable expectation. But inasmuch as one can expect AI researchers to have gone through this exercise in the past (this is where the problem lies, I think), the data is apparently not predictive. Kaj Sotala and Stuart Armstrong looked at this in some detail, with MIRI funding. Some highlights:
"There is little difference between experts and non-experts"
"There is little difference between current predictions, and those known to have been wrong previously"
"It is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors"
http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/
https://intelligence.org/files/PredictingAI.pdf
In other words, asking AI experts is about as useless as it gets when it comes to making predictions about future AI developments. This includes me, objectively. What I advocate people do instead is what I did: investigate the matter yourself and make your own evaluation.
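The statistical point at issue here can be made concrete: averaging many forecasters only helps when their errors are independent, whereas a bias shared across the whole field (e.g. everyone relying on the same flawed methodology) survives averaging no matter how many people are polled. A toy simulation, with all numbers hypothetical:

```python
import random

random.seed(0)
TRUE_VALUE = 20.0  # hypothetical "true" timeline in years

def forecast(shared_bias=0.0):
    # One forecaster's estimate: truth + any field-wide bias
    # + that forecaster's own independent noise.
    return TRUE_VALUE + shared_bias + random.gauss(0, 5)

# Independent errors: the average of many forecasts converges on the truth.
independent = [forecast() for _ in range(1000)]
avg_independent = sum(independent) / len(independent)

# Shared bias: every forecaster is off by the same +10 years, so the
# average converges on the *biased* value, not the truth.
biased = [forecast(shared_bias=10.0) for _ in range(1000)]
avg_biased = sum(biased) / len(biased)

print(round(avg_independent, 1))  # close to 20
print(round(avg_biased, 1))       # close to 30 -- the bias survives averaging
```

This is consistent with both positions above: averaging independent investigations is a genuine improvement, but if expert predictions share common biases (as the Sotala and Armstrong findings suggest), pooling them does not fix the problem.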
It sounds to me as though you are aware that your estimate for when AI will arrive is earlier than most, but also that the reference class your estimate belongs to is not especially reliable. So instead of pushing your estimate as the one true estimate, you're encouraging others to investigate in case they discover what you discovered (because if your estimate is accurate, that would be important information). That seems pretty reasonable. Another thing you could do is create a discussion post where you lay out, in detail, the specific steps you took to conclude that AI will come relatively early, and get others to check your work directly that way. It could be especially persuasive if you contrasted your procedure with the one you think was used to generate other estimates, and explained why you think that procedure is flawed.
"What I discovered" was that all the pieces for a seed AGI exist, are demonstrated to work as advertised, and could be assembled together rather quickly if adequate resources were available to do so. Really all that is required is rolling up our sleeves and doing some major integrative work in putting the pieces together.
With designs that are public knowledge (albeit not collected in one place), this could be done as a well-funded project on the order of 5 years -- an assessment that concurs with what the leaders of the project I have in mind say themselves.
My own unpublished contribution is a refinement of this particular plan which strips out the pieces not strictly needed for a seed UFAI (those components being learnt by the AI rather than hand-coded), and tweaks the remaining structure slightly to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily, the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.
But here's the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that without walking you through the steps involved in creating a UFAI? If I am right, I would then have posted blueprints for the destruction of humankind on the internet. Then the race would really be on.
So what can I do, except encourage people to walk the same path I did, and see if they come to the same conclusions?
That's assuming people take you seriously. Even if your plan is solid, most people will probably write you off as another Crackpot Who Thinks He's Solved an Important Problem.
But I do agree it's a bit of a conundrum. If you have what you think is an important idea, it's natural to worry that people will either (1) steal it or (2) criticize it not on its merits but because they want to feel superior.
I think you entirely missed the point.
I would agree with this in the sense that my stated reasons for the "conundrum" are a bit different from yours.
If you've solved stable self-improvement issues, that's FAI work, and you should damn well share that component.
[retracted]
Read the OP; I didn't make any boastful claims. I simply said UFAI is 2-5 years away with focused effort, and 10-20 years away otherwise. I therefore believe it important that FAI research be refocused on near-term solutions. I state so publicly in order to counter the entrenched meme that seems to have infected everyone here -- that AI is X years away, where X is some arbitrary number that by golly seems like a lot -- in the hope that some people who encounter the post will consider refocusing on near-term work. What's wrong with that?
Disregard my reply. I really shouldn't be posting from my phone at 2 AM. Such a venture rarely ends well.