taw comments on The Importance of Self-Doubt - Less Wrong

23 points · Post author: multifoliaterose 19 August 2010 10:47PM




Comment author: Eliezer_Yudkowsky 20 August 2010 07:00:52PM 15 points [-]

I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on.

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

My impression is that you've greatly underestimated the difficulty of building a Friendly AI.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Comment author: multifoliaterose 20 August 2010 07:09:42PM 0 points [-]

> I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

I don't understand this remark.

What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you're working on? I can engage with a specific number. I don't know if your objection is that my estimate is off by a single order of magnitude or by many orders of magnitude.

> Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

I should clarify that my comment applies equally to AGI.

I think that I know the scientific community better than you do, and I am confident that if creating an AGI were as easy as you seem to think it is (how easy I don't know, because you didn't give a number), then there would be people in the scientific community working on AGI.

> And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Yes, this possibility has certainly occurred to me. I just don't know what your different non-crazy beliefs might be.

Why do you think that AGI research is so uncommon within academia if it's so easy to create an AGI?

Comment author: khafra 20 August 2010 07:44:57PM *  4 points [-]

This question sounds disingenuous to me. There is a large gap between "10^-9 chance of Eliezer accomplishing it" and "so easy for the average machine learning PhD." Whatever else you think about him, he's proved himself to be at least one or two standard deviations above the average PhD in ability to get things done, and on some dimension of rationality/intelligence/smartness.

Comment author: multifoliaterose 20 August 2010 07:56:56PM *  0 points [-]

My remark was genuine. Two points:

  1. I think that the chance that any group of the size of SIAI will develop AGI over the next 50 years is quite small.

  2. Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done. As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.

Comment author: XiXiDu 20 August 2010 08:16:25PM 3 points [-]

> Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done.

He actually stated that himself several times.

> So I do understand that, and I did set out to develop such a theory, but my writing speed on big papers is so slow that I can't publish it. Believe it or not, it's true.

Yes, OK, this does not mean his intellectual power isn't on par, only that his ability to function in an academic environment may not be.

> As far as I know he has no experience with narrow AI research.

Well...

> I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. [...] And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?"

Comment author: Vladimir_Nesov 20 August 2010 08:51:12PM 1 point [-]

> As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.

Most things can be studied through the use of textbooks. Some familiarity with AI is certainly helpful, but it seems that most AI-related knowledge is not on the track to FAI (and most current AGI stuff is nonsense or even madness).

Comment author: multifoliaterose 20 August 2010 10:04:58PM *  1 point [-]

The reason that I see familiarity with narrow AI as a prerequisite to AGI research is that it gives a sense of the difficulties present in designing machines to complete even mundane tasks. My thinking is the same as that of Scott Aaronson in his post "The Singularity Is Far": "there are vastly easier prerequisite questions that we already don't know how to answer."

Comment author: Vladimir_Nesov 20 August 2010 10:08:43PM 2 points [-]

FAI research is not AGI research, at least not at present, when we still don't know what it is exactly that our AGI will need to work towards, how to formally define human preference.

Comment author: multifoliaterose 20 August 2010 10:13:18PM 1 point [-]

So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally. That's where my low probability was coming from.

It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.

As I've said, I find your position sophisticated and respect it. I have to think more about your present point - reflecting on it may indeed alter my thinking about this matter.

Comment author: Vladimir_Nesov 20 August 2010 10:27:03PM *  6 points [-]

> So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally.

Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.

> It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.

It seems obviously infeasible to me that governments will chance upon this level of rationality. Also, we are clearly not on the same page if you say things like "implement in any AI". Friendliness is not to be "installed in AIs", Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that's possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.

See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.

Comment author: Wei_Dai 11 September 2010 08:13:32PM *  2 points [-]

> It seems obviously infeasible to me that governments will chance upon this level of rationality.

I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven't done that badly. From an article about RAND:

> Futurology was the magic word in the years after the Second World War, and because the Army and later the Air Force didn't want to lose the civilian scientists to the private sector, Project Research and Development, RAND in short, was founded in 1945 together with the aircraft manufacturer Douglas and in 1948 was converted into a corporation. RAND established forecasts for the coming, cold future and developed, towards this end, the 'Delphi' method.
>
> RAND worshipped rationality as a god and attempted to quantify the unpredictable, to calculate it mathematically, to bring the fear within its grasp and under control - something that seemed to many Americans spooky and made the Soviet Pravda call RAND the "American academy of death and destruction."

(Huh, this is the first time I've heard of the Delphi Method.) Many of the big names in game theory (von Neumann, Nash, Shapley, Schelling) worked for RAND at some point, and developed their ideas there.

Comment author: multifoliaterose 20 August 2010 10:43:08PM *  0 points [-]

> Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.

Yes, this is the point that I had not considered and which is worthy of further consideration.

> It seems obviously infeasible to me that governments will chance upon this level of rationality.

Possibly what I mention could be accomplished with lobbying.

> Also, we are clearly not on the same page if you say things like "implement in any AI". Friendliness is not to be "installed in AIs", Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that's possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.
>
> See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.

Okay, so to clarify, I myself am not personally interested in Friendly AI research (which is why the points that you're mentioning were not in my mind before), but I'm glad that there are some people (like you) who are.

The main point that I'm trying to make is that I think that SIAI should be transparent, accountable, and place high emphasis on credibility. I think that these things would result in SIAI having much more impact than it presently does.

Comment author: Emile 20 August 2010 09:04:58PM 3 points [-]

> I think that I know the scientific community better than you, and have confidence that if creating an AGI were as easy as you seem to think it is (how easy I don't know because you didn't give a number) then there would be people in the scientific community who would be working on AGI.

Um, and there aren't?

Comment author: multifoliaterose 20 August 2010 09:53:55PM 1 point [-]

Give some examples. There may be a few people in the scientific community working on AGI, but my understanding is that basically everybody is doing narrow AI.

Comment author: Vladimir_Nesov 20 August 2010 11:24:04PM *  5 points [-]

What is currently called the AGI field will probably bear no fruit, except perhaps in the end-game, when it borrows then-sufficiently-powerful tools from more productive areas of research (and destroys the world). "Narrow AI" develops the tools that could eventually allow the construction of random-preference AGI.

Comment author: Nick_Tarleton 20 August 2010 09:57:49PM *  4 points [-]

The folks here, for a start.