Vladimir_Nesov comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: multifoliaterose 20 August 2010 10:04:58PM *  1 point [-]

The reason that I see familiarity with narrow AI as a prerequisite to AGI research is that it gives a sense of the difficulties involved in designing machines to complete even mundane tasks. My thinking is the same as Scott Aaronson's in his The Singularity Is Far posting: "there are vastly easier prerequisite questions that we already don't know how to answer."

Comment author: Vladimir_Nesov 20 August 2010 10:08:43PM 2 points [-]

FAI research is not AGI research, at least not at present, when we still don't know what it is exactly that our AGI will need to work towards, how to formally define human preference.

Comment author: multifoliaterose 20 August 2010 10:13:18PM 1 point [-]

So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally. That's where my low probability was coming from.

It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.

As I've said, I find your position sophisticated and respect it. I have to think more about your present point - reflecting on it may indeed alter my thinking about this matter.

Comment author: Vladimir_Nesov 20 August 2010 10:27:03PM *  6 points [-]

So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally.

Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.

It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.

It seems obviously infeasible to me that governments will chance upon this level of rationality. Also, we are clearly not on the same page if you say things like "implement in any AI". Friendliness is not to be "installed in AIs", Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that's possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.

See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.

Comment author: Wei_Dai 11 September 2010 08:13:32PM *  2 points [-]

It seems obviously infeasible to me that governments will chance upon this level of rationality.

I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven't done that badly. From an article about RAND:

Futurology was the magic word in the years after the Second World War, and because the Army and later the Air Force didn't want to lose their civilian scientists to the private sector, Project Research and Development (RAND for short) was founded in 1945 together with the aircraft manufacturer Douglas, and in 1948 it was converted into a corporation. RAND established forecasts for the coming, cold future and developed, toward this end, the 'Delphi' method.

RAND worshipped rationality as a god and attempted to quantify the unpredictable, to calculate it mathematically, to bring the fear within its grasp and under control - something that seemed spooky to many Americans and made the Soviet Pravda call RAND the "American academy of death and destruction."

(Huh, this is the first time I've heard of the Delphi Method.) Many of the big names in game theory (von Neumann, Nash, Shapley, Schelling) worked for RAND at some point, and developed their ideas there.

Comment author: rhollerith_dot_com 11 September 2010 08:27:38PM *  1 point [-]

I wonder if we systematically underestimate the level of rationality of major governments.

Data point: the internet is almost completely a creation of government. Some say entrepreneurs and corporations played a large role, but except for corporations that specialize in doing contracts for the government, they did not begin to exert a significant effect until 1993, whereas government spending on the research that led to the internet began in 1960, and the direct predecessor of the internet (the ARPAnet) became operational in 1969.

Both RAND and the internet were created by the part of the government most involved in an enterprise (namely, the arms race during the Cold War) on which depended the long-term survival of the nation in the eyes of most decision makers (including voters and juries).

EDIT: significant backpedalling in response to downvotes in my second paragraph.

Comment author: gwern 11 September 2010 10:37:03PM 1 point [-]

RAND has produced a lot of good work (I like their recent reports on Iran), but keep in mind that big misses can undo a lot of their credit; for example, even RAND acknowledges (in their retrospective published this year or last) that they screwed up massively with Vietnam.

Comment author: mattnewport 11 September 2010 08:47:50PM *  1 point [-]

I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven't done that badly. From an article about RAND:

This is not really a relevant example in the context of Vladimir_Nesov's comment. Certain government-funded groups (often within the military, interestingly) have on occasion shown decent levels of rationality.

The suggestion he was replying to, however - to "develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that" - requires rational government policy-making and law-making, not just rare pockets of rationality within government-funded institutions. That is something that is essentially non-existent in modern democracies.

Comment author: Vladimir_Nesov 11 September 2010 09:07:25PM *  2 points [-]

It's not adequate to "get governments to mandate that [Friendliness] be implemented in any AI", because Friendliness is not a robot-building standard - refer to the rest of my comment. The statement about government rationality was more tangential, about governments doing anything at all concerning such a strange topic, and wasn't meant to imply that this particular decision would be rational.

Comment author: Wei_Dai 11 September 2010 09:07:32PM 0 points [-]

"Something like that" could be for a government funded group to implement an FAI, which, judging from my example, seems within the realm of feasibility (conditioning on FAI being feasible at all).

Comment author: multifoliaterose 20 August 2010 10:43:08PM *  0 points [-]

Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.

Yes, this is the point that I had not considered and which is worthy of further consideration.

It seems obviously infeasible to me that governments will chance upon this level of rationality.

Possibly what I mention could be accomplished with lobbying.

Also, we are clearly not on the same page if you say things like "implement in any AI". Friendliness is not to be "installed in AIs", Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that's possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.

See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.

Okay, so to clarify, I myself am not personally interested in Friendly AI research (which is why the points that you're mentioning were not in my mind before), but I'm glad that there are some people (like you) who are.

The main point that I'm trying to make is that I think SIAI should be transparent, accountable, and place high emphasis on credibility. I think that these things would result in SIAI having much more impact than it presently does.