Jonathan_Graehl comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: Vladimir_Nesov 20 August 2010 09:07:29PM *  7 points

You wrote elsewhere in the thread:

I assign a probability of less than 10^(-9) to [Eliezer] succeeding in playing a critical role on the Friendly AI project that [he's] working on.

Does that mean we would need 10^9 Eliezer-level researchers to make progress? Considering that Eliezer is probably at about a 1-in-10000 level of ability (setting aside other factors that make research in FAI possible, such as getting into the frame of mind of understanding the problem and taking it seriously), we'd need about 1000 times more human beings than currently exist on the planet to produce an FAI, according to your estimate.
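The arithmetic behind this reductio can be sketched as follows. This is a naive reading in which the 10^(-9) figure is treated as an expected success rate per qualified researcher; the 2010 world-population figure (~6.9 billion) is an assumption not stated in the thread:

```python
# Back-of-the-envelope check of the comment's arithmetic (assumed figures noted below).
p_success = 1e-9   # claimed probability that one such researcher plays a critical role
rarity = 1e4       # 1 in 10000 people are at the required level of ability

# Naively, ~1/p researchers of that level would be needed in expectation,
# and each such researcher implies ~rarity people in the general population.
researchers_needed = 1 / p_success           # ~10^9 researchers
people_needed = researchers_needed * rarity  # ~10^13 people overall

world_population_2010 = 6.9e9                # assumed: rough 2010 figure
ratio = people_needed / world_population_2010
print(ratio)  # on the order of 1000x the current population, as the comment says
```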

How does this claim coexist with the one you've made in the above comment?

I believe that you (and others working on the FAI problem) can credibly hold the view that your work has higher expected value to humanity than that of a very large majority (e.g. 99.99%) of the population. Maybe higher.

It doesn't compute: there is an apparent inconsistency between these two claims. (I see some ways to mend it by charitable interpretation, but I'd rather you make the intended meaning explicit yourself.)

Comment author: Jonathan_Graehl 20 August 2010 10:16:13PM 2 points

Eliezer is probably at about 1 in 10000 level of ability [of G]

Agreed, and I like to imagine that he reads that and thinks to himself "only 10000? thanks a lot!" :)

In case anyone takes the above too seriously, I consider it splitting hairs to talk about how far beyond the 1-in-10000 level anyone's intelligence goes - eventually, motivation, luck, and aesthetic sense / rationality begin to dominate in determining results, IMO.