HalFinney comments on A Premature Word on AI - Less Wrong

Post author: Eliezer_Yudkowsky 31 May 2008 05:48PM

Comment author: HalFinney 31 May 2008 11:59:30PM 1 point

Eliezer, in your AI Risk paper, at the end of section 11 you summarize your position:

I do not assign strong confidence to the assertion that Friendly AI is easier than human augmentation, or that it is safer. There are many conceivable pathways for augmenting a human. Perhaps there is a technique which is easier and safer than AI, which is also powerful enough to make a difference to existential risk. If so, I may switch jobs. But I did wish to point out some considerations which argue against the unquestioned assumption that human intelligence enhancement is easier, safer, and powerful enough to make a difference.

On the other hand, you imply above that you now do not think it plausible that human augmentation could happen sooner than FAI, and you indicate that you could write a knockdown argument against that possibility. This seems inconsistent with your view in the paper.