Jonathan_Graehl comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: Jonathan_Graehl 16 August 2010 10:01:23PM 0 points

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

There may be other, significantly simpler approaches that we haven't yet found, obviously. Assuming AGI happens, it will have been a race between the specific (type of) path you imagine and every alternative you didn't think of. In other words, you're implicitly claiming an upper bound on how much time and expense it will take.