jacob_cannell comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comment author: jacob_cannell 21 May 2012 02:30:06AM *  0 points [-]

Sadly, I think the general trend you note is correct, but the first developers to succeed may do so in relative secrecy.

As time goes on, it becomes increasingly possible that some small group or lone researcher will be able to put the final pieces together and develop an AGI. Assuming a typical, largely selfish financial motivation, a small self-sufficient developer would have very little to gain from pre-publishing or publicizing their plan.

Eventually, of course, they may be tempted to publicize, but there is more incentive to do that later, if at all. Unless, that is, you work on it for a while and it doesn't go much of anywhere; then of course you publish.

Comment author: JoshuaZ 21 May 2012 03:20:51AM 2 points [-]

As time goes on it becomes increasingly possible that some small group or lone researcher is able to put the final pieces together and develop an AGI.

Why do you think this is the case? Is this just because the overall knowledge level concerning AI goes up over time? If so, what makes you think that that rate of increase is anything large enough to be significant?

Comment author: jacob_cannell 12 June 2012 10:34:08PM 0 points [-]

Yes. This is just the way of invention in general: steady incremental evolutionary progress.

A big, well-funded team can throw more computational resources at its particular solution to the problem, but the returns are sublinear (for any one particular solution), even without Moore's law.
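The sublinear-returns claim can be illustrated with a standard model the comment does not itself cite: Amdahl's law, under the assumption (hypothetical here) that some fraction of the work on a given solution is inherently serial. The `parallel_fraction` value below is an arbitrary illustration, not a figure from the discussion.

```python
# Illustrative sketch (not from the comment): Amdahl's-law-style
# diminishing returns from adding processors to a fixed workload.
def speedup(n, parallel_fraction=0.9):
    """Speedup from n processors when only parallel_fraction of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n)

for n in (1, 2, 10, 100, 1000):
    print(n, round(speedup(n), 2))
# With parallel_fraction = 0.9, speedup approaches but never exceeds
# 1 / (1 - 0.9) = 10x, so returns on added compute are sublinear.
```

Under this toy model, a thousandfold increase in resources yields less than a 10x speedup, which is the flavor of diminishing return the comment gestures at.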