turchin comments on Evaluating the feasibility of SI's plan - Less Wrong

25 Post author: JoshuaFox 10 January 2013 08:17AM


Comment author: turchin 11 January 2013 10:51:02AM 1 point [-]

Even an evil creator of AI needs some kind of control over his child, which could be called friendliness to one person. So any group that is seriously creating AGI and intends to use it for any purpose should be interested in FAI theory. So it could be enough to explain to anyone who creates AGI that he needs some kind of F-theory, and that it should be mathematically proven.

Comment author: shminux 11 January 2013 06:50:44PM 2 points [-]

Most people whose paycheck comes from designing a bomb have no trouble rationalizing it. Similarly, if your paycheck depends on AGI progress rather than FAI progress, you will likely be unwilling to slow down or halt AGI development, and if you are willing, you will get fired and replaced.

Comment author: turchin 11 January 2013 08:06:27PM 0 points [-]

I wanted to say that anyone who creates AGI needs to control it somehow, and therefore needs some kind of analog of FAI, at least in order not to be killed himself. And this idea could be promoted to any AGI research group.