
TheAncientGeek comments on Will AGI surprise the world? - Less Wrong Discussion

12 Post author: lukeprog 21 June 2014 10:27PM


Comments (129)


Comment author: TheAncientGeek 22 June 2014 04:14:30PM *  0 points [-]

I mean you don't have to assume a singleton AI becoming very powerful very quickly. You can assume intelligence and friendliness developing in parallel and incrementally.

Comment author: MugaSofer 23 June 2014 01:46:22PM *  1 point [-]

Hmm.

Are you suggesting (super)intelligence would be a result of direct human programming, like Friendliness presumably would be?

Or that Friendliness would be a result of self-modification, like SIAI is predicted to be 'round these parts?

Comment author: TheAncientGeek 23 June 2014 02:01:17PM 0 points [-]

I am talking about SIRI. I mean that human engineers are/will be making multiple simultaneous efforts at improving AI and friendliness, and that the ecosystem of AIs and AI users is/will be selecting for friendliness that works.

Comment author: skeptical_lurker 22 June 2014 05:31:53PM 0 points [-]

Is the idea that the network develops at roughly the same rate, with no single entity undergoing a hard takeoff?

Comment author: TheAncientGeek 23 June 2014 02:05:58PM 0 points [-]

Yes.

Comment author: Squark 22 June 2014 04:46:14PM *  0 points [-]

In what sense do I not have to assume it? I think a singleton AI happens to be a likely scenario, and this has little to do with cooperation.

Comment author: TheAncientGeek 22 June 2014 05:07:51PM *  0 points [-]

The more alternative scenarios there are, the less likely the MIRI scenario, and the less need for the MIRI solution.

Comment author: Squark 22 June 2014 05:56:21PM 0 points [-]

I don't understand what it has to do with cooperative game theory.