TheAncientGeek comments on Will AGI surprise the world? - Less Wrong

Post author: lukeprog 21 June 2014 10:27PM 12 points


Comment author: Squark 22 June 2014 01:34:25PM 3 points [-]

Cooperative play (as opposed to morality) strongly depends on the position from which you're negotiating. For example, if the FAI scenario is much less likely (a priori) than a Clippy scenario, then there's no reason for Clippy to make strong concessions.
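A toy numeric sketch of that asymmetry (the payoffs, probabilities, and the expected_share helper below are all made up for illustration): suppose whichever agent ends up in control honors a precommitted split of resources. With linear utilities, the a priori favorite only gains from terms under which it concedes almost nothing.

```python
# Toy sketch of the bargaining point above. All numbers are illustrative
# assumptions: whichever agent ends up in control spends a promised
# fraction of the resources on the other agent's values.

def expected_share(p_win, fraction_conceded, fraction_received):
    """Expected fraction of resources serving this agent's values."""
    if_winner = p_win * (1.0 - fraction_conceded)
    if_loser = (1.0 - p_win) * fraction_received
    return if_winner + if_loser

p_fai = 0.1                 # assume the FAI scenario is a priori much less likely
p_clippy = 1.0 - p_fai

# Without any deal Clippy expects 0.9 of the resources.
# An even 50/50 deal drops that to 0.5, so Clippy refuses:
print(expected_share(p_clippy, 0.5, 0.5))    # 0.5

# Clippy only gains from terms where it concedes almost nothing:
print(expected_share(p_clippy, 0.05, 0.95))  # 0.95 > 0.9
```

The closer the two prior probabilities are, the more even the terms Clippy will accept; the stronger prior position translates directly into smaller concessions.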

Comment author: TheAncientGeek 22 June 2014 03:18:08PM 1 point [-]

But then we might be able to achieve AI safety in a relatively easy way by creating networks of interacting agents (including interacting with us).

Comment author: Gunnar_Zarncke 22 June 2014 08:23:19PM 1 point [-]

I think you're pointing out the conclusion that follows from the assumption that

Cooperative play strongly depends on the position from which you're negotiating.

But if you have multiple AIs, then none of them is much stronger than the others.
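Continuing the same sketch (the sqrt utility is an assumed stand-in for diminishing returns, not anything from the thread): when the positions are symmetric, a guaranteed even split strictly beats the winner-take-all gamble, so comparably strong agents have a positive reason to cooperate.

```python
import math

# Symmetric agents (each wins with p = 0.5) and diminishing returns,
# modelled here by a sqrt utility -- an illustrative assumption.
def expected_utility(p_win, fraction_conceded, fraction_received):
    if_winner = p_win * math.sqrt(1.0 - fraction_conceded)
    if_loser = (1.0 - p_win) * math.sqrt(fraction_received)
    return if_winner + if_loser

print(expected_utility(0.5, 0.0, 0.0))  # no deal:    0.5
print(expected_utility(0.5, 0.5, 0.5))  # 50/50 deal: ~0.707, strictly better
```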

Comment author: Squark 22 June 2014 03:52:10PM 0 points [-]

Sorry, didn't follow that. Can you elaborate?

Comment author: TheAncientGeek 22 June 2014 04:14:30PM *  0 points [-]

I mean you don't have to assume a singleton AI becoming very powerful very quickly. You can assume intelligence and friendliness developing in parallel [and incrementally].

Comment author: MugaSofer 23 June 2014 01:46:22PM *  1 point [-]

Hmm.

Are you suggesting (super)intelligence would be a result of direct human programming, like Friendliness presumably would be?

Or that Friendliness would be a result of self-modification, like superintelligence is predicted to be 'round these parts?

Comment author: TheAncientGeek 23 June 2014 02:01:17PM 0 points [-]

I am talking about SIRI. I mean that human engineers are making/will make multiple efforts at simultaneously improving AI and friendliness, and that the ecosystem of AIs and AI users is/will be selecting for friendliness that works.

Comment author: skeptical_lurker 22 June 2014 05:31:53PM 0 points [-]

Is the idea that the network develops at roughly the same rate, with no single entity undergoing a hard takeoff?

Comment author: TheAncientGeek 23 June 2014 02:05:58PM 0 points [-]

Yes.
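One way to picture that (a toy simulation; the growth rates, the sharing parameter, and the capability_gap helper are all illustrative assumptions, not a model anyone in the thread proposed): if each agent keeps closing part of its gap to the current leader, by copying published ideas and the like, no single entity pulls far ahead.

```python
import random

def capability_gap(n_agents=5, steps=50, sharing=0.5, seed=0):
    """Ratio of strongest to weakest agent after `steps` rounds.

    Each round every agent makes random private progress; then each
    closes a `sharing` fraction of its gap to the current leader
    (copying published ideas, hiring, reverse-engineering, ...).
    """
    rng = random.Random(seed)
    caps = [1.0] * n_agents
    for _ in range(steps):
        caps = [c * (1.0 + rng.uniform(0.0, 0.1)) for c in caps]
        best = max(caps)
        caps = [c + sharing * (best - c) for c in caps]
    return max(caps) / min(caps)

print(capability_gap(sharing=0.5))  # stays close to 1: no runaway leader
print(capability_gap(sharing=0.0))  # isolated agents: the gap widens
```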

Comment author: Squark 22 June 2014 04:46:14PM *  0 points [-]

In what sense don't I have to assume it? I think a singleton AI happens to be a likely scenario, and this has little to do with cooperation.

Comment author: TheAncientGeek 22 June 2014 05:07:51PM *  0 points [-]

The more alternative scenarios there are, the less likely the MIRI scenario, and the less need for the MIRI solution.

Comment author: Squark 22 June 2014 05:56:21PM 0 points [-]

I don't understand what it has to do with cooperative game theory.