skeptical_lurker comments on Will AGI surprise the world? - Less Wrong

Post author: lukeprog 21 June 2014 10:27PM


Comment author: XiXiDu 22 June 2014 02:40:40PM 2 points

For example, if the FAI scenario is much less likely (a priori) than a Clippy scenario, then there's no reason for Clippy to make strong concessions.

But if a "paperclip" maximizer, as opposed to a "table", "car", or "alien sex toy" maximizer, is just one of many possible unfriendly maximizers, then maximizing "human values" is just one of many unlikely outcomes. In other words, you can't just say that unfriendly AIs are more likely than friendly AIs when it comes to cooperation, since the opposition between a paperclip maximizer and an "alien sex toy" maximizer is the same as the opposition between either of them and a friendly AI (alien or human): all of them want to maximize mutually opposing values. And even if there turns out to be a subset of values shared by some AIs, other groups could cooperate to outweigh their leverage.

Comment author: skeptical_lurker 22 June 2014 05:42:38PM 2 points

But since there is an exponentially huge set of possible random maximisers, the prior probability of each individual one is infinitesimal. OTOH, human values have a high probability density in mindspace because people are actually working towards them.
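
The arithmetic behind this point can be made concrete with a minimal sketch (my own illustration, not part of the thread): suppose a value system is pinned down by n independent binary choices, so that a uniform prior over mindspace gives any one fixed maximiser probability 2^-n, while deliberate design effort can concentrate non-negligible mass on a single region regardless of n. The choice of n and the 1% figure below are arbitrary assumptions, picked only for illustration.

```python
# Toy model (illustrative assumptions, not from the thread):
# a value system is an n-bit specification, and a "random maximiser"
# is a uniform draw over all 2**n possible specifications.

n = 100                    # assumed bits needed to pin down one value system
uniform_prior = 2.0 ** -n  # prior of any single fixed maximiser (e.g. Clippy)

# Assumed effect of directed effort: designers concentrate 1% of the
# probability mass on the human-values region of mindspace.
directed_mass = 0.01

print(f"P(one specific random maximiser)  = {uniform_prior:.3e}")
print(f"P(human-values region, directed)  = {directed_mass:.3e}")
print(f"density advantage of directed aim = {directed_mass / uniform_prior:.3e}")
```

The exact numbers are beside the point; the shape of the argument is that the first quantity shrinks exponentially in n, while the second does not depend on n at all.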

Comment author: Viliam_Bur 22 June 2014 09:30:37PM 1 point

human values have a high probability density in mindspace because people are actually working towards them

That depends on how high a probability density humans (and alien life forms so similar to humans that they share our values) have in mindspace. Maybe it's very low. Maybe a society ruled by intelligent ants according to their values would make us very unhappy... and on a cosmic scale, ants are our cousins; alien life should be much more different.