Stuart_Armstrong comments on AI indifference through utility manipulation - Less Wrong

4 Post author: Stuart_Armstrong 02 September 2010 05:06PM


Comment author: Stuart_Armstrong 06 September 2010 01:00:55PM 1 point

The mind space of humans is vast. It is determined not by genetics but by memetics, and AIs would necessarily inherit our memetics and thus necessarily start as samples from our mind space.

The Kolmogorov complexity of humans is quite high. See this list of human universals; every element on that list cuts the size of humanity's region of general mind space by a factor of at least two, probably much more (even the universals that are only approximately true do this).
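The arithmetic behind that claim can be made concrete. A toy model (my framing, not Stuart's): treat each universal as an independent yes/no property that at least half of all possible minds lack; then k universals pick out at most a 1/2^k fraction of mind space. The function name and numbers below are illustrative only.

```python
# Toy model of the "factor of at least two per universal" argument.
# Assumption: each universal is an independent binary property that
# at least half of all possible minds lack.

def mindspace_fraction(num_universals: int, cut_per_universal: float = 0.5) -> float:
    """Upper bound on the fraction of mind space sharing all the universals."""
    return cut_per_universal ** num_universals

# Even a modest subset of the list gives an astronomically small fraction:
# 100 independent halvings leave at most 2**-100 of the original space.
print(mindspace_fraction(100))  # ~7.9e-31
```

The point of the toy calculation is only that the reduction compounds multiplicatively; real universals are correlated, so the true factor per universal is smaller, but the product is still tiny for any sizable list.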

Comment author: jacob_cannell 06 September 2010 04:56:43PM 1 point

This list doesn't really help your point:

  1. Almost all of the linguistic 'universals' are universal to languages, not humans, and would necessarily apply to AIs that speak our languages.
  2. Most of the social 'universals' are universal to societies, not humans, and apply just as easily to birds, bees, and dolphins: coalitions, leaders, conflicts?

AIs will inherit some understanding of all the idiosyncrasies of our complex culture just by learning our language and being immersed in it.

Kolmogorov complexity is not immediately relevant to this point. No matter how large the evolutionary landscape is, there are a small number of stable attractors in that landscape that become 'universals' - hence species, parallel evolution, and so on.

We are not going to create AIs by randomly sampling mind space. The only way they could be truly alien is if we evolved a new simulated world from scratch, with its own evolutionary history and de novo culture and language. But of course that is unrealistic and useless on so many levels.

They will necessarily be samples from our mindspace - otherwise they wouldn't be so useful.

Comment author: timtyler 06 September 2010 05:05:12PM 1 point

They will necessarily be samples from our mindspace - otherwise they wouldn't be so useful.

Computers so far have been very different from us. That is partly because they have been built to compensate for our weaknesses - to be strong where we are weak. They compensate for our poor memories, our terrible arithmetic module, our poor long-distance communication skills - and our poor ability at serial tasks. That is how they have managed to find a foothold in society - before mastering nanotechnology.

IMO, we will probably be seeing a considerable amount more of that sort of thing.

Comment author: jacob_cannell 06 September 2010 05:32:05PM 0 points

Computers so far have been very different from us. [snip]

Agree with your point, but so far computers have been extensions of our minds, not minds in their own right. And perhaps that trend will continue long enough to delay AGI for a while.

But for AGI, for them to be minds, they will need to think and understand human language - and this is why I say they "will necessarily be samples from our mindspace".