jacob_cannell comments on AI indifference through utility manipulation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This list doesn't really help your point:
AIs will inherit some understanding of all the idiosyncrasies of our complex culture just by learning our language and being immersed in it.
Kolmogorov complexity is not immediately relevant to this point. No matter how large the evolutionary landscape is, there are a small number of stable attractors in that landscape that become 'universals' - hence species, parallel evolution, and so on.
We are not going to create AIs by randomly sampling mindspace. The only way they could be truly alien is if we evolved a new simulated world from scratch with its own evolutionary history and de novo culture and language. But of course that is unrealistic and useless on so many levels.
They will necessarily be samples from our mindspace - otherwise they wouldn't be so useful.
Computers so far have been very different from us. That is partly because they have been built to compensate for our weaknesses - to be strong where we are weak. They compensate for our poor memories, our terrible arithmetic module, our poor long-distance communication skills - and our poor ability at serial tasks. That is how they have managed to find a foothold in society - before mastering nanotechnology.
IMO, we will probably be seeing a considerable amount more of that sort of thing.
I agree with your point, but so far computers have been extensions of our minds rather than minds in their own right. And perhaps that trend will continue long enough to delay AGI for a while.
But for AGIs to be minds, they will need to think in and understand human language - and this is why I say they "will necessarily be samples from our mindspace".