XiXiDu comments on [Link] A review of proposals toward safe AI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Here is an interesting interview between Hugo de Garis and Ben Goertzel:
[...]
I'm not very familiar with Goertzel's ideas. Does he recognize the importance of not letting the proto-AGI systems self-improve while their values are uncertain?
From what I've gathered, Ben thinks these experiments will reveal that Friendliness is impossible, i.e. that 'be nice to humans' is not a stable value. I'm not sure why he thinks this.