Normal_Anomaly comments on [Link] A review of proposals toward safe AI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (10)
I'm not very familiar with Goertzel's ideas. Does he recognize the importance of not letting the proto-AGI systems self-improve while their values are uncertain?
From what I've gathered, Ben thinks these experiments will reveal that Friendliness is impossible, that 'be nice to humans' is not a stable value. I'm not sure why he thinks this.