Vladimir_Nesov comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Right, these are the two things you must weigh and 'choose' between (in the sense of research, advocacy, etc.):
1) Go for FAI, with the chance that AGI comes first
2) Go for uploads, with the chance they go crazy when self modifying
You don't get provable Friendliness with uploads without understanding intelligence, but you do get a potential upgrade path to superintelligence that doesn't result in the total destruction of humanity. The safety of that path may be small, but the probability of developing FAI before AGI is likewise small, so it's not clear in my mind which option is better.
I tentatively agree: there may well be a way to FAI that doesn't involve normal humans understanding intelligence, but rather improved humans understanding intelligence, for example carefully modified uploads or genetically engineered/selected smarter humans.