roystgnr comments on Muehlhauser-Goertzel Dialogue, Part 2 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
My initial, subconsciously anticipated outcome of the Friendly AI problem was something like my initial anticipation regarding the Y2K problem: sure, I could see a serious potential for disaster, but the possibility was so obvious that any group competent enough to be doing potentially-affected critical work would easily be wise enough to identify and prevent any such errors well before they could be triggered.
These interviews have disabused me of that idea. We have serious computer scientists, even AI researchers — people who have probably themselves laughed at Babbage's response to "If you put into the machine wrong figures, will the right answers come out?" — and yet they seem to believe the answer to "If you put into the machine wrong goals, will the right ethics and actions come out?" is "obviously yes!"
Have you read any of Ben's stuff? For instance, see here. He doesn't really say "obviously yes".