JoshuaZ comments on The Singularity Institute's Arrogance Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm sort of pleased to see that I guessed roughly what this episode was about despite having arrived at LessWrong well after it unhappened.+ But if the Rationalwiki description is accurate, I'm now really confused about something new.
I was under the impression that LessWrong was fairly big on the Litany of Gendlin. But an AI that could do the things Roko proposed (something to which I assign vanishingly small probability, fortunately) could also retrospectively figure out who was being willfully ignorant or failing to reach rational conclusions for which they had sufficient priors.
It's disconcerting, after watching so much criticism of the rest of humanity for finding ways to rationalize around the "inevitability" of death, to see transhumanists finding ways to hide their minds from their own "inevitable" conclusions.
+Since most people who would care about this subject at all have probably read Three Worlds Collide, I think this episode should be referred to as The Confessor Vanishes, but my humor may be idiosyncratic even for this crowd.
The primary issue with the Roko matter wasn't so much that an AI might actually do such a thing, but that the relevant memes could cause some degree of stress in neurotic individuals. At the time it occurred, there were at least two people in the general SI/LW cluster who were apparently deeply disturbed by the thought. I expect that the sort who would be vulnerable would be the same sort who, if they were religious, would lose sleep over the possibility of going to hell.
The original reasons given:
...and further:
(emphasis mine)