Hello, I'm a first-time poster here at LessWrong. I stumbled upon this site after finding out about a thing called Roko's Basilisk, and I heard it's a thing over here. After doing a little digging, I thought it would be fun to chat with some friends about my findings. However, I then researched a bit more and found some publications with disturbing implications. So, my question is this: I understand that I shouldn't spread information about the concept, and I gather that's because of the potential torture anyone with knowledge of the concept might undergo. But I found some places which insisted that simply thinking about the concept is dangerous. I'm new to the concept, but could someone please explain to me why (apart from the potential torture aspect) it is so bad to share or discuss it? Also, I apologise very much in advance if I have broken some unspoken rule of LessWrong, but I feel it is necessary for me to find out the 'truth' behind the matter, so I know why it is so imperative (if it indeed is) to stop those I have already informed of the concept from telling more people. Please help me out here, guys, I'm way out of my depth.
So, if I understand what is being said correctly: it's unlikely that Roko's Basilisk will be the AI that actually gets created (I've read it's roughly a 1/500 chance); however, if it were to be, or were to become, the (let's say dominant) AI in existence, then the simple concept of Roko's Basilisk would be very dangerous. Even more so if you endorse the whole 'simulation of everybody's life' idea, since just knowing or thinking about the concept of the basilisk would show up in said simulation and be evidence the basilisk would use to justify its torture of you. Would you say that's the gist of it?