So hello, I'm a first-time poster here at LessWrong. I stumbled upon this site after finding out about something called Roko's Basilisk and hearing that it's a thing over here. After doing a little digging, I thought it would be fun to chat with some friends about my findings. However, I then researched a bit more and found some publications with disturbing implications. So, my question is: while I understand that I shouldn't spread information about the concept, I gather that this is because of the potential torture of anyone with knowledge of the concept...
So, if I understand what is being said correctly: while it's unlikely that Roko's Basilisk will be the AI that actually gets created (I've read it's roughly a 1-in-500 chance), if it were to be created, or were to become the (let's say) dominant AI in existence, then the mere concept of Roko's Basilisk would be very dangerous. Even more so if you endorse the whole 'simulation of everybody's life' idea, since merely knowing or thinking about the basilisk would show up in that simulation and serve as evidence the basilisk could use to justify torturing you. Would you say that's the gist of it?