ChristianKl comments on A few misconceptions surrounding Roko's basilisk - Less Wrong Discussion
Is it?
Assume that:
a) There will be a future AI powerful enough to torture people, even posthumously (I think this is quite speculative, but let's assume it for the sake of the argument).
b) This AI will have a value system based on some form of utilitarian ethics.
c) This AI will use an "acausal" decision theory (one that one-boxes in Newcomb's problem; the sketch below illustrates why one-boxing can have the higher expected value).
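To make premise (c) concrete, here is a minimal sketch, not from the original thread, of the expected-value arithmetic in Newcomb's problem. The standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and the predictor accuracy parameter p are assumptions chosen for illustration:

```python
# Hypothetical illustration (not from the thread): expected payoff of
# one-boxing vs. two-boxing in Newcomb's problem, assuming the standard
# payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one)
# and a predictor that is correct with probability p.

def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff for the given strategy under predictor accuracy p."""
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled
        # the opaque box with $1,000,000; otherwise it is empty.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing and left the
    # opaque box empty; the $1,000 in the transparent box is taken either way.
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box={expected_value(True, p):,.0f}, "
          f"two-box={expected_value(False, p):,.0f}")
```

One-boxing has the higher expected value whenever p * 1,000,000 > (1 - p) * 1,000,000 + 1,000, i.e. for any p above roughly 0.5005; a decision theory that chooses accordingly is what the comment calls "acausal".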
Under these premises it seems to me that Roko's argument is fundamentally correct.
As far as I can tell, belief in these premises was not only common on LessWrong at the time, but was essentially the officially endorsed position of Eliezer Yudkowsky and SIAI. We can therefore infer that EY should have believed that Roko's argument was correct.
But EY claims that he didn't believe that Roko's argument was correct. So the question is: is EY lying?
His behavior was certainly consistent with his believing Roko's argument. If he wanted to prevent the argument from spreading, then even lying about its correctness would be consistent with that goal.
So, is he lying? If he is not lying, then why didn't he believe Roko's argument? As far as I know, he never provided a refutation.
Lying is consistent with a lot of behavior. That consistency alone is no basis for accusing people of lying.
I'm not accusing; I'm asking the question.
My point is that, given the evidence I have about his beliefs and actions at the time, and assuming I'm not misunderstanding them or Roko's argument, there seems to be a significant probability that EY lied about not believing that Roko's argument was correct.