Well, but EY is still using arguments of the form "donate to us or the future of the Galactic Civilization is at risk", so I don't think the Basilisk would make much difference. If anything, EY could simply declare the Basilisk invalid. His behavior is not consistent with his believing the argument.
Well, still better than "donate to us or you'll go to hell".
I'm pleased to announce friendly-artificial-intelligence, a Google group intended for research-level discussion of problems in FAI and AGI, in particular for discussions that are highly technical and/or math-intensive.
Some examples of possible discussion topics: naturalized induction, decision theory, tiling agents / the Löbian obstacle, logical uncertainty...
I invite everyone who wants to take part in FAI research to participate in the group. This obviously includes people affiliated with MIRI, FHI, and CSER, people who attend MIRI workshops, and participants in the Southern California FAI workshop.
Please come in and share your discoveries, ideas, thoughts, questions, et cetera. See you there!