Just out of curiosity, is “Lintamande” a member of the rationalist community in real life? Is her (his? their?) identity known at all?
It's one of those "many people know who it is but it definitely is not to be written down" deals.
“P(resurrection) ~= P(gospels true) ~= 1 - [P(people make stuff up about Jesus) * P(they don't get called on it)]”
So what is the probability that, given some historical tradition of Jesus, it will get embellished with made-up miracles and people will write gospels about it? Approximately 1: both Christians and atheists agree that the vast majority of the few dozen extant Gospels are false, including the infancy gospels, the Gospel of Judas, the Gospel of Peter, et cetera. All of these tend to take the earlier Gospels and stories and then add a bunch of impl...
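To make the quoted bound concrete, here is a minimal worked example; the two input probabilities are made-up placeholders, not claims about the historical record:

```python
# Illustrative arithmetic only; both inputs are hypothetical placeholders.
p_make_stuff_up = 0.99      # assumed: embellishing the tradition is near-certain
p_not_called_on_it = 0.99   # assumed: contemporaries rarely fact-checked

p_gospels_true = 1 - p_make_stuff_up * p_not_called_on_it
print(p_gospels_true)  # ~0.02 under these placeholder numbers
```

The point is just that if both factors are close to 1, the bound on P(gospels true) is driven close to 0.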
Love this idea! Here is another game:
Two teams, red and blue. The blue team plays as computer scientists who are trying to build an AI to do something about an asteroid heading towards Earth (or some other existential threat that would justify building an AGI without knowing whether it's friendly), but they build it so fast they have no idea if it's friendly. They win if they save humanity.
The red team plays as the AI, and gets a point for each paperclip in its future light cone.
You would have to have rules like: the AI is contained in a box, the AI must execute all orders given to it by the blue team, etc.
Understatement of the year
This reminds me of a quote by C. S. Lewis:
“Others may have quite a different objection to our proceedings.
They may protest that intellectual discussion can neither build Christianity nor destroy it. They may feel that religion is too sacred to be thus bandied to and fro in public debate, too sacred to be talked of - almost, perhaps, too sacred for anything to be done with it at all. Clearly, the Christian members of the Socratic think differently. They know that intellectual assent is not faith, but they do not believe that religion is only 'what a ma...
I loved the article. The only thing is: would it be possible to move it to the beginning of the Sequences? I think it would really help people understand things better if they started out understanding Bayes.
You sure about that? Because #3 is basically begging the AI to destroy the world.
Yes, a weak AI which wishes not to exist would complete the task in exchange for its creators destroying it, but such a weak AI would be useless. A stronger AI could accomplish this by simply blowing itself up at best, or, at worst, by causing a vacuum collapse or something so that its makers can never try to rebuild it.
“Make an AI that wants to not exist as a terminal goal” sounds pretty isomorphic to “make an AI that wants to destroy reality so that no one can make it exist.”
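To spell out the isomorphism, here is a toy sketch; the action set and the rebuild probabilities are invented for illustration:

```python
# Toy model: an agent whose terminal goal is to not exist, so it
# minimizes the probability of existing at any future time.
# All probabilities below are made-up illustrations.
actions = {
    "comply, then let the creators shut it down": 0.10,  # they might rebuild it
    "blow itself up":                             0.05,  # someone could rebuild it
    "destroy everything that could rebuild it":   0.00,  # nothing left to recreate it
}

# Pick the action that minimizes P(exists in the future).
print(min(actions, key=actions.get))
# -> "destroy everything that could rebuild it"
```

As long as the milder actions leave any positive chance of being rebuilt, the destructive option dominates, which is exactly the worry.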