Will_Newsome comments on The curse of identity - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. In Catholicism, I hear, Hell is separation from God; for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse. So he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed: one should not think that Will is merely employing a traditional language in order to express a very new concept. You need to entertain the idea that there really is significant referential overlap between what he's talking about and what people like Aquinas were talking about - that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this - I'd say it sometimes comes from an advanced state of whimsical despair at ever being understood - but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.
And a cybernetic/economic/ecological/signal-processing meaning, an ethical meaning, sometimes a quantum information theoretic meaning, et cetera. I would not be justified in drawing a conclusion about the validity of a concept based merely on a perceived correspondence between two models. That'd be barely any better than taking acausal simulation seriously simply because computational metaphysics and modal-realist-like ideas are somewhat intuitively attractive and superintelligences seem theoretically possible. One's inferences should be based on significantly more solid foundations. I just don't have a way to talk about equivalence classes of things while still being at all understood - then not even people like muflax could reliably understand me, and much of why I write here is to communicate with people like muflax, or angels.