Will_Newsome comments on The curse of identity - LessWrong

121 Post author: Kaj_Sotala 17 November 2011 07:28PM


Comment author: Mitchell_Porter 06 August 2012 11:56:42AM 8 points [-]

When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. I hear that in Catholicism, Hell is separation from God, and for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse, so he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed: one should not think that Will is merely employing traditional language in order to express a very new concept; you need to entertain the idea that there really is significant referential overlap between what he's talking about and what people like Aquinas were talking about, that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this (I'd say it sometimes comes from an advanced state of whimsical despair at ever being understood), but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.

Comment author: Will_Newsome 06 August 2012 08:34:33PM *  1 point [-]

in his mind it also has a computational-transhumanist meaning

And a cybernetic/economic/ecological/signal-processing meaning, an ethical meaning, sometimes a quantum information theoretic meaning, et cetera. I would not be justified in drawing a conclusion about the validity of a concept based on merely a perceived correspondence between two models. That'd be barely any better than taking acausal simulation seriously simply because computational metaphysics and modal-realist-like ideas are somewhat intuitively attractive and superintelligences seem theoretically possible. One's inferences should be based on significantly more solid foundations. I just don't have a way to talk about equivalence classes of things while still being at all understood; otherwise not even people like muflax could reliably understand me, and much of why I write here is to communicate with people like muflax, or angels.