
badtheatre comments on Defining causal isomorphism

Post author: badtheatre, 14 December 2013 06:39PM

Comment author: badtheatre, 15 December 2013 12:08:24AM, 0 points

Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to "pointwise causal isomorphism":

> Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can't internally detect any difference) then my probability of consciousness is essentially "top", i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?

We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition (a sketch of one candidate follows below), but what I really want to know is: under what conditions is computation A simulated by computation B, such that if A is emulating a brain and we all agree it contains a consciousness, we can be sure that B contains one as well?
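
For concreteness, here is a minimal sketch of one way that formal definition might go, under the simplifying assumption that each computation is modeled as a deterministic transition system over a finite set of states. The idea is "pointwise" in that every state of A maps to a state of B, and "causal" in that the map commutes with the dynamics. All names and the toy example are my own illustration, not anything from the original thread.

```python
# Minimal sketch, assuming computations are modeled as finite deterministic
# transition systems: a set of states plus a step function (here a dict).
# phi is a candidate correspondence from A's states to B's states.

def is_pointwise_causal_isomorphism(states_a, step_a, states_b, step_b, phi):
    """Return True if phi is a bijection from states_a onto states_b
    satisfying phi[step_a[s]] == step_b[phi[s]] for every state s of A."""
    # phi must be a bijection: injective on A's states and onto B's states.
    image = {phi[s] for s in states_a}
    if len(image) != len(states_a) or image != set(states_b):
        return False
    # phi must commute with the dynamics: stepping in A and then translating
    # must give the same result as translating and then stepping in B.
    return all(phi[step_a[s]] == step_b[phi[s]] for s in states_a)


# Toy usage: the same two-state toggle implemented with different labels.
states_a = {0, 1}
step_a = {0: 1, 1: 0}
states_b = {"off", "on"}
step_b = {"off": "on", "on": "off"}
phi = {0: "off", 1: "on"}

print(is_pointwise_causal_isomorphism(states_a, step_a, states_b, step_b, phi))  # True
```

Of course, this is only the easy part: a real answer to my question would also have to say which *non*-bijective correspondences (coarse-grainings, relabelings, computations with inputs and outputs) still count as B simulating A.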