
Comment author: asr 17 December 2013 06:57:08AM 1 point

Yes. I picked the ethical formulation as a way to make clear that this isn't just a terminological problem.

I like the framing in terms of expectation.

And I agree that this line of thought makes me skeptical about the computationalist theory of mind. The conventional formulations of computation seem to abstract away enough stuff about identity that you just can't hang a theory of mind and future expectation on what's left.

Comment author: summerstay 17 December 2013 03:09:01PM 0 points

I think that arguments like this are a good reason to doubt computationalism. That means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations even in our brains are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.

Comment author: asr 17 December 2013 03:11:28AM 0 points

The reference is a good one -- thanks! But I don't quite understand the rest of your comment. Can you rephrase it more clearly?

Comment author: summerstay 17 December 2013 02:58:18PM 0 points

Sorry, I was just trying to paraphrase the paper in one sentence. The point of the paper is that there is something wrong with computationalism. It attempts to show that two systems with the same sequence of computational states can have different conscious experiences. It does this by taking a robot brain that calculates the same way as a conscious human brain and transforming it, always using computationally equivalent steps, into a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every moment experiencing everything that can be experienced, or that something is wrong with computationalism. If we take the second option, it means that two systems with the exact same behavior and computational structure can have different perceptual consciousness.

Comment author: ahbwramc 13 December 2013 10:50:09PM 4 points

Could the relevant moral change happen going from B to C, perhaps? i.e. maybe a mind needs to actually be physically/causally computed in order to experience things. Then the torture would have occurred whenever John's mind was first simulated, but not for subsequent "replays," where you're just reloading data.
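To make the distinction concrete, here's a minimal sketch (the step function and integer states are just placeholders I made up, not anyone's proposed implementation of an emulation): the first loop causally computes each state from its predecessor, while the second merely reads back the stored log.

```python
# Minimal sketch of compute-vs-replay (the step function and integer
# states are placeholders standing in for an actual mind emulation).

def step(state):
    """Causally compute the next state from the current one."""
    return (31 * state + 7) % 1000  # arbitrary placeholder dynamics

# First run: each state is physically/causally computed from the last.
log = [42]
for _ in range(10):
    log.append(step(log[-1]))

# Replay: the same states appear in the same order, but nothing is
# computed -- we are just reloading data.
for state in log:
    print(state)
```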

Comment author: summerstay 16 December 2013 04:12:29PM 1 point

Check out "Counterfactuals Can't Count" for a response to this. Basically, if a replayed recording differs in what it experiences from a freshly run computation, then two computations that calculate the same thing in the same way, except that one contains bits of code that never run, must also experience things differently.
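Here's a toy version of that setup (my own sketch, not the paper's construction): on every input they actually receive, the two functions below execute exactly the same sequence of steps; they differ only in a branch that is never taken.

```python
# Toy illustration of the counterfactual-code point (my own sketch, not
# taken from the paper). On every input actually supplied, f and g execute
# the identical sequence of steps; g merely contains a dead branch.

def f(x):
    return x + 1

def g(x):
    if x > 10**9:      # never true for the inputs we actually supply
        return -x      # counterfactual code that never runs
    return x + 1

# The executed computations are identical on all inputs that occur, so if
# counterfactual structure matters to experience, these two would
# "experience" differently despite identical runs.
for x in range(5):
    assert f(x) == g(x)
print("same executed steps, different counterfactual structure")
```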

Comment author: joaolkf 05 December 2013 03:04:57AM 1 point

I had never come across this draft. Is it new? (Though he has been working on it for quite some time...) I will take a look at it. But beforehand, my general view on simulations/emulations is that even purely statistical, non-agent simulations of an agent's behaviour, if precise enough, would contain what matters for suffering/pleasure. Memories, feelings, thoughts and so on would all be scattered across many, many variables, but the correlations that would have to hold between all of these might still guarantee that there is a (perhaps sentient) agent there.

Comment author: summerstay 05 December 2013 02:27:33PM 1 point

I found the draft via this post from the end of June 2013.

Ethics of Brain Emulation

summerstay 04 December 2013 07:19PM 1 point

This draft paper by Anders Sandberg struck me as a well-thought-out essay on the morality of experiments on brain emulations. Is there anything in it you disagree with, or that you think he should handle differently?

http://www.aleph.se/papers/Ethics%20of%20brain%20emulations%20draft.pdf

Comment author: summerstay 25 November 2013 06:11:37PM 9 points

One rational ability that people are really good at, but that is hard to automate (i.e. we haven't made much progress on it), is applying common-sense knowledge to language understanding. Here's a collection of sentences in which a pronoun's referent is ambiguous, but we don't even notice, because we match it up as quickly as we read: http://www.hlt.utdallas.edu/~vince/data/emnlp12/train-emnlp12.txt
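For a flavour of the genre, here is Levesque's classic trophy/suitcase pair (a canonical Winograd-schema example, not necessarily one of the sentences in the linked file): changing a single word flips which noun the pronoun refers to, yet readers resolve it without noticing.

```python
# Levesque's classic Winograd-schema pair (illustrative; the linked file
# contains many pairs in this spirit). One word changes, and the referent
# of "it" flips from the trophy to the suitcase.
pairs = [
    ("The trophy doesn't fit in the suitcase because it is too big.", "the trophy"),
    ("The trophy doesn't fit in the suitcase because it is too small.", "the suitcase"),
]
for sentence, referent in pairs:
    print(f"{sentence}\n  'it' = {referent}\n")
```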

Comment author: summerstay 27 October 2013 01:12:11PM 1 point

You can read a paper on EURISKO here. My impression is that the program quickly exhausted the insights Lenat put in as heuristics, and began journeying down eccentric paths that were of no interest to a human mathematician.

Comment author: summerstay 25 October 2013 04:13:35PM 15 points

Here's my advice: always check Snopes before forwarding anything.

Comment author: sixes_and_sevens 10 September 2013 02:48:04PM 10 points

"Unlike these other highly-contrived hypothetical scenarios we invent to test extreme corner-cases of our reasoning, this highly-contrived hypothetical scenario is a parody. If you ever find yourself in the others, you have to take it seriously, but if you find yourself in this one, you are under no such obligation."

Comment author: summerstay 10 September 2013 03:03:20PM 1 point

Yes, that's what I'm saying. The other ones are meant to prove a point. This one is just to make you laugh, just like the one it is named after. http://www.mindspring.com/~mfpatton/Tissues.htm

Comment author: summerstay 10 September 2013 02:36:20PM 1 point

I think most of the commenters aren't getting that this is a parody. Edit: It turns out I was wrong.
