
Comment author: shminux 08 September 2014 07:35:17PM 9 points

"WWRMD?" (RM for "rational me".)

Comment author: summerstay 18 September 2014 05:09:22PM 1 point

This is exactly the point of asking "What Would Jesus Do?" Christians are asking themselves what a perfectly moral, all-knowing person would do in this situation, and using the machinery their brains have for simulating a person to find the answer, instead of using the general-purpose reasoner that is so easily overworked. Of course, simulating a person (especially a god) accurately can be tricky. Religious people use similar thoughts to get themselves to do things that they want in the abstract but that are hard in the moment: What would I do if I were the kind of person I want to become? What would a perfectly moral, all-knowing person think about what I'm about to do?

Comment author: DanielVarga 25 May 2014 02:07:08PM 3 points

Wow, I'd love to see some piece of art depicting that pink worm vine.

Comment author: summerstay 01 June 2014 08:37:14PM 1 point

I assumed that was the intention of the writers of Donnie Darko. The actual shapes coming out of their chests weren't right, but you could see that this is what they were trying to do.

Comment author: asr 17 December 2013 06:57:08AM 1 point

Yes. I picked the ethical formulation as a way to make clear that this isn't just a terminological problem.

I like the framing in terms of expectation.

And I agree that this line of thought makes me skeptical about the computationalist theory of mind. The conventional formulations of computation seem to abstract away enough stuff about identity that you just can't hang a theory of mind and future expectation on what's left.

Comment author: summerstay 17 December 2013 03:09:01PM 0 points

I think that arguments like this are a good reason to doubt computationalism. That means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations even in our brains are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.

Comment author: asr 17 December 2013 03:11:28AM 0 points

The reference is a good one -- thanks! But I don't quite understand the rest of your comments. Can you rephrase more clearly?

Comment author: summerstay 17 December 2013 02:58:18PM 0 points

Sorry, I was just trying to paraphrase the paper in one sentence. The point of the paper is that there is something wrong with computationalism. It attempts to show that two systems with the same sequence of computational states can nonetheless have different conscious experiences. It does this by taking a robot brain that computes the same way as a conscious human brain and transforming it, always by computationally equivalent steps, into a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every moment experiencing everything that can be experienced, or that something is wrong with computationalism. If we take the second option, it means that two systems with exactly the same behavior and computational structure can have different perceptual consciousness.

Comment author: ahbwramc 13 December 2013 10:50:09PM 4 points

Could the relevant moral change happen going from B to C, perhaps? i.e. maybe a mind needs to actually be physically/causally computed in order to experience things. Then the torture would have occurred whenever John's mind was first simulated, but not for subsequent "replays," where you're just reloading data.

Comment author: summerstay 16 December 2013 04:12:29PM 1 point

Check out "Counterfactuals Can't Count" for a response to this. Basically, if replaying a recording differs in what it experiences from running the computation, then two computations that calculate the same thing in the same way, but one of which contains bits of code that never run, must experience things differently.

Comment author: joaolkf 05 December 2013 03:04:57AM 1 point

I never came across this draft. Is it new? (Though he has been working on it for quite some time...) I will take a look at it. But beforehand, my general view on simulations/emulations is that even purely statistical, non-agent-based simulations of an agent's behaviour, if precise enough, would contain what matters for suffering/pleasure. Memories, feelings, thoughts, and so on would all be scattered across many, many variables, but the correlations that would have to hold between all of them might still guarantee that there would be a (perhaps sentient) agent there.

Comment author: summerstay 05 December 2013 02:27:33PM 1 point

I found the draft via this post from the end of June 2013.

Ethics of Brain Emulation

summerstay 04 December 2013 07:19PM (1 point)

I found this draft paper by Anders Sandberg to be a well-thought-out essay on the morality of experiments on brain emulations. Is there anything you disagree with here, or that you think he should handle differently?


Comment author: summerstay 25 November 2013 06:11:37PM 9 points

One rational ability that people are really good at, and that is hard to automate (we haven't made much progress), is applying common-sense knowledge to language understanding. Here's a collection of sentences where the referent of a pronoun is ambiguous, but we don't even notice, because we are able to match it up as quickly as we read: http://www.hlt.utdallas.edu/~vince/data/emnlp12/train-emnlp12.txt
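To give a sense of the phenomenon, here's a hypothetical pair in the style of those sentences (Levesque's well-known trophy example, not taken from the linked file). Swapping a single word flips which noun the pronoun refers to, and humans resolve it instantly using world knowledge:

```python
# Winograd-style sentence pairs: changing one word ("big" -> "small")
# changes the referent of "it", even though the syntax is identical.
pairs = [
    ("The trophy doesn't fit in the suitcase because it is too big.", "trophy"),
    ("The trophy doesn't fit in the suitcase because it is too small.", "suitcase"),
]

for sentence, referent in pairs:
    print(f"'it' refers to the {referent}: {sentence}")
```

No purely syntactic rule distinguishes the two sentences; resolving "it" requires knowing how objects fit inside containers, which is exactly the common-sense knowledge that is hard to automate.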

Comment author: summerstay 27 October 2013 01:12:11PM 1 point

You can read a paper on EURISKO here. My impression is that the program quickly exhausted the insights Lenat put in as heuristics, and began journeying down eccentric paths that were of no interest to a human mathematician.

Comment author: summerstay 25 October 2013 04:13:35PM 15 points

Here's my advice: always check Snopes before forwarding anything.
