
Comment author: Salemicus 16 October 2014 06:45:57PM 10 points [-]

I think this was a great idea for a post. If LessWrong rationality is worthwhile, then this post ought to get lots of replies about concrete facts - not moral preferences, theology, or other unprovables.

I used to believe that embryos pass through periods of development representing earlier evolutionary stages - that there was a period when a human baby was basically a fish, then later an amphibian, and so on. I believed this because my father told me so; he was a doctor (though not an obstetrician), and the information he had given me about other subjects was highly reliable. Most knowledge is second-hand - it was highly rational for me to believe him. I now know (also second-hand!) that Haeckel's recapitulation theory was debunked a long time ago - although it might well still have been in a textbook when my father was at medical school.

To me, the lesson is "trust, but verify."

Comment author: summerstay 19 October 2014 11:19:31AM 1 point [-]

I only recently realized that evolution works, for the most part, by changing the processes of embryonic development. There are some exceptions - things like neoteny and metamorphosis - but most changes are genetic differences leading to differences in, say, how long a process of growth is allowed to occur in the embryo.
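
To make the timing idea concrete, here is a toy caricature (my own illustration with made-up numbers, not a real developmental model): the same growth program, allowed to run for a longer window, yields a different adult trait.

```python
# Toy caricature of heterochrony: a gene variant that only extends how
# long a growth phase runs changes the adult trait, with no change to
# the growth program itself.

def grow(trait_size, rate, duration):
    """Run one growth phase: multiply trait_size by rate each step."""
    for _ in range(duration):
        trait_size *= rate
    return trait_size

# Identical program, identical rate; only the duration differs:
ancestral = grow(1.0, rate=1.1, duration=10)  # shorter growth window
derived = grow(1.0, rate=1.1, duration=14)    # longer growth window

print(f"ancestral trait size: {ancestral:.2f}")  # ~2.59
print(f"derived trait size:   {derived:.2f}")    # ~3.80
```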

Comment author: Pablo_Stafforini 24 September 2014 04:35:22AM *  4 points [-]

David Chalmers' The Conscious Mind is excellent, and no, you don't have to agree with its conclusions to agree with that characterization. If you lack the time to read an entire book, try his paper "Consciousness and Its Place in Nature" instead.

Few philosophers are worth reading; Chalmers is definitely one of them.

Comment author: summerstay 24 September 2014 04:22:48PM 2 points [-]

There's a reason everyone started calling it "the hard problem." Chalmers explained the problem so clearly that we now basically just point and say "that thing Chalmers was talking about."

Comment author: shminux 08 September 2014 07:35:17PM *  9 points [-]

"WWRMD?" (RM for "rational me".)

Comment author: summerstay 18 September 2014 05:09:22PM 3 points [-]

This is exactly the point of asking "What Would Jesus Do?" Christians are asking themselves what a perfectly moral, all-knowing person would do in this situation, and using the machinery their brains have for simulating a person to find the answer, instead of the general-purpose reasoner that is so easily overworked. Of course, simulating a person (especially a god) accurately can be kind of tricky. Religious people use similar questions to get themselves to do things that they want in the abstract but find hard in the moment: What would I do if I were the kind of person I want to become? What would a perfectly moral, all-knowing person think about what I'm about to do?

Comment author: DanielVarga 25 May 2014 02:07:08PM 3 points [-]

Wow, I'd love to see some piece of art depicting that pink worm vine.

Comment author: summerstay 01 June 2014 08:37:14PM 1 point [-]

I assumed that was the intention of the writers of Donnie Darko. The actual shapes coming out of the characters' chests were not right, but you could see that this is what they were trying to do.

Comment author: asr 17 December 2013 06:57:08AM 1 point [-]

Yes. I picked the ethical formulation as a way to make clear that this isn't just a terminological problem.

I like the framing in terms of expectation.

And I agree that this line of thought makes me skeptical about the computationalist theory of mind. The conventional formulations of computation seem to abstract away enough stuff about identity that you just can't hang a theory of mind and future expectation on what's left.

Comment author: summerstay 17 December 2013 03:09:01PM 0 points [-]

I think that arguments like this are a good reason to doubt computationalism. That means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations even in our brains are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.

Comment author: asr 17 December 2013 03:11:28AM 0 points [-]

The reference is a good one -- thanks! But I don't quite understand the rest of your comments. Can you rephrase more clearly?

Comment author: summerstay 17 December 2013 02:58:18PM 0 points [-]

Sorry, I was just trying to paraphrase the paper in one sentence. The point of the paper is that there is something wrong with computationalism. It argues that two systems with the same sequence of computational states can nonetheless have different conscious experiences. It does this by taking a robot brain that calculates the same way as a conscious human brain and transforming it, always using computationally equivalent steps, into a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every moment experiencing everything that can be experienced, or that something is wrong with computationalism. If we take the second option, it means that two systems with the exact same behavior and computational structure can have different perceptual consciousness.
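
To give the flavor of the move the argument turns on, here is a toy sketch (my own caricature, with made-up state names, not the paper's actual construction): once a single run of a computation is fixed, its sequence of states can be paired one-to-one with the ticks of a bare counter, i.e. a digital clock.

```python
# Hypothetical state trace of a "brain" computation on one fixed input:
brain_trace = ["perceive", "recognize", "deliberate", "decide", "act"]

# A clock just counts: 0, 1, 2, ...
clock_trace = range(len(brain_trace))

# An arbitrary relabeling pairs each clock tick with the brain state it
# supposedly "implements" on this run. A purely structural notion of
# computational equivalence has trouble ruling such mappings out.
for tick, state in zip(clock_trace, brain_trace):
    print(f"clock tick {tick} <-> brain state '{state}'")
```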

Comment author: ahbwramc 13 December 2013 10:50:09PM 4 points [-]

Could the relevant moral change happen going from B to C, perhaps? That is, maybe a mind needs to actually be physically/causally computed in order to experience things. Then the torture would have occurred whenever John's mind was first simulated, but not for subsequent "replays," where you're just reloading data.

Comment author: summerstay 16 December 2013 04:12:29PM 1 point [-]

Check out "Counterfactuals Can't Count" for a response to this. Basically, if a recording is different in what it experiences than running a computation, then two computations that calculate the same thing in the same way, but one has bits of code that never run, experience things differently.

Comment author: joaolkf 05 December 2013 03:04:57AM 1 point [-]

I never came across this draft. Is it new? (Though he has been working on it for quite some time.) I will take a look at it. But beforehand, my general view on simulations/emulations is that even purely non-agent statistical simulations of an agent's behaviour, if precise enough, would contain what matters for suffering/pleasure. Memories, feelings, thoughts, and so on would all be scattered across many, many variables, but the correlations that would have to hold between all of these might still guarantee that there is a (perhaps sentient) agent there.

Comment author: summerstay 05 December 2013 02:27:33PM 1 point [-]

I found the draft via this post from the end of June 2013.

Ethics of Brain Emulation

1 summerstay 04 December 2013 07:19PM

I felt like this draft paper by Anders Sandberg was a well-thought-out essay on the morality of experiments on brain emulations. Is there anything you disagree with here, or think he should handle differently?

http://www.aleph.se/papers/Ethics%20of%20brain%20emulations%20draft.pdf

Comment author: summerstay 25 November 2013 06:11:37PM 9 points [-]

One rational ability that people are really good at, but that has proven hard to automate (we haven't made much progress), is applying common-sense knowledge to language understanding. Here's a collection of sentences where the referent of a pronoun is ambiguous, but we don't even notice, because we are able to match it up as quickly as we read: http://www.hlt.utdallas.edu/~vince/data/emnlp12/train-emnlp12.txt
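
For a concrete instance of the kind of ambiguity the linked file collects, here is Winograd's classic pair (the textbook example; I haven't checked whether it appears verbatim in that file): swapping one verb flips which noun "they" refers to, and only common-sense knowledge tells us which.

```python
examples = [
    # "they" = the councilmen
    "The city councilmen refused the demonstrators a permit "
    "because they feared violence.",
    # "they" = the demonstrators
    "The city councilmen refused the demonstrators a permit "
    "because they advocated violence.",
]

for sentence in examples:
    print(sentence)
```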
