
dxu comments on GAZP vs. GLUT - Less Wrong

33 Post author: Eliezer_Yudkowsky 07 April 2008 01:51AM



Comment author: dxu 17 March 2015 01:17:47AM 0 points

Ah. I think I understand your position a bit better now; thanks. Now let me ask you the following question:

Suppose I take a volume of space large enough to hold a human brain--say, a one-meter cube. Now suppose that I fill that space with a random arrangement of quarks and electrons. This will almost certainly produce nothing more than a shapeless blob of matter. But suppose that I keep doing this, over and over again, until finally, after perhaps quintillions upon quintillions of trials, I manage to construct a human brain purely by chance. (This is a scenario physicists have actually speculated about, known as the Boltzmann brain.)

Assuming that this brain doesn't die immediately due to being created in a vacuum, would you agree that it is conscious?

Comment author: MarsColony_in10years 22 March 2015 04:10:02AM 0 points

The vast majority of such brains would not be. They'd just be hunks of dead meat, no different from the brain of a cadaver. A tiny subset, however, would be conscious, at least until they ran out of oxygen or whatever and died.

I'm not objecting to the manner in which the GLUT is created, but merely observing that it doesn't have a form that seems like it would give rise to consciousness. Without knowing the exact mechanism by which human brains give rise to consciousness, it is difficult to say precisely where to draw the line between conscious and not conscious, but a GLUT doesn't seem to be structured in a way that could think. I'm arguing that it is possible, at least in principle, to cheat a Turing test with a GLUT.
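The point about cheating a Turing test can be sketched concretely. The following toy lookup-table responder is a minimal illustration of the GLUT idea under the obvious assumption (a real GLUT would need an entry for every possible conversation history, which is what makes it physically absurd but conceptually coherent); the table entries and function names here are hypothetical, purely for illustration:

```python
# Toy "Giant Lookup Table" (GLUT) responder: every reply is a pure
# table lookup keyed on the entire conversation history so far.
# No computation resembling thought occurs beyond the lookup itself.

# Hypothetical table: maps a conversation history (tuple of utterances)
# to a canned reply. A true GLUT would enumerate all possible histories.
GLUT = {
    (): "Hello.",
    ("Hello.",): "Hi there. How are you?",
    ("Hello.", "Hi there. How are you?", "Fine, thanks."): "Glad to hear it.",
}

def glut_reply(history):
    """Return the table's reply for a given conversation history."""
    return GLUT.get(tuple(history), "I don't understand.")

print(glut_reply([]))          # first utterance
print(glut_reply(["Hello."]))  # reply to "Hello."
```

The structural point is that `glut_reply` passes for conversation only to the extent that its table happens to cover the interrogator's inputs; there is no mechanism inside it that could plausibly be doing any thinking.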

I gave a few more comments in response to blossom's question if you are interested.