Psy-Kosh comments on GAZP vs. GLUT - Less Wrong

33 Post author: Eliezer_Yudkowsky 07 April 2008 01:51AM

Comment author: Psy-Kosh 07 April 2008 03:35:09AM 2 points [-]

Hrm... as far as no one actually being willing to jump in and say "a GLUT can be/is conscious"... what about Moravec and Egan? (Egan in Permutation City, Moravec in "Simulation, Consciousness, Existence".) I don't recall them explicitly coming out and saying it, but it does seem to have been implied.

Anyways, I think I'm about to argue it... or at least argue that there's something here that's seriously confusing me.

Okay, so you say that it's the generating process of the GLUT that has the associated consciousness, rather than the GLUT itself. Fine...

But where exactly is the breakdown between that and, say, the process that generates a human-equivalent AI? Why not say that process is where the consciousness resides, rather than in the AI itself? If one accepts at least some level of functionalism, allowing some optimizations and so on in the internal computations, then the internal "levers" can end up looking algorithmically very, very different from the external behavior, even if that behavior is identical.

In other words, if I start with the "correct" rods and levers to produce consciousness, then optimize various bits of it incrementally... at what point does the optimization process itself contain the majority of the consciousness?

More concretely, let's do something analogous to that hashlife program: create a bunch of mini-GLUTs for clusters of neurons, rather than a single super-GLUT for the entire brain.
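The mini-GLUT idea can be sketched as precomputing the transition for every possible state of a small cluster. This is a toy illustration, not a claim about real neural dynamics: the binary cluster states and the `step_cluster` update rule here are hypothetical stand-ins, chosen only to show how a cluster's behavior can be replaced wholesale by table lookup.

```python
from itertools import product

def step_cluster(state):
    # Toy update rule standing in for a cluster of neurons:
    # each cell becomes 1 iff exactly one of its ring neighbours was 1.
    n = len(state)
    return tuple(
        1 if (state[(i - 1) % n] + state[(i + 1) % n]) == 1 else 0
        for i in range(n)
    )

def build_mini_glut(cluster_size):
    # Precompute the next state for every possible cluster state:
    # after this, the cluster's dynamics are pure lookup, no computation.
    return {
        state: step_cluster(state)
        for state in product((0, 1), repeat=cluster_size)
    }

glut = build_mini_glut(4)  # 2^4 = 16 entries
assert glut[(0, 1, 0, 0)] == step_cluster((0, 1, 0, 0))
```

The table's size doubles with each cell added to the cluster, which is exactly why grouping clusters into megaclusters (as below) trades ever more precomputation for ever less runtime computation.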

What's going on there? Is the location of the consciousness now kinda spread out and scrambled in spacetime, a la Permutation City?

As we make the clusters we're precomputing all possible states for larger, or group clusters into megaclusters... does the localization of the consciousness start to incrementally concentrate in spacetime toward the optimization process?

To perhaps make this really concrete: implement a Turing machine in a Life universe, implement a brain simulation on top of that, and start with a regular Life simulation. Then switch to regular hashlife, and then incrementally "optimize" with larger and larger clusters of cells, so you end up with ever larger lookup tables. That is: run the sim for a bit, then pause and do a step of optimization of the Life CA algorithm (Life -> regular hashlife), run for a bit, pause, make a hash-hashlife or make larger clusters, continue running, etc.
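The endpoints of that optimization spectrum can be sketched in a few lines: ordinary step-by-step Life evolution at one end, and at the other the degenerate "fully optimized" limit where whole-grid transitions are memoised into a lookup table. This is a simplification of what hashlife actually does (it memoises recursively over quadrants of increasing size, which this sketch omits), but it shows the basic move of replacing computation with lookup:

```python
def life_step(grid):
    # One step of Conway's Game of Life on a small toroidal grid
    # (grid is a tuple of tuples of 0/1, so it is hashable).
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    new = []
    for r in range(rows):
        row = []
        for c in range(cols):
            n = live_neighbours(r, c)
            row.append(1 if n == 3 or (grid[r][c] and n == 2) else 0)
        new.append(tuple(row))
    return tuple(new)

cache = {}

def cached_step(grid):
    # The degenerate endpoint of the optimization: memoise whole-grid
    # transitions, so any repeated state is answered by lookup alone.
    if grid not in cache:
        cache[grid] = life_step(grid)
    return cache[grid]

# A blinker oscillates with period 2, so after one full cycle every
# further step is served from the lookup table rather than recomputed.
blinker = tuple(
    tuple(1 if (r == 2 and c in (1, 2, 3)) else 0 for c in range(5))
    for r in range(5)
)
assert cached_step(cached_step(blinker)) == blinker
```

After the first period, the "simulation" never computes a Life rule again; whether anything changes for the simulated brain at that point is exactly the question being asked.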

This isn't so much an argument for a specific perspective as a thought experiment and a question. I'm honestly not entirely sure how to view this. "Simplest" seems to be Permutation City-style "scrambled in spacetime" consciousness.