FAWS comments on We are not living in a simulation - Less Wrong

-9 Post author: dfranke 12 April 2011 01:55AM


Comment author: FAWS 12 April 2011 07:11:02PM *  0 points [-]

Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.

Do we know enough to tell for sure?

Comment author: dfranke 12 April 2011 07:14:15PM *  0 points [-]

Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain"? No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.

Comment author: gwern 12 April 2011 11:50:49PM 1 point [-]

"know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain"?

Depending on various details, this might well be impossible. Rice's theorem comes to mind: if no nontrivial behavioral property of arbitrary Turing machines can be decided, that doesn't bode well for similar questions about Turing-equivalent substrates.

Comment author: dfranke 13 April 2011 12:08:44AM *  2 points [-]

Brains, like PCs, aren't actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they'd need something equivalent to a Turing machine's infinite tape. There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn't mean tractable.
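To make the decidability point concrete: for any system with finitely many configurations, halting is decidable by simulating until the machine either halts or revisits a configuration (at which point it must loop forever). A minimal sketch; the `step`/`halts` names and the hashable-state encoding are illustrative choices, not anything from the thread:

```python
def halts(step, start):
    """Decide halting for a finite-state system.

    step(s) returns the next state, or None when the machine halts.
    With finitely many states, any run longer than the number of
    distinct states must revisit one, so tracking visited states
    gives a terminating decision procedure.
    """
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False  # revisited a configuration: infinite loop
        seen.add(state)
        state = step(state)
    return True  # reached a halting configuration
```

The catch, as the thread goes on to note, is resource usage: the `seen` set can grow as large as the whole state space, which for a brain-sized machine is astronomically beyond anything runnable.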

Comment author: gwern 13 April 2011 12:27:56AM 1 point [-]

There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines.

It is true that you can run a finite state machine until it either terminates or starts looping, or runs past the Busy Beaver number for that length of tape; but while you may avoid Rice's theorem by pointing out that 'actually brains are just FSMs', you replace it with another question: 'are these FSMs decidable within the resources available to us?'

Given how fast the Busy Beaver grows, the answer is almost surely no - there is no runnable algorithm. Leading to the dilemma that either there are insufficient resources (per above), or it's impossible in principle (if there are unbounded resources there likely are unbounded brains and Rice's theorem applies again).

(I know you understand this because you pointed out 'Of course, decidable doesn't mean tractable.' but it's not obvious to a lot of people and is worth noting.)

Comment author: dfranke 13 April 2011 12:40:09AM *  1 point [-]

This is just a pedantic technical correction since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as Busy Beaver. The relevant complexity class for the hardest natural problems concerning FSMs, such as determining whether two regular expressions with squaring represent the same language, is the class of EXPSPACE-complete problems. This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
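For contrast, the same question asked of DFAs directly (rather than of succinct regular expressions) is not just decidable but cheap: two DFAs accept the same language iff no reachable pair of states in their product automaton disagrees on acceptance. A sketch; the transition-table encoding and function name are illustrative choices:

```python
from collections import deque

def dfa_equal(delta1, accept1, start1, delta2, accept2, start2, alphabet):
    """Check two complete DFAs for language equality.

    Breadth-first search over the product automaton: the languages
    differ iff some reachable state pair (s1, s2) has exactly one of
    the two states accepting. Transitions are given as dicts mapping
    (state, symbol) -> state.
    """
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        if (s1 in accept1) != (s2 in accept2):
            return False  # found a distinguishing string
        for a in alphabet:
            nxt = (delta1[s1, a], delta2[s2, a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

The blow-up arrives only when the machines are described succinctly: converting a regular expression (let alone one with squaring) to a DFA can explode the state count exponentially, which is where the EXPSPACE hardness lives.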

Comment author: FAWS 12 April 2011 07:33:46PM -2 points [-]

Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?"

Yes

No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.

That shows something about potential, easily accessible conceptspace, not necessarily about the conceptspace the brain actually uses. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don't see how they can serve as a replacement in your argument when we can't identify them.

Comment author: dfranke 12 April 2011 07:46:45PM 0 points [-]

The only role that this example-of-an-idea is playing in my argument is as an analogy to illustrate what I mean when I assert that qualia physically exist in the brain without there being such thing as a "qualia cell". You clearly already understand this concept, so is my particular choice of analogy so terribly important that it's necessary to nitpick over this?

Comment author: FAWS 12 April 2011 08:26:43PM -2 points [-]

The very same uncertainty would also apply to qualia (assuming that's even a meaningful concept), only worse, because we understand them even less. If we can't answer the question of whether a particular concept is embodied in discrete anatomy, how could we possibly answer it for qualia, when we can't even verify their existence in the first place?