Lumifer comments on Three questions about source code uncertainty - Less Wrong

Post author: cousin_it 24 July 2014 01:18PM




Comment author: Lumifer 24 July 2014 06:27:01PM 0 points [-]

In principle, it is possible to simulate a brain on a computer

That's a hypothesis, unproven and untested. Especially if you claim the equivalence between the mind and the simulation -- which you have to do in order to say that the simulation delivers the "source code" of the mind.

you can think of something's source code as a (computable) mathematical description of that thing.

A mathematical description of my mind would be beyond the capabilities of my mind to understand (and so, know). Besides, my mind changes constantly both in terms of patterns of neural impulses and, more importantly, in terms of the underlying "hardware". Is neuron growth or, say, serotonin release part of my "source code"?

Comment author: Adele_L 24 July 2014 06:51:45PM 3 points [-]

The laws of physics as we currently understand them are computable (not efficiently, but still), and there is no reason to hypothesize new physics to explain how the brain works. I'm claiming there is an isomorphism.

Dynamic systems have mathematical descriptions also...

Comment author: ThisSpaceAvailable 26 July 2014 04:09:13AM 0 points [-]

That's a hypothesis, unproven and untested.

In the broadest sense, the hypothesis is somewhat trivial. For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the "right" exchange, such that it is indistinguishable from a human. Where the hypothesis is on weaker ground is when the requirement is not for a fixed n.
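The fixed-n argument can be sketched as a toy lookup table (illustrative Python; the `canned_reply` mapping is a placeholder, not a claim about what the "right" replies to a real human would be):

```python
# Toy illustration of the fixed-n argument: over a channel carrying
# n-bit messages, there are only 2**n possible inputs, so an agent's
# channel behavior can in principle be captured by a finite table.

n = 3  # channel capacity in bits (tiny, for illustration)

# Enumerate every possible n-bit exchange the channel permits.
all_exchanges = [format(i, f"0{n}b") for i in range(2 ** n)]
assert len(all_exchanges) == 2 ** n  # 2^n possible exchanges

# A "simulation" indistinguishable over this channel is just a table
# mapping each possible input to the "right" reply.  Here the replies
# are arbitrary placeholders (the reversed bit string).
canned_reply = {msg: msg[::-1] for msg in all_exchanges}

def simulated_agent(message: str) -> str:
    """Return the precomputed exchange for this input."""
    return canned_reply[message]

print(len(all_exchanges))      # 8 possible exchanges for n = 3
print(simulated_agent("110"))  # "011"
```

The point is only existence: for any fixed n, such a table exists, whatever the "right" entries are. Nothing in the sketch says the table is findable or affordable, which is exactly where the disagreement below picks up.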

Comment author: Lumifer 28 July 2014 03:45:44PM 1 point [-]

In the broadest sense, the hypothesis is somewhat trivial.

No, I don't think so.

For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the "right" exchange, such that it is indistinguishable from a human.

Are you making Searle's Chinese Room argument?

In any case, even if we accept the purely functional approach, it doesn't seem obvious to me that you must be able to create a simulation which picks the "right" answer in the future. You don't get to run 2^n instances and say "Pick whichever one satisfies your criteria".

Comment author: ThisSpaceAvailable 29 July 2014 02:54:17AM 0 points [-]

Well, I did say "In the broadest sense", so yes, that does imply a purely functional approach.

You don't get to run 2^n instances and say "Pick whichever one satisfies your criteria".

The claim was that it is possible in principle. And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.

Comment author: Lumifer 29 July 2014 04:04:40AM 1 point [-]

And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.

That's not simulating intelligence. That's just a crude exhaustive search.

And I am not sure you have enough energy in the universe to run 2^n instances, anyway.