
IlyaShpitser comments on Shawn Mikula on Brain Preservation Protocols and Extensions - Less Wrong Discussion

5 Post author: oge 29 April 2015 02:47AM




Comment author: IlyaShpitser 29 April 2015 01:27:02PM *  4 points [-]

Perhaps he doesn't really understand the implications of universal computability.

Or perhaps he's skeptical of the fidelity of that kind of model. Evolution famously abhors abstraction barriers.

Would you care to quantify your 'almost everyone' claim? Are there surveys, etc.?

Comment author: jacob_cannell 29 April 2015 06:17:59PM *  6 points [-]

No - it's just an observation from my experience (CS degree in the '90s).

Just to be clear, he is making a clear conceptual mistake that indicates he does not understand universal computability:

... the reason for this is simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem ...

If there is some other weird computer architecture that can reproduce the causal structure of neural interactions in wetware, then a universal computer (such as a Von Neumann machine) can also reproduce the causal structure of neural interactions simply by simulating the weird computer. This really is theory of computation 101.
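The universality argument above can be sketched concretely. In this illustrative example (my own choice of stand-in, not anything from the thread), the "weird" architecture is a one-dimensional cellular automaton running rule 110, which is known to be Turing-complete and is about as far from a von Neumann design as architectures get. An ordinary sequential program reproduces its causal structure exactly by interpretation:

```python
# A von Neumann machine simulating a non-von-Neumann architecture:
# rule 110, an elementary cellular automaton, updated by interpretation.
# Each cell's next state depends on its left neighbor, itself, and its
# right neighbor; the causal structure is reproduced exactly, just slower.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Compute one synchronous update of every cell (wrapping at edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 20 + [1]  # simple initial condition
for _ in range(5):
    cells = step(cells)
```

The interpreter loses nothing but speed: every causal dependency of the simulated machine is honored at every step, which is the content of the theory-of-computation-101 point.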

Comment author: Lumifer 29 April 2015 06:45:22PM *  -1 points [-]

In theory there is no difference between theory and practice. In practice there is.

A physical Turing machine can simulate an iPhone, in theory. Would you like to try to build one? :-D

Comment author: Viliam 30 April 2015 06:57:17AM 3 points [-]

The only problems would be speed and memory.

There is a tiny chance that when he said "does not reproduce the causal structure of neural interactions", what he actually meant was "would simulate the neural interactions extremely slowly", but if that was the case, he really could have said it better.

My priors are that when people without formal computer science education talk about brains and computers, they usually believe that parallelism is the magical power that gives you much more than merely an increase in speed.
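The point that parallelism buys only speed can be illustrated with a minimal sketch (this toy per-unit rule is mine, not a neuron model): updating many units concurrently and updating them one at a time produce bit-identical outputs.

```python
# Illustrative only: a "parallel" update of many independent units,
# computed both sequentially and with a thread pool. The results are
# identical; parallel hardware changes wall-clock time, not the outputs.
from concurrent.futures import ThreadPoolExecutor

def unit_update(x):
    """Some arbitrary per-unit update rule."""
    return 0.5 * x + 1.0

inputs = [float(i) for i in range(8)]

sequential = [unit_update(x) for x in inputs]
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(unit_update, inputs))

assert sequential == parallel  # same result, different execution order
```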

Comment author: jacob_cannell 29 April 2015 08:49:05PM 0 points [-]

In practice it's just a matter of computational power. His statement makes it fairly clear that he doesn't understand this distinction.

Circuit-level simulations of advanced microchips certainly exist - this is not just theory. Yes, they are super expensive when run on standard CPUs (naive real-time simulation of an iPhone CPU would require on the order of an exaflop). However, low-level circuit binary logic ops are much simpler than the 32/64-bit ops that CPUs implement, and there are more advanced simulation algorithms. Companies such as Cadence provide general-purpose binary logic emulators that actually work, in practice and at reasonable cost, not just in theory.
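What gate-level simulation means can be shown in miniature (a toy stand-in for what commercial emulators do at vastly larger scale, not Cadence's actual method): a one-bit full adder built from individual logic gates, checked against ordinary integer addition.

```python
# Gate-level simulation of a 1-bit full adder: each line models one
# physical logic gate, so the simulation reproduces the circuit's
# causal structure, not just its input/output behavior.
def full_adder(a, b, cin):
    s1 = a ^ b          # XOR gate
    total = s1 ^ cin    # XOR gate
    c1 = a & b          # AND gate
    c2 = s1 & cin       # AND gate
    cout = c1 | c2      # OR gate
    return total, cout

# Exhaustively verify the gate network against integer arithmetic.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
```

Chaining such adders and gates gives a full datapath; the cost of simulating a real chip this way is large but finite, which is the "just a matter of computational power" point.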

Comment author: V_V 30 April 2015 03:58:36PM *  0 points [-]

"He does not understand universal computability" seems an overstatement, universal computability doesn't logically imply functionalism, although I agree that it tends to imply that definitions of consciousness which are not invariant under simulation have little epistemic usefulness.

Comment author: Luke_A_Somers 29 April 2015 05:52:54PM 2 points [-]

The problem is, he just - JUST - got done saying that he's talking about the exact case where it turns out that the simulation's subject completely encompasses the source of consciousness.

If that were his objection, it wouldn't matter if it was Von Neumann or not.

Comment author: Silver_Swift 30 April 2015 12:28:29PM *  0 points [-]

To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.

My own position is that I'm highly confused by consciousness in general, but I'm leaning slightly towards substance dualism; I have a background in computer science.

*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.

Comment author: dxu 30 April 2015 03:49:05PM *  3 points [-]

I'm leaning slightly towards substance dualism

This seems a bit like trying to fix a problem by applying a patch that causes a lot more problems. The stunning success of naturalistic explanations so far in predicting the universe (plus Occam's Razor) would alone be enough to convince me that consciousness is a naturalistic process (and, in fact, they were what convinced me, plus a few other caveats). I'd assign maybe 95% probability to this conclusion. Still, I'd be interested in hearing what led you to your conclusion. Could you expand in more detail?