Dre comments on Ask LW: ω-self-aware systems - Less Wrong

-1 Post author: Technoguyrob 16 December 2012 10:18PM


Comment author: Dre 16 December 2012 11:00:11PM * 2 points

I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself at full resolution, but there are all sorts of things we can't simulate like that. Understanding (as the word is more commonly used) usually involves finding out which parts of a system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have the gas laws. And I think most people would say that the gas laws show more understanding of thermodynamics than whatever you would get out of a complete particle-by-particle simulation anyway.
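The gas-law point can be made concrete with a small sketch (not from the original comment; the specific numbers are illustrative assumptions): the ideal gas law PV = nRT predicts a macroscopic property from three numbers, where a "full resolution" simulation would have to track on the order of 10^23 particles.

```python
# A minimal sketch of the "laws compress simulation" point.
# One mole of gas contains ~6.02e23 particles, but the ideal gas
# law needs only bulk quantities to predict the pressure.

R = 8.314   # molar gas constant, J/(mol*K)
n = 1.0     # amount of gas, moles (~6.02e23 particles never simulated)
T = 300.0   # temperature, kelvin (assumed value for illustration)
V = 0.0248  # volume, cubic meters (assumed value for illustration)

P = n * R * T / V  # pressure in pascals, from PV = nRT
print(round(P))    # roughly atmospheric pressure
```

Three scalars stand in for ~10^23 degrees of freedom, which is the sense in which the law embodies more "understanding" than a brute-force simulation would.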

Now the question is whether the brain actually does have any "laws" like this. IIRC, this is a relatively open question (though I do not follow neuroscience very closely) and in principle it could go either way.

I guess I don't really understand what the purpose of the argument is. Unless we can prove things about this stack of brains, what does it get us? And how far "down" the evolutionary ladder does this argument work? Are cats ω-self-aware? Computing clusters?

Comment author: Manfred 17 December 2012 12:08:14AM 4 points

> caching out

typically "cashing out."

Comment author: Technoguyrob 16 December 2012 11:21:55PM 0 points

Good point. It might be that any 1-self-aware system is ω-self-aware.