An unrelated generic New York Times article. Paper beats blog!
How about a news show? Best watched without sound.
I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself with full resolution. But there are all sorts of things we can't simulate like this. Understanding (as the word is more commonly used) usually involves finding out which parts of the system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have gas laws. And I think most people would say that the gas laws demonstrate more understanding of thermodynamics than whatever you would get out of a complete simulation anyway.
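The gas-law point can be made concrete with a minimal sketch (my own illustration, not from the original comment): the ideal gas law collapses the state of ~10^23 particles into one arithmetic relation, which is exactly the kind of compression "understanding" buys you over brute simulation.

```python
# Ideal gas law: one macroscopic relation, P = nRT / V, replaces a
# particle-by-particle simulation of ~10^23 molecules.
R = 8.314  # universal gas constant, J/(mol*K)

def pressure(n_moles, temp_kelvin, volume_m3):
    """Pressure of an ideal gas from the macroscopic state variables."""
    return n_moles * R * temp_kelvin / volume_m3

# One mole at room temperature in about 24.5 litres gives
# roughly atmospheric pressure (~100 kPa).
p = pressure(1.0, 295.0, 0.0245)
```

Three numbers in, one number out; the microstate never enters the calculation.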
Now the question is whether the brain actually does have any "laws" like this. IIRC, this is a relatively open question (though I do not follow neuroscience very closely) and in principle it could go either way.
I guess I don't really understand what the purpose of the argument is. Unless we can prove things about this stack of brains, what does it get us? And how far "down" the evolutionary ladder does this argument work? Are cats omega-self-aware? Computing clusters?
Took most of it. I pressed enter accidentally after the charity questions. I would like to fill out the remainder. Is there a way I can do that without messing up the data?
Well, I disagree that complimenting a stranger's netbook is creepy, but...
This disagreement on what is creepy demonstrates precisely how hard it is to predict in advance if some behavior will be perceived as creepy or not.
Though I don't think it's that simple, because both sides are claiming that the other side is not reporting how they truly feel. One side claims that people are calling things creepy semi-arbitrarily to raise their own status; the other claims that people are intentionally refusing to recognize creepy behavior as creepy so they don't have to stop it (or, being slightly more charitable, so they don't take a status hit for being creepy).
Multiplication by a constant is an affine transformation. This clearly is a very big problem.
But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.
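A minimal sketch of that claim (my own illustration, not from the original comment): applying a positive affine transformation u' = a·u + b to every utility leaves the argmax, and hence the chosen option, unchanged.

```python
# A positive affine transformation u' = a*u + b (with a > 0) preserves
# the ordering of utilities, so it cannot change which option is chosen.
def argmax_choice(utilities):
    """Index of the option with the highest utility."""
    return max(range(len(utilities)), key=lambda i: utilities[i])

utils = [3.0, -1.5, 7.2, 0.0]
a, b = 2.5, -4.0  # any a > 0 and any b work
transformed = [a * u + b for u in utils]

# Same option wins before and after the transformation.
assert argmax_choice(utils) == argmax_choice(transformed)
```

With a negative multiplicative constant the ordering would reverse, which is why the restriction to a > 0 matters.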
Editors can edit top level posts but have no access to comments. I could ban them, but they have useful content that doesn't deserve to be hidden entirely.
If you can't tell whose side someone is on, they are not on yours. -Warren E. Buffett
Wouldn't this be a problem for tit for tat players going up against other tit for tat players (but not knowing the strategy of their opponent)?
Doesn't this imply that an infinity of different subjective consciousnesses are being simulated right now, if only we knew how to assign inputs and outputs correctly?
Not necessarily. See Chalmers's reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the "internal" structure of the computation be the same in the isomorphism, and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn't be close to large enough to simulate a (human) consciousness.
One example: The Thing That I Protect.
...except for one last thing; so after tomorrow, I plan to go back to posting about plain old rationality on Monday.
If that makes you want to know what the "last thing" is, you have to click Next no fewer than ten times on Articles tagged ai to find out. Another is "More on this tomorrow" in Resist the Happy Death Spiral.
Wait, since Chloe's theory was a TVTropes reference (see pedanterrific's comment), could the vrooping thing be too?
Oh my Bayes, it's completely obvious:
It's a lampshade. But what was Eliezer lampshading?
ETA: Obvious in retrospect, I should say. Which doesn't actually mean obvious at all.
This feels like reading too much into it, but is
supposed to be something about the fourth wall?