Comment author: Wei_Dai 29 July 2010 10:56:24PM *  2 points [-]

How a human might come to believe, without being epistemically privileged, that a sequence is probably a sequence of busy beavers, is a deep problem, similar to the problem of distinguishing halting oracles from impostors. (At least one mathematical logician who has thought deeply about the latter problem thinks that it's doable.)

But in any case, the usual justification for AIXI (or adopting the universal prior) is that (asymptotically) it does as well as or better than any computable agent, even one that is epistemically privileged, as long as the environment is computable. Eliezer and others were claiming that it does as well as or better than any computable agent, even if the environment is not computable, and this is what my counter-example disproves.

Comment author: ocr-fork 29 July 2010 11:12:49PM 0 points [-]

What about the agent using Solomonoff's distribution? After seeing BB(1), ..., BB(2^n), the algorithmic complexity of BB(1), ..., BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1), ..., BB(2^n)) < 100. (This includes, for example, 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc.) **It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.**

I don't understand how the bolded part follows. The best explanation by round BB(2^n) would be "All 1's except for the Busy Beaver numbers up to 2^n", right?
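The "cheap" round indices in Wei Dai's argument can be sketched concretely. A minimal illustration in Python, with a small stand-in value for BB(2^n) (the real values are uncomputably large): each index below has a short description conditional on BB(2^n), which is what gives it low conditional complexity K(i | BB(1), ..., BB(2^n)).

```python
# Illustrative sketch only: `b` stands in for BB(2^n), which is
# astronomically large in the actual argument. Each index has a
# short description *given* b, so a Solomonoff predictor that has
# already seen the sequence up to BB(2^n) treats it as cheap.
def simple_indices(b):
    """Round indices with short descriptions relative to b."""
    return [
        2 * b,           # "double b"
        2 * b + 1,       # "double b, plus one"
        3 * b ** 2 + 4,  # "three times b squared, plus four"
    ]
```

With b = 2 this yields [4, 5, 16]; the point is only that the description length of each index is constant in b, not that these particular indices matter.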

Comment author: Wei_Dai 29 July 2010 10:11:51PM 1 point [-]

BB(100) is computable. Am I missing something?

Maybe... by BB I mean the Busy Beaver function Σ as defined in this Wikipedia entry.

Comment author: ocr-fork 29 July 2010 10:14:28PM 0 points [-]

Right, and...

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).

So why can't the universal prior use it?
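The "trivial but noteworthy fact" is easy to see concretely: any finite prefix of Σ is computed by a program that simply hard-codes the values in a lookup table. A minimal sketch in Python, using the standard known values Σ(0)=0, Σ(1)=1, Σ(2)=4, Σ(3)=6, Σ(4)=13 (values beyond n = 4 are not known exactly):

```python
# A finite prefix of the Busy Beaver function Σ is trivially
# computable: hard-code the finitely many values in a table.
# (Known values from the Busy Beaver literature; Σ(n) for n > 4
# is unknown, and the infinite function Σ is uncomputable.)
SIGMA = {0: 0, 1: 1, 2: 4, 3: 6, 4: 13}

def sigma_prefix(n):
    """Return Σ(n) for n <= 4 via table lookup."""
    return SIGMA[n]
```

This also shows why "BB(100) is computable" is true yet unhelpful to an agent: the lookup-table program exists, but writing it down requires already knowing Σ(100).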

Comment author: Wei_Dai 29 July 2010 09:04:27PM *  1 point [-]

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this? Or in other words, the only reason the standard proofs of Solomonoff prediction's optimality go through is that they assume predictions are represented using numerals?

Comment author: ocr-fork 29 July 2010 10:02:50PM 0 points [-]

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?

BB(100) is computable. Am I missing something?

Comment author: ocr-fork 27 July 2010 10:58:04PM 0 points [-]

To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

I don't see how Bayesian utility maximizers lack the "philosophical abilities" to discover these ideas. Also, the last one is only half true. The "wrong" link is about decision theory paradoxes, but a Bayesian utility maximizer would overcome these with practice.

Comment author: ocr-fork 23 July 2010 11:06:37PM 6 points [-]

astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of anything that's random.

Comment author: ocr-fork 26 June 2010 04:41:56AM *  4 points [-]

But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.

Does anyone else feel like this is just a weird remake of cached thoughts?

Comment author: cousin_it 24 June 2010 10:51:17AM *  -2 points [-]

No idea. We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.

Comment author: ocr-fork 24 June 2010 04:35:24PM 5 points [-]

They remember being themselves, so they'd say "yes."

I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."

Comment author: Kaj_Sotala 24 June 2010 08:48:40AM 0 points [-]

Sure. What about it?

Comment author: ocr-fork 24 June 2010 03:58:28PM 1 point [-]

Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.

Comment author: Kaj_Sotala 23 June 2010 08:50:00PM *  4 points [-]

Part of the reason why I make available records of e.g. the books I own, the music I listen to and the board games I've played (though this last list is horribly incomplete) is to make it possible for someone to reconstruct me in the future. There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction. A lot would be missing, of course, but it's still better than nothing.

I don't put that big of a priority on this, though - I haven't made an effort to make sure that the contents of my hard drive will remain available somewhere after my death, for instance. It's more of an entertaining thought I like to play with.

Comment author: ocr-fork 24 June 2010 06:05:06AM *  5 points [-]

There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.

That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.

Comment author: PhilGoetz 24 June 2010 04:17:55AM 1 point [-]

I find it deeply weird that nobody has pointed out that the information describing you, written as prose, is not conscious. This is a major drawback. The OP mentioned it, and dared people to take him/her up on it, and nobody did.

I attribute this to a majority of people on LW taking Dennett's position on consciousness, which is basically to try to pretend that it doesn't exist, and that being a materialist means believing that there is no "qualia problem".

Comment author: ocr-fork 24 June 2010 04:37:09AM 4 points [-]

Is a vitrified brain conscious?
