Comment author: Wei_Dai 29 July 2010 10:56:24PM *  2 points

How a human might come to believe, without being epistemically privileged, that a sequence is probably a sequence of busy beavers, is a deep problem, similar to the problem of distinguishing halting oracles from impostors. (At least one mathematical logician who has thought deeply about the latter problem thinks that it's doable.)

But in any case, the usual justification for AIXI (or adopting the universal prior) is that (asymptotically) it does as well as or better than any computable agent, even one that is epistemically privileged, as long as the environment is computable. Eliezer and others were claiming that it does as well as or better than any computable agent, even if the environment is not computable, and this is what my counter-example disproves.

Comment author: ocr-fork 29 July 2010 11:12:49PM 0 points

What about the agent using Solomonoff's distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak.* It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.

* This includes, for example, 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc.

I don't understand how the bolded part (that the predictor will erroneously bet on 0 in those rounds and therefore lose relative to a human) follows. The best explanation by round BB(2^n) would be "All 1's except for the Busy Beaver numbers up to 2^n", right?
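The betting rule in dispute can be sketched as a toy program. This is not real Solomonoff induction: K(i | BB(1),...,BB(2^n)) is uncomputable, so the `description_length` stand-in below is a crude, hypothetical upper bound that only checks whether round i is a short expression in an already-observed value.

```python
# Toy illustration of "bet 0 in any round i where the conditional
# complexity of i, given the observed BB values, is below 100 bits".
# The complexity measure here is a hypothetical stand-in, not K.

def description_length(i, observed, expr_cost=8):
    """Upper-bound description length (in bits) for round index i:
    either write i out as a numeral, or as one of a few short
    expressions (2v, v+1, v*v) in an observed value v."""
    best = max(i.bit_length(), 1)  # cost of writing i as a numeral
    for v in observed:
        if i in (2 * v, v + 1, v * v):
            best = min(best, expr_cost)
    return best

def bet(i, observed, threshold=100):
    """Bet 0 exactly when i looks simple relative to what we've seen."""
    return 0 if description_length(i, observed) < threshold else 1
```

For example, with a single huge observed value standing in for BB(2^n), the rule bets 0 on round 2*BB(2^n) but 1 on an unrelated round of similar magnitude.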

Comment author: Wei_Dai 29 July 2010 10:11:51PM 1 point

BB(100) is computable. Am I missing something?

Maybe... by BB I mean the Busy Beaver function Σ as defined in this Wikipedia entry.

Comment author: ocr-fork 29 July 2010 10:14:28PM 0 points

Right, and...

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).

So why can't the universal prior use it?
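The "trivial but noteworthy fact" can be made concrete: a finite prefix of Σ is computable because a program can simply hard-code the known values, and what fails to exist is a single program correct for every n. A minimal sketch, using the established values Σ(0)=0, Σ(1)=1, Σ(2)=4, Σ(3)=6, Σ(4)=13:

```python
# A finite prefix of the Busy Beaver function Sigma is trivially
# computable: hard-code the values. These are the known values for
# n = 0..4; no single program computes Sigma for all n.
SIGMA_PREFIX = {0: 0, 1: 1, 2: 4, 3: 6, 4: 13}

def sigma_prefix(n):
    """Computable function agreeing with Sigma on n = 0..4 only."""
    return SIGMA_PREFIX[n]
```

The universal prior does contain every such lookup-table program; the difficulty under discussion is about the whole infinite sequence, not any finite prefix.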

Comment author: Wei_Dai 29 July 2010 09:04:27PM *  1 point

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this? Or in other words, the only reason the standard proofs of Solomonoff prediction's optimality go through is that they assume predictions are represented using numerals?
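The numeral-versus-symbolic distinction can be sketched in code (class names invented for illustration; this is not anyone's actual proposal):

```python
# Hypothetical sketch of the representational point: a Solomonoff-style
# predictor must output its probability as a numeral, while a human can
# instead output an unevaluated expression such as "i-th bit of BB(100)",
# which no computable process can reduce to digits.
from dataclasses import dataclass

@dataclass
class NumeralPrediction:
    p: float  # e.g. 0.75, fully written out as digits

@dataclass
class SymbolicPrediction:
    expr: str  # e.g. "P(next bit is 1) = i-th bit of BB(100)"
    i: int     # which bit of BB(100) the prediction refers to

    def evaluate(self, bb_100_bits: str) -> float:
        """Reduces to a numeral only if the bits of BB(100) are
        supplied from outside; no computable agent can supply them."""
        return float(bb_100_bits[self.i])
```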

Comment author: ocr-fork 29 July 2010 10:02:50PM 0 points

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?

BB(100) is computable. Am I missing something?

Comment author: SilasBarta 29 July 2010 05:02:48PM *  5 points

Wow, that must be some pretty wicked statement. Did someone find the "LW Poster Gödel Strings"?

As best I can tell, all the statement did was increase the probability that some weird people might have nightmares, which might make them not work hard enough (?) on FAI, which might cause UFAI to succeed.

Comment author: ocr-fork 29 July 2010 05:53:18PM 0 points

It outlined a way for a UFAI to blackmail us. Banning the post is a way to fight the blackmail by ignoring it.

Comment author: hegemonicon 29 July 2010 01:23:58PM 26 points

This is silly - there's simply no way to assign a probability of his posts increasing the chance of UFAI with any degree of confidence, to the point where I doubt you could even get the sign right.

For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which overall will hinder its ability to accomplish its goals and makes UFAI more likely.

From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list - which I won't comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

Comment author: ocr-fork 29 July 2010 04:43:10PM -1 points

But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

I've read the post. That excuse is actually relevant.

Comment author: ocr-fork 27 July 2010 10:58:04PM 0 points

To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

I don't see how Bayesian utility maximizers lack the "philosophical abilities" to discover these ideas. Also, the last one is only half true. The "wrong" link is about decision theory paradoxes, but a Bayesian utility maximizer would overcome these with practice.

Comment author: ocr-fork 23 July 2010 11:06:37PM 6 points

astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of anything that's random.

Comment author: ocr-fork 26 June 2010 04:41:56AM *  4 points

But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.

Does anyone else feel like this is just a weird remake of cached thoughts?

Comment author: cousin_it 24 June 2010 10:51:17AM *  -2 points

No idea. We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.

Comment author: ocr-fork 24 June 2010 04:35:24PM 5 points

They remember being themselves, so they'd say "yes."

I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious, I meant "why do you think a vitrified brain is conscious if a book isn't?"

Comment author: Kaj_Sotala 24 June 2010 08:48:40AM 0 points

Sure. What about it?

Comment author: ocr-fork 24 June 2010 03:58:28PM 1 point

Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.
