BB(100) is computable. Am I missing something?
Maybe... by BB I mean the Busy Beaver function Σ as defined in this Wikipedia entry.
Right, and...
A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).
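One way to see this concretely: any particular finite prefix of Σ can be emitted by a hard-coded lookup table, which is itself an ordinary program, even though no single program computes Σ(n) for every n. A minimal Python sketch, using only the proven values of Σ for 2-symbol machines (the function name is illustrative):

```python
# Any finite prefix of Sigma is computable: a hard-coded lookup table
# is a program. What's uncomputable is Sigma as a whole function.
SIGMA = {0: 0, 1: 1, 2: 4, 3: 6, 4: 13}  # proven busy beaver scores

def sigma_prefix(n):
    """Return [Sigma(0), ..., Sigma(n)] for n up to the largest known value."""
    if n not in SIGMA:
        raise ValueError("Sigma(n) has not been proven for this n")
    return [SIGMA[i] for i in range(n + 1)]

print(sigma_prefix(4))  # → [0, 1, 4, 6, 13]
```

The point is not that the table is interesting, but that "computable" only requires some program to exist for each fixed n, not a uniform method for all n.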
So why can't the universal prior use it?
Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this? Or in other words, the only reason the standard proofs of Solomonoff prediction's optimality go through is that they assume predictions are represented using numerals?
Wow, that must be some pretty wicked statement. Did someone find the "LW Poster Gödel Strings"?
As best I can tell, all the statement did was increase the probability that some weird people might have nightmares, which might make them not work hard enough (?) on FAI, which might cause UFAI to succeed.
It outlined a way for a UFAI to blackmail us. Banning the post is a way to fight the blackmail by ignoring it.
This is silly - there's simply no way to assign, with any degree of confidence, a probability to his posts increasing the chance of UFAI, to the point where I doubt you could even get the sign right.
For example, deleting posts because they might add an infinitesimally small amount to the probability of UFAI being created makes this community look slightly more like a bunch of paranoid nutjobs, which overall will hinder its ability to accomplish its goals and makes UFAI more likely.
From what I understand, the actual banning was due to its likely negative effects on the community, as Eliezer has seen similar things on the SL4 mailing list - which I won't comment on. But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
I've read the post. That excuse is actually relevant.
To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.
I don't see how Bayesian utility maximizers lack the "philosophical abilities" to discover these ideas. Also, the last one is only half true. The "wrong" link is about decision theory paradoxes, but a Bayesian utility maximizer would overcome these with practice.
astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which is random and unprovoked rather than stimulus-driven.
But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.
Does anyone else feel like this is just a weird remake of cached thoughts?
No idea. We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.
They remember being themselves, so they'd say "yes."
I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."
Sure. What about it?
Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.
How a human might come to believe, without being epistemically privileged, that a sequence is probably a sequence of busy beavers, is a deep problem, similar to the problem of distinguishing halting oracles from impostors. (At least one mathematical logician who has thought deeply about the latter problem thinks that it's doable.)
But in any case, the usual justification for AIXI (or adopting the universal prior) is that (asymptotically) it does as well as or better than any computable agent, even one that is epistemically privileged, as long as the environment is computable. Eliezer and others were claiming that it does as well as or better than any computable agent, even if the environment is not computable, and this is what my counter-example disproves.
I don't understand how the bolded part follows. The best explanation by round BB(2^n) would be "All 1's except for the Busy Beaver numbers up to 2^n", right?