@Wei: p(n) will approach arbitrarily close to 0 as you increase n.
This doesn't seem right. A sequence that requires knowledge of BB(k) has probability O(2^-k) under our Solomonoff inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, the BB(k+1) model will on average be about half as probable as the BB(k) model.
In other words, P(a particular model of K-complexity k is correct) goes to 0 as k goes to infinity, but the conditional probability, P(a particular model of K-complexity k is correct | a sub-model of that particular model with K-complexity k-1 is correct), does not go to 0 as k goes to infinity.
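A toy numeric sketch of that distinction, assuming an idealized 2^-k prior over models of K-complexity k (not a real Solomonoff inductor; `prior` is a hypothetical stand-in): the absolute prior vanishes as k grows, but the ratio between adjacent complexity levels stays fixed at 1/2.

```python
def prior(k: int) -> float:
    """Hypothetical 2^-k prior weight on a model of K-complexity k."""
    return 2.0 ** -k

for k in (10, 50, 100):
    p_k = prior(k)
    # Ratio between adjacent levels: stand-in for the conditional
    # probability of the k-complexity model given its (k-1)-sub-model.
    ratio = prior(k + 1) / prior(k)
    print(f"k={k:3d}  prior={p_k:.3e}  prior(k+1)/prior(k)={ratio}")
```

The prior column shrinks toward 0 while the ratio column stays at 0.5, matching the claim above.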
If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?
Suppose they run a BB evaluator for all of time. They would, indeed, have no way at any point of being certain that the current champion 100-bit program is the actual champion that produces BB(100). However, if they decide to anthropically reason that "for any time t, I am probably alive after time t, even though I have no direct eviden...
One difference between optimization power and the folk notion of "intelligence": Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power is based not on social status or raw might, but on the actions that the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don't occur to Einstein. However, we wouldn't label the Village Idiot as more intelligent.
Count me in.
Chip, I don't know what you mean by "The AI Institute", but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.
The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you're correct, compared with meta-reasoning position B, is often a difficult one.
When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic "toxic". When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say "there is no safe harbor for a rationalist" or "such a person is biased, stupid, and beyond help; he would have gotten to ...
CERN on its LHC:
Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC... CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions
Things that CERN is doing right:
Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."
Moral: if you're a practicing scientist, don't admit the possibility of risk, or you will be punished. (No, this isn't something I've drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)
@Vladimir: We can't bother to investigate every crazy doomsday scenario suggested
This is a strawman; nobody is suggesting investigating "every crazy doomsday scenario suggested". A strangelet catastrophe is qualitatively possible according to accepted physical theories, and was proposed by a practicing physicist; only after doing the quantitative calculations can it be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.
if you manage to get yourself stuck in an advanced rut, dutifully playing Devil's Advocate won't get you out of it.
It's not a binary either/or proposition, but a spectrum; you can be in a sufficiently shallow rut that a mechanical rule of "when reasoning, search for evidence against the proposition you're currently leaning towards" might rescue you in a situation where you would otherwise fail to come to the correct conclusion. That said, yes, it would indeed be preferable to conduct the search because you actually have "true doubt" and...
"Oh, look, Eliezer is overconfident because he believes in many-worlds."
I can agree that this is absolutely nonsensical reasoning. The correct reason to believe Eliezer is overconfident is that he's a human being, and the prior probability that any given human is overconfident is extremely high.
One might propose heuristics to determine whether person X is more or less overconfident, but "X disagrees strongly with me personally on this controversial issue, therefore he is overconfident" (or stupid or ignorant) is the exact type of flawed reasoning that comes from self-serving biases.
Some physicists speak of "elegance" rather than "simplicity". This seems to me a bad idea; your judgments of elegance are going to be marred by evolved aesthetic criteria that exist only in your head, rather than in the exterior world, and should only be trusted inasmuch as they point towards smaller, rather than larger, Kolmogorov complexity.
Example:
In theory A, the ratio of tiny dimension #1 to tiny dimension #2 is finely-tuned to support life.
In theory B, the ratio of the mass of the electron to the mass of the neutrino is finely-tuned to support life.
An "elegance" advocate might favor A over B, whereas a "simplicity" advocate might be neutral between them.
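One crude, mechanical way to cash out the "simplicity" side (an illustrative sketch, not a real measure: Kolmogorov complexity is uncomputable, but compressed length gives a rough upper bound; the theory strings here are hypothetical placeholders):

```python
import zlib

def description_cost(theory: str) -> int:
    """Length in bytes of the zlib-compressed description.

    A crude upper-bound proxy for Kolmogorov complexity; it prefers
    shorter descriptions regardless of how "elegant" they feel.
    """
    return len(zlib.compress(theory.encode("utf-8")))

theory_a = "ratio of tiny dimension #1 to tiny dimension #2 is fine-tuned"
theory_b = "ratio of electron mass to neutrino mass is fine-tuned"
print(description_cost(theory_a), description_cost(theory_b))
```

A proxy like this is blind to aesthetic criteria by construction, which is the point: it can only be moved by shorter descriptions, not by which fine-tuning story sounds prettier.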
can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?
Po'mi runs a trillion experiments, each of which has a one-trillionth 4D-thickness of saying B but is otherwise A. In his "mainline probability", he sees all trillion experiments coming up A. (If he ran a sextillion experiments, he'd see about one come up B.)
Presumably an external four-dimensional observer sees it differently: He sees only one-trillionth of Po'mi coming up a...
It seems worthwhile to also keep in mind other quantum mechanical degrees of freedom, such as spin
Only if the spin's basis turns out to be relevant in the final ToEILEL (Theory of Everything Including Laboratory Experimental Results) that gives a mechanical algorithm for what probabilities I anticipate.
In contrast, if someone had a demonstrably-correct theory that could tell you the macroscopic position of everything I see, but doesn't tell you the spin or (directly) the spatial or angular momentum, then the QM Measurement Problem would still be marked "...
Robin: is there a paper somewhere that elaborates this argument from mixed-state ambiguity?
Scott should add his own recommendations, but I would say here is a good starting introduction.
To my mind, the fact that two different situations of uncertainty over true states lead to the same physical predictions isn't obviously a reason to reject that type of view regarding what is real.
The anti-MWI position here is that MWI produces different predictions depending on what basis is arbitrarily picked by the predictor; and that the various MWI efforts to "pat...
In many of your prior posts where you bring up MWI, your interpretation doesn't fundamentally matter to the overall point you're trying to make in that post; that is, your overall conclusion for that post holds or fails regardless of which interpretation is correct, possibly to a greater degree than you realize.
For example: "We used a true randomness source - a quantum device." The philosophers' point could equally have been made by choosing the first 2^N digits of pi and finding they correspond by chance to someone's GLUT.
the colony is in the future light cone of your current self, but no future version of you is in its future light cone.
Right, and if anyone's still confused how this is possible: wikipedia and a longer explanation
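A toy Minkowski check of how this asymmetry can happen (a minimal 1+1-dimensional sketch with c = 1 and made-up coordinates; `in_future_light_cone` and the event names are hypothetical):

```python
def in_future_light_cone(a: tuple, b: tuple) -> bool:
    """True if event b = (t, x) lies in or on the future light cone
    of event a = (t, x), in units where c = 1."""
    dt = b[0] - a[0]
    dx = abs(b[1] - a[1])
    return dt > 0 and dt >= dx

you_now   = (0.0, 0.0)
colony    = (5.0, 3.0)  # far away, but reachable from you_now at sub-light speed
you_later = (6.0, 0.0)  # a future version of you, staying home

print(in_future_light_cone(you_now, colony))    # → True: colony is in your future cone
print(in_future_light_cone(colony, you_later))  # → False: no future you is in its cone
```

The colony event is 5 time units and 3 space units away from you now, so a signal can reach it; but only 1 time unit before "you later", so nothing from the colony can reach any future you.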
* That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
not yet understood? Is your position that there's a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, "of course, that's what the answer is! We should have realized it all along!"
Question for all: How do you apply Occam's Razor to cases where there are two competing hypo...
@spindizzy:
No, this hasn't been "argued out", and even if it had been in the past, the "single best answer" would differ from person to person and from year to year. I would suggest starting a thread on SL4 or on SIAI's Singularity Discussion list.
One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they're being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:
1. reduce their baby-eating activities, and/or
2. send their own baby-eaters ship to blow up the star (since the fictional characters are probably barred by the author from reducing the dilemma by blowing up Huygens or sending a probe ship), so that the humans don't have to sacrifice themselves.