One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and warn them that they are about to be invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us by agreeing to:

1. reduce their baby-eating activities, and/or

2. send a ship of their own to blow up the star (since the fictional characters are probably barred by the author from defusing the dilemma by blowing up Huygens or sending a probe ship), so that the humans don't have to sacrifice themselves.

@Wei: p(n) will approach arbitrarily close to 0 as you increase n.

This doesn't seem right. A sequence that requires knowledge of BB(k) has probability O(2^-k) according to our Solomonoff Inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, then the BB(k+1) model will on average be about half as probable as the BB(k) model.

In other words, P(a particular model of K-complexity k is correct) goes to 0 as k goes to infinity, but the conditional probability, P(a particular model of K-complexity k is correct | a sub-model of that particular model with K-complexity k-1 is correct), does not go to 0 as k goes to infinity.
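
To make the ratio explicit (a back-of-the-envelope sketch, assuming the standard Solomonoff weight of roughly 2^-k for a model whose minimal description is k bits, and writing M_k for a BB(k)-based model):

$$
\frac{P(M_{k+1})}{P(M_k)} \approx \frac{2^{-(k+1)}}{2^{-k}} = \frac{1}{2}
$$

Each added bit of description halves the prior, so the absolute probabilities do vanish as k grows, but the conditional probability of each one-bit extension hovers around 1/2 rather than tending to 0.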

If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?

Suppose they run a BB evaluator for all of time. They would, indeed, have no way at any point of being certain that the current champion 100-bit program is the actual champion that produces BB(100). However, if they decide to anthropically reason that "for any time t, I am probably alive after time t, even though I have no direct evidence one way or the other once t becomes too large", then they will believe (with arbitrarily high probability) that the current champion program is the actual champion program, and an arbitrarily high percentage of them will be correct in their belief.
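
For concreteness, here is a minimal, runnable sketch of such a "current champion" evaluator, scaled down to 2-state, 2-symbol Turing machines (the function names and the toy scale are my own illustrative choices, not anything from the original discussion). It can report the best halter found so far, but no finite step budget can certify that some still-running machine won't eventually overtake it:

```python
# A minimal sketch of a "current champion" Busy Beaver search, scaled down
# to 2-state, 2-symbol Turing machines. The names (run_tm, current_champion)
# are illustrative, not from the discussion above.
from itertools import product

def run_tm(table, budget):
    """Simulate a machine on a blank tape.
    Return the step count if it halts within budget, else None."""
    tape, pos, state = {}, 0, 0
    for step in range(1, budget + 1):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt is None:          # a halting transition (counted as a step)
            return step
        state = nxt
    return None                  # still running; may or may not halt later

def current_champion(n_states=2, budget=100):
    """Exhaustively enumerate every n-state, 2-symbol machine and return
    the longest halting run observed within the step budget."""
    best = 0
    keys = [(s, r) for s in range(n_states) for r in (0, 1)]
    # Each transition writes 0/1, moves left/right, then enters a state or halts.
    actions = list(product((0, 1), (-1, 1), list(range(n_states)) + [None]))
    for choice in product(actions, repeat=len(keys)):
        steps = run_tm(dict(zip(keys, choice)), budget)
        if steps is not None and steps > best:
            best = steps
    return best

print(current_champion())  # prints 6, the known maximum S(2) for 2-state machines
```

At this toy scale the champion is provably final; the point above is that at 100 bits no finite amount of waiting settles the question directly, and only the anthropic argument licenses confidence.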

  1. One difference between optimization power and the folk notion of "intelligence": suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein; this power rests not on social status or raw might, but on the actions the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don't occur to Einstein. Yet we wouldn't label the Village Idiot as more intelligent than Einstein.

  2. Is the Principle of Least Action infinitely "intelligent" by your definition? The PLA consistently picks a physical solution to the n-body problem that surprises me in the same way Kasparov's brilliant moves surprise me: I can't come up with the exact path the n objects will take, but after I see the path that the PLA chose, I find that (for each object) the PLA's path has a smaller action integral than the best path I could have come up with.

  3. An AI whose only goal is to make sure such-and-such coin will not, the next time it's flipped, turn up heads, can apply only (slightly less than) 1 bit of optimization pressure by your definition (see the worked calculation below), even if it vaporizes the coin and then builds a Dyson sphere to provide infrastructure and resources for its ongoing efforts to probe the Universe, ensuring that it wasn't tricked and that the coin really was vaporized as it appeared to be.
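
To spell out where the "1 bit" in point 3 comes from (a sketch assuming the post's definition of optimization power as the negative log2 of the fraction of outcomes at least as preferred as the one achieved):

$$
\mathrm{OP} = -\log_2 \Pr(\text{outcome at least as preferred}) = -\log_2 \tfrac{1}{2} = 1 \text{ bit}
$$

However hard the AI works, the coin's outcome space contains only heads and not-heads, so steering it to not-heads can never be credited with more than (slightly less than, given residual uncertainty) 1 bit.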

Chip, I don't know what you mean by "The AI Institute", but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.

The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you're correct, compared with meta-reasoning position B, is often a difficult one.

When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic "toxic". When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say "there is no safe harbor for a rationalist" or "such a person is biased, stupid, and beyond help; he would have gotten to the wrong conclusion anyway, no matter what his meta-reasoning position was. The idiot reasoner, rather than my beautiful heuristic, has to be discarded." In the absence of hard data, consensus seems difficult; the problem is exacerbated when a novel meta-reasoning argument is brought up in the middle of a debate on a separate disagreement, in which case the opposing sides have even more temptation to "dig in" to separate meta-reasoning positions.

CERN on its LHC:

Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC... CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions.

Things that CERN is doing right:

  1. The safety reviews were done by people who do not work at the LHC.
  2. There were multiple reviews by independent teams.
  3. There is a group continuing to monitor the situation.

Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."

Moral: if you're a practicing scientist, don't admit the possibility of risk, or you will be punished. (No, this isn't something I've drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)

@Vladimir: We can't bother to investigate every crazy doomsday scenario suggested

This is a strawman; nobody is suggesting that we investigate "every crazy doomsday scenario suggested". A strangelet catastrophe is qualitatively possible according to accepted physical theories and was proposed by a practicing physicist; only after doing the quantitative calculations can it be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.
