@Wei: p(n) will approach arbitrarily close to 0 as you increase n.
This doesn't seem right. A sequence that requires knowledge of BB(k) has O(2^-k) probability according to our Solomonoff Inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, then BB(k+1) will on average be about half as probable as BB(k).
In other words, P(a particular model of K-complexity k is correct) goes to 0 as k goes to infinity, but the conditional probability, P(a particular model of K-complexity k is correct | a sub-model of that particular model with K-complexity k-1 is correct), does not go to 0 as k goes to infinity.
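In rough numbers (a minimal sketch of my own, assuming each particular k-bit model gets the standard 2^-k Solomonoff-style prior weight):

```python
# Toy illustration (mine, not from the thread), assuming each k-bit model gets
# the standard Solomonoff-style prior weight of 2^-k: the unconditional prior
# of any particular k-bit model vanishes as k grows, but the ratio between a
# (k+1)-bit extension and its k-bit sub-model stays at about 1/2.

for k in (10, 20, 40, 80):
    p_k = 2.0 ** -k
    p_k_plus_1 = 2.0 ** -(k + 1)
    print(f"k={k:2d}  P(k-bit model) = {p_k:.2e}  ratio P(k+1)/P(k) = {p_k_plus_1 / p_k:.2f}")
```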
If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?
Suppose they run a BB evaluator for all of time. They would, indeed, have no way at any point of being certain that the current champion 100-bit program is the actual champion that produces BB(100). However, if they decide to anthropically reason that "for any time t, I am probably alive after time t, even though I have no direct eviden...
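A drastically scaled-down sketch of the "BB evaluator with a running champion" idea above (my own toy code, using 2-state, 2-symbol Turing machines rather than 100-bit programs):

```python
import itertools

# Toy sketch (mine, drastically scaled down): enumerate all 2-state, 2-symbol
# Turing machines, run each under a growing step budget, and track the
# "current champion", i.e. the halting machine that has run longest so far.
# At any finite budget we can never rule out that a still-running machine
# will eventually halt and dethrone the champion.

def run(machine, budget):
    """Return the step count if the machine halts within budget, else None."""
    tape, pos, state, steps = {}, 0, 0, 0
    while steps < budget:
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
        if nxt == 'H':                    # halting transition
            return steps
        state = nxt
    return None

def all_machines(n_states=2):
    targets = list(range(n_states)) + ['H']
    actions = [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in targets]
    keys = [(q, sym) for q in range(n_states) for sym in (0, 1)]
    for combo in itertools.product(actions, repeat=len(keys)):
        yield dict(zip(keys, combo))

champion = 0
for budget in (5, 20, 100):
    for machine in all_machines():
        steps = run(machine, budget)
        if steps is not None and steps > champion:
            champion = steps
    print(f"budget={budget:4d}: current champion halts after {champion} steps")
```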
One difference between optimization power and the folk notion of "intelligence": Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power is not based on social status nor raw might, but rather on the actions that the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don't occur to Einstein. However, we wouldn't label the Village Idiot as more intelligent
Count me in.
Chip, I don't know what you mean by "The AI Institute", but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.
The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you're correct, compared with meta-reasoning position B, is often a difficult one.
When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic "toxic". When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say "there is no safe harbor for a rationalist" or "such a person is biased, stupid, and beyond help; he would have gotten to ...
CERN on its LHC:
Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC... CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions
Things that CERN is doing right:
Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."
Moral: if you're a practicing scientist, don't admit the possibility of risk, or you will be punished. (No, this isn't something I've drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)
@Vladimir: We can't bother to investigate every crazy doomsday scenario suggested
This is a strawman; nobody is suggesting investigating "every crazy doomsday scenario suggested". A strangelet catastrophe is qualitatively possible according to accepted physical theories, and was proposed by a practicing physicist; it's only after doing quantitative calculations that it can be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.
if you manage to get yourself stuck in an advanced rut, dutifully playing Devil's Advocate won't get you out of it.
It's not a binary either/or proposition, but a spectrum; you can be in a sufficiently shallow rut that a mechanical rule of "when reasoning, search for evidence against the proposition you're currently leaning towards" might rescue you in a situation where you would otherwise fail to come to the correct conclusion. That said, yes, it would indeed be preferable to conduct the search because you actually have "true doubt" and...
"Oh, look, Eliezer is overconfident because he believes in many-worlds."
I can agree that this is absolutely nonsensical reasoning. The correct reason to believe Eliezer is overconfident is that he's a human being, and the prior probability that any given human is overconfident is extremely high.
One might propose heuristics to determine whether person X is more or less overconfident, but "X disagrees strongly with me personally on this controversial issue, therefore he is overconfident" (or stupid or ignorant) is the exact type of flawed reasoning that comes from self-serving biases.
Some physicists speak of "elegance" rather than "simplicity". This seems to me a bad idea; your judgments of elegance are going to be marred by evolved aesthetic criteria that exist only in your head, rather than in the exterior world, and should only be trusted inasmuch as they point towards smaller, rather than larger, Kolmogorov complexity.
Example:
In theory A, the ratio of tiny dimension #1 to tiny dimension #2 is finely-tuned to support life.
In theory B, the ratio of the mass of the electron to the mass of the neutrino is finely-tuned to support life.
An "elegance" advocate might favor A over B, whereas a "simplicity" advocate might be neutral between them.
can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?
Po'mi runs a trillion experiments, each of which has a one-trillionth 4D-thickness of saying B but is otherwise A. In his "mainline probability", he sees all trillion experiments coming up A. (If he ran a sextillion experiments he'd see about 1 come up B.)
Presumably an external four-dimensional observer sees it differently: He sees only one-trillionth of Po'mi coming up a...
It seems worthwhile to also keep in mind other quantum mechanical degrees of freedom, such as spin
Only if the spin's basis turns out to be relevant in the final ToEILEL (Theory of Everything Including Laboratory Experimental Results) that gives a mechanical algorithm for what probabilities I anticipate.
In contrast, if someone had a demonstrably-correct theory that could tell you the macroscopic position of everything I see, but doesn't tell you the spin or (directly) the spatial or angular momentum, then the QM Measurement Problem would still be marked ...
Robin: is there a paper somewhere that elaborates this argument from mixed-state ambiguity?
Scott should add his own recommendations, but I would say here is a good starting introduction.
To my mind, the fact that two different situations of uncertainty over true states lead to the same physical predictions isn't obviously a reason to reject that type of view regarding what is real.
The anti-MWI position here is that MWI produces different predictions depending on what basis is arbitrarily picked by the predictor; and that the various MWI efforts to "pat...
In many of your prior posts where you bring up MWI, your interpretation doesn't fundamentally matter to the overall point you're trying to make in that post; that is, your overall conclusion for that post held or failed regardless of which interpretation is correct, possibly to a greater degree than you tend to realize.
For example: "We used a true randomness source - a quantum device." The philosophers' point could equally have been made by choosing the first 2^N digits of pi and finding they correspond by chance to someone's GLUT.
the colony is in the future light cone of your current self, but no future version of you is in its future light cone.
Right, and if anyone's still confused how this is possible: wikipedia and a longer explanation
* That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
not yet understood? Is your position that there's a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, "of course, that's what the answer is! We should have realized it all along!"
Question for all: How do you apply Occam's Razor to cases where there are two competing hypo...
@spindizzy:
No, this hasn't been "argued out", and even if it had been in the past, the "single best answer" would differ from person to person and from year to year. I would suggest starting a thread on SL4 or on SIAI's Singularity Discussion list.
Doug S., we get the point: nothing that Ian could say would pry you away from your version of reductionism, so there's no need to make any more posts with Fully General Counterarguments. "I defy the data" is a position, but does not serve as an explanation of why you hold that position, or why other people should hold that position as well.
I would agree with reductionism, if phrased as follows:
When entity A can be explained in terms of another entity B, but not vice-versa, it makes sense to say that entity A "has less existence" compared
if the vast majority of the measure of possible worlds given Bob's knowledge is in worlds where he loses, he's objectively wrong.
That's a self-consistent system; it just seems to me more useful and intuitive to say that:
"P" is true => P
"Bob believes P" is true => Bob believes P
but not
"Bob's belief in P" is true => ...er, what exactly?
Also, I frequently need to attach probabilities to facts, where probability ranges over [0,1] (or, in Eliezer's formulation, (-inf, inf)). But it's rare for me to have any reason to att...
Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong".
If you disagree, consider this: suppose he wins the lottery after all by chance; can you still claim the next day that his belief was objectively wrong?
Most of the proposed models in this thread seem reasonable.
I would write down all the odd things people say about free will, pick the simplest model that explained 90% of it, and then see if I could make novel and accurate predictions based on the model. But, I'm too lazy to do that. So I'll just guess.
Evolution hardwired our cognition to contain two mutually-exclusive categories, call them "actions" and "events."
"Actions" match: [rational, has no understandable prior cause]. "Rational" means they are often influence...
Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa
There is nothing in the mind that is not first in the census.
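A toy numerical version of the inference in the quoted line (the numbers are mine, purely illustrative):

```python
# Toy numbers (mine, purely illustrative): if green eyes and black hair are
# positively correlated, each trait can be probabilistically inferred from the
# other using the same joint distribution.

p_green = 0.10                     # P(green eyes)
p_black = 0.30                     # P(black hair)
p_both = 0.06                      # P(green eyes AND black hair) > 0.10 * 0.30

p_green_given_black = p_both / p_black   # 0.20, up from the 0.10 base rate
p_black_given_green = p_both / p_green   # 0.60, up from the 0.30 base rate
print(p_green_given_black, p_black_given_green)
```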
Another solid essay.
To form accurate beliefs about something, you really do have to observe it.
How do we model the fact that I know the Universe was in a specific low-entropy state (spacetime was flat) shortly after the Big Bang? It's a small region in the phase space, but I don't have enough bits of observations to directly pick that region out of all the points in phase space.
Frank, tcpkac:
What do you think of, say, philosophers' endless arguments of what the word "knowledge" really means? This seems to me one example where many philosophers don't seem to understand that the word doesn't have any intrinsic meaning apart from how people define it.
If Bob sees a projection of an oasis and thinks there's an oasis, but there's a real oasis behind the projection that creates a projection of itself as a Darwinian self-defense mechanism, does Bob "know" there's an oasis? Presumably Eliezer would ask, "for what ...
What's really at stake is an atheist's claim of substantial difference and superiority relative to religion
Often semantics matter because laws and contracts are written in words. When "Congress shall make no law respecting an establishment of religion", it's sometimes advantageous to claim that you're not a religion, or that your enemy is a religion. If churches get preferential tax treatment, it may be advantageous to claim that you're a church.
@Peter: As a human, I can't introspect and look at my utility function, so I don't really know if it's bounded or not. If I'm not absolutely certain that it's bounded, should I just assume it's unbounded, since there is much more at stake in this case?
This has been gnawing at my brain for a while. If the useful Universe is temporally unbounded, then utility arguably goes to aleph-null. Some MWI-type models and Ultimate-ensemble models arguably give you an uncountable number of copies of yourself; does that count as greater than aleph-null or less than ...
other way around, I mean.
It's a pity I consider my current utility function bounded; the statement "there's no amount of fun F such that there isn't a greater amount of fun G such that I would prefer a 100% chance of having fun F, to having a 50% chance of having fun G and a 50% chance of having no fun" would have been a catchy slogan for my next party.
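For concreteness, here is what a bounded utility function does to that kind of 50/50 gamble (a toy sketch of my own, using u(x) = 1 - e^-x, which is capped at 1):

```python
import math

# Toy sketch (mine): with a bounded utility function u(x) = 1 - e^-x (capped
# at 1), once a sure amount of fun F has utility above 0.5, no amount of fun G,
# however large, makes "50% chance of G, 50% chance of no fun" preferable.

def u(x):
    return 1.0 - math.exp(-x)

F = 1.0                                   # u(F) ~= 0.632 > 0.5
for G in (2.0, 10.0, 1e6):
    expected_gamble = 0.5 * u(G) + 0.5 * u(0.0)
    print(f"G = {G:>9}: EU(gamble) = {expected_gamble:.3f}   vs   u(F) = {u(F):.3f}")
```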
The difficulty with analyzing the "insightfulness quotient" of comedians like Scott Adams or Jon Stewart is that there's no reliable way of differentiating "things he sincerely believes" versus "things he means seriously at some level, but are not literally true" versus "things that are meant to be just throwaway jokes". If you're sympathetic to Scott Adams, you're likely to interpret true statements or true predictions as "hits", but classify false predictions as "just jokes", and overestimate ho...
Another solid article.
One point of confusion for me: You talk about axiomatic faith in logic (which is necessary in some form to bootstrap your introspective thinking process), but then abruptly switch to talking about "the last ten million times that first-order arithmetic has proven consistent", a statement of observed prior evidence about learned arithmetic. Both points are valid, but it seemed a non sequitur to me to jump from one to the other.
Oh well, off to cast half a vote in the Michigan Primary.
Rolf, surely the simplicity of MWI relative to objective collapse is strong evidence that when we have a better technical understanding of decoherence it will be compatible with MWI?
What do you mean by "compatible"? Do you mean, the observed macroscopic world will emerge as "the most likely result" from MWI, instead of some other macroscopic world where objects decohere on alternate Thursdays, or whenever a proton passes by, or stay a homogeneous soup forever? That's a lot of algorithmic bits that I have to penalize MWI for, given that ...
Do you have any specific problem in mind? Have you read some of the post-2000 papers on how MWI works, like Everett and Structure?
From the paper:
Two sorts of objection can be raised against the decoherence approach to definiteness. The first is purely technical: will decoherence really lead to a preferred basis in physically realistic situations, and will that preferred basis be one in which macroscopic objects have at least approximate definiteness. Evaluating the progress made in establishing this would be beyond the scope of this paper, but there is goo...
I wish to hell that I could just not bring up quantum physics. But there's no real way to explain how reality can be a perfect mathematical object and still look random due to indexical uncertainty, without bringing up quantum physics.
MWI doesn't explain why the Universe has four large dimensions and three small neutrinos. In order to explain that by indexical uncertainty, you have to bring up other multiverse concepts anyway, and if you bring in "ultimate ensemble" theories, then MWI vs. non-MWI no longer matters for the rhetorical point you're ...
Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.
Not in standard quantum mechanics. Certain quantum gravity proposals (unsupported hypotheses, really), such as Loop Quantum Gravity, might say something similar to this, but that doesn't abolish every infinite set in the framework. The total number of "places where infinity can happen" in modern models has tended to increase, rather than decrease, over the centuries, as models have gotten more complex...
Is there a word for the similar case to the "just-so" story, but that has a spurious environmental explanation rather than a spurious genetic explanation? (For example, "boys are more aggressive than girls because parents give their boys more violent toys.") I see many more of the former than the latter in the media.
I don't find the polls consistent with the picture of libertarian voters vs. colluding statist politicians. Only a significant majority (not an overwhelming majority) seems to support lower taxes, and when the question is phrased as costs vs. benefits (rather than "taxes in a vacuum") that majority tends to disappear.
the overreaction was foreseeable in advance, not just in hindsight
To paraphrase what my brain is hearing from you, Eliezer:
In 2001, you would have predicted, "In 2007, I will believe that the U.S. overreacted between 2001 and 2007."
In 2007, your prediction is true: you personally believe the U.S. overreacted.
Not very impressive. (I know lots of people who can successfully predict that they will have the same political beliefs six years from now, no matter what intervening evidence occurs between now and then! It's not something that you should ta...
Rolf: It seems to me that you are trying to assert that it is normative for agents to behave in a certain manner because the agents you are addressing are presumably non-normative.
On a semantic level, I agree; I actually avoided using the word "normative" in my comment because you had, earlier, correctly criticized my use of the word on my blog.
I try to consistently consider myself as part of an ensemble of flawed humans. (It's not easy, and I often fail.) To be more rigorous, I would want to condition my reasoning on the fact that I'm one of the...
And it is triple ultra forbidden to respond with violence.
I agree. However, here are my minority beliefs on the topic: unless you use Philosophical Majoritarianism, or some other framework where you consider yourself as part of an ensemble of fallible human beings, it's fairly hard to conclusively demonstrate the validity of this rule, or indeed to draw any accurate conclusions about what to do in these cases.
If I consider my memories and my current beliefs in the abstract, as not a priori less infallible than anyone else's, a "no exceptions to Freedo...
Cyan,
> I can't really process this query until you relate the words you've used to the math MacKay uses
On Page 1, MacKay posits x as a bit-sequence of an individual. Pick an individual at random. The question at hand is whether the Shannon entropy of x, for that individual, decreases at a rate of O(1) per generation.
This would be one way to quantify the information-theoretic adaptive complexity of an individual's DNA.
In contrast, if for some odd reason you wanted to measure the total information-theoretic adaptive complexity of the entire species, then ...
If you look at equation 3 of MacKay's paper, you'll see that he defines information in terms of frequency of an allele in a population
I apologize, my statement was ambiguous. The topic of Eliezer's post is how much information is in an individual organism's genome, since that's what limits the complexity of a single organism, which is what I'm talking about.
Equation 3 addresses the holistic information of the species, which I find irrelevant to the topic at hand. Maybe Alice, Bob, and Charlie's DNA could together have up to 75 MB of data in some holographi...
MacKay's paper talks about gaining bits as in bits on a hard drive
I don't think MacKay's paper even has a coherent concept of information at all. As far as I can tell, in MacKay's model, if I give you a completely randomized 100 Mb hard drive, then I've just given you 50 Mb of useful information, because half of the bits are correct (we just don't know which ones.) This is not a useful model.
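To make the complaint concrete, here is a toy comparison (my own sketch, treating "information about a target string" as mutual information; the hard-drive framing is from the comment above):

```python
import math
import random
from collections import Counter

# Toy sketch (mine): a uniformly random bit-string matches about half the bits
# of any fixed target, yet carries zero information about it. Counting
# "correct" bits and measuring mutual information give very different answers.

random.seed(0)
N = 100_000
target = [random.randint(0, 1) for _ in range(N)]
noise = [random.randint(0, 1) for _ in range(N)]      # independent of target

matches = sum(t == n for t, n in zip(target, noise))
print(f"matching bits: {matches} of {N} (~50%)")

# Empirical mutual information I(target; noise), which should be ~0 bits.
joint = Counter(zip(target, noise))
p_t = Counter(target)
p_n = Counter(noise)
mi = 0.0
for (t, n), count in joint.items():
    p_joint = count / N
    mi += p_joint * math.log2(p_joint / ((p_t[t] / N) * (p_n[n] / N)))
print(f"estimated mutual information: {mi:.5f} bits per position")
```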
You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work.
To be more specific, you would have to alter it in such a way that it accepted Brandon Carter's Doomsday Argument.
Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.
You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work. Solomonoff Induction doesn't let you consider just "generalized scenarios"; you have to calculate each one in turn, and eventually one of the...
Tiiba, keep in mind that to an altruist with a bounded utility function, or with any other of Peter's caveats, it may not "make perfect sense" to hand over the five dollars. So the problem is solvable in a number of ways; the challenge is to come up with a solution that (1) isn't a hack and (2) doesn't create more problems than it solves.
Anyway, like most people, I'm not a complete utilitarian altruist, even at a philosophical level. Example: if an AI complained that you take up too much space and are mopey, and offered to kill you and replace you...
Re: "However clever your algorithm, at that level, something's bound to confuse it. Gimme FAI with checks and balances every time."
I agree that a mature Friendly Artificial Intelligence should defer to something like humanity's volition.
However, before it can figure out what humanity's volition is and how to accomplish it, an FAI first needs to:
If ...
Here's my data point:
Like Michael Vassar, I see the rationality of cryonics, but I'm not signed up myself. In my case, I currently use altruism + inertia (laziness) + fear of looking foolish to non-transhumanists + "yuck factor" to override my fear of death and allow me to avoid signing up for now. Altruism is a constant state of Judo.
My initial gut emotional reaction to reading that Eliezer signed up for cryonics was irritation that Eliezer asks for donations, and then turns around and spends money on this perk that most people, including me
One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they're being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:
1. reduce their baby-eating activities, and/or
2. send their own baby-eaters ship to blow up the star (since the fictional characters are probably barred by the author from reducing the dilemma by blowing up Huygens or sending a probe ship), so that the humans don't have to sacrifice themselves.