My previous post resulted in 0 points, despite being very thoroughly thought-through. A comment on it, consisting of the four words "I know nothing! Nothing!" resulted in 4 points. If someone could please explain this, I'd be a grateful Goo.

Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to gauge, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many).

The instrumental value of most humans is enormously higher than the intrinsic value of the same persons - given that they do sufficiently good things.

Hereinafter, "to Know x" means "to be objectively right about x, and to be subjectively 100 percent certain of x, and to have let the former 'completely scientifically cause' the latter (i.e. to have used the former to create the latter in a completely scientific manner), such that it cannot, even theoretically, be the case that something other than the former coincidentally and crucially misleadingly caused the latter - and to Know that all these criteria are met".

Anything that I merely know ("know" being defined as people usually seem to implicitly define it when they use it), as opposed to Know, may turn out to be wrong (for all that I know). It seems that the more our scientists know, the more they realize that they don't know. Perhaps this "rule" holds forever, for every advancing civilisation (with negligible exceptions)? I think there could not even theoretically be any Knowing in this (or any) world. I conjecture that, much as it's theoretically impossible to assign a unique integer to every unique real, it's theoretically impossible for any being to Know anything at all, such as what box(es) a human being will take.
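
For what it's worth, the cardinality half of that analogy can be made precise. The following is just the standard diagonal argument (Cantor's), spelled out; the notation is mine and not part of the original comment:

```latex
% Sketch: no map f : R -> Z can assign a unique integer to every unique real,
% because the reals are uncountable (Cantor's diagonal argument).
\documentclass{article}
\usepackage{amssymb}
\begin{document}
\begin{itemize}
  \item Suppose $f:\mathbb{R}\to\mathbb{Z}$ were injective (a unique integer for
        every unique real). Then $\mathbb{R}$ would be countable, since
        $\mathbb{Z}$ is.
  \item But given any purported enumeration $x_1, x_2, x_3, \dots$ of the reals
        in $(0,1)$, construct $d \in (0,1)$ whose $n$-th decimal digit differs
        from the $n$-th decimal digit of $x_n$ (using only the digits $1$--$8$,
        to avoid the $0.0999\ldots = 0.1000\ldots$ ambiguity).
  \item Then $d \neq x_n$ for every $n$, so the enumeration misses $d$. Hence
        $\mathbb{R}$ is uncountable and no such $f$ exists.
\end{itemize}
\end{document}
```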

Nick Bostrom's Simulation Argument seems to show that any conceivable being that could theoretically exist might very well (for all that being knows) be living in a computer simulation controlled by a mightier being than itself. This universal uncertainty means that no being could Know that it has perfect powers of prediction over anything whatsoever. Making a "correct prediction" partly due to luck isn't having perfect powers of prediction, and a being who doesn't Know what it is doing cannot predict anything correctly without at least some luck (because without luck, Murphy's law holds). This means that no being could have perfect powers of prediction.

Now let "Omeg" be defined as the closest (in terms of knowledge of the world) to an all Knowing being (Omega) that could theoretically exist. Let A be defined as the part(s) of an Omeg that are fully known by the Omeg itself, and let B be defined as whatever else there may be in an Omeg. I suggest that in no Omeg of at least the size of the Milky Way can the B part be too small to secretly contain mechanisms that could be stealthily keeping the Omeg arbitrarily ignorant by having it falsely perceive arbitrarily much of its own wildest thought experiments (or whatever other unready thoughts it sometimes produces) to be knowledge (or even Knowledge). I therefore suggest that B, in any Omeg, could be keeping its Omeg under the impression that the A part is sufficient for correct prediction of, say, my choice of boxes, while in reality it isn't. Conclusion: no theoretically possible being could perfectly predict any other being's choice of boxes.

You may doubt this, but you can't exclude the possibility. That means you also can't exclude the following possibility: whatever implications Newcomb's problem seems to produce that wouldn't occur to people if Omega were replaced by, say, a human psychologist, occur to people only because the assumption that there could be such a thing as a perfect predictor of anything is too unreasonable to be worthy of acceptance. Its crucial underpinnings don't make sense (just as it doesn't make sense to assume that there is an integer for every real), and it can therefore be expected to produce arbitrarily misleading conclusions (about decision theory, in this case) - much as many seemingly reasonable but heavily biased extreme thought experiments designed to smear utilitarianism scare even very skilled thinkers into drawing false conclusions about utilitarianism.

Or suppose someone goes to space, experiences weightlessness, thinks "hey, why doesn't my spaceship seem to exert any gravity on me?" and draws the conclusion: "it's not gravity that keeps people down on Earth; it's just that the Earth sucks". Just as that conclusion would be flawed, the conclusion that Newcomb's problem shows we should replace Causal Decision Theory with Evidential Decision Theory is flawed.

So, to be as faithful to the original Newcomb thought experiment as is possible within reason, I'd interpret it in the way that just barely rids its premises of theoretical impossibility: I'd take Omega to mean Omeg, as defined above. An Omeg is fallible, but probably most of the time better than me at predicting my behavior, so I should definitely one-box, for the same reason that I should one-box if the predictor were a mere human being who just knew me very well. Risking a million dollars merely to possibly gain another 1,000 dollars just isn't worth it. Causal Decision Theory leads me to this conclusion just fine.
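
To make the "isn't worth it" arithmetic explicit, here is a minimal sketch of the expected-value comparison. The payoffs are the usual Newcomb amounts; the accuracy parameter p and the function name are my own illustration, not part of the original problem statement:

```python
# Expected value of one-boxing vs. two-boxing against a fallible predictor
# ("Omeg") that guesses my choice correctly with probability p.
# Box B holds $1,000,000 iff the predictor expected one-boxing;
# box A always holds $1,000.

def expected_value(p, one_box):
    big, small = 1_000_000, 1_000
    if one_box:
        # I get the big prize only if the predictor correctly foresaw one-boxing.
        return p * big
    # Two-boxing: I always get the small prize, plus the big one only if the
    # predictor wrongly expected me to one-box.
    return small + (1 - p) * big

for p in (0.5, 0.51, 0.6, 0.9, 0.99):
    print(f"p={p:.2f}  one-box EV={expected_value(p, True):>11,.0f}  "
          f"two-box EV={expected_value(p, False):>11,.0f}")
```

With these payoffs, one-boxing has the higher expected value whenever p exceeds about 0.5005, i.e. whenever the Omeg is even marginally better than a coin flip at predicting me.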

*) You might think that B, by controlling A, would be "the real" (or "another, smarter") Omeg. But neither B nor A can rationally completely exclude the possibility that the other one of them is in secret control of both of them. So neither of them can have "perfect powers of prediction" over any being whatsoever.


Is it plausible that evolution would gradually push those 70% down to 30% or even lower, given enough time? There may not yet have been enough time for strong enough group selection to create such an effect, but sooner or later it should happen, shouldn't it? I'm thinking that a species with such a great degree of selflessness would be more likely to survive than present humanity is, because a larger percentage of them would cooperate on existential risk reduction than is the case in present humanity. Yet 10-30% is still not 0%, so even at 10% there would still be enough selfishness to make sure they wouldn't end up refusing each other's gifts until they all starve to death or something. (A toy simulation of this individual-versus-group trade-off is sketched below.)

Can group selection of genes for different psychological constitution in humans already explain why player 1 takes only 70% and not, say, at least 90%, on average, in the game you describe?

What do chimps do? Does a chimp player 1 take more or less than 70%?
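
Here, for what it's worth, is a toy simulation of the trade-off asked about above: within-group selection rewards individuals who take more, while group selection penalises groups whose members take more (greedier groups cooperate worse and go extinct more often). Everything in it - the parameters, the survival rule, the starting value near 70% - is invented purely for illustration, not taken from the thread, and responder behaviour is ignored (a proposer's within-group fitness is simply its take). The point is only that whether the average take drifts downward depends on how strong the group-level pressure is relative to the individual-level one:

```python
# Toy group-selection model of the "take" trait in an ultimatum-style game.
# Within groups, agents reproduce in proportion to how much they take;
# between groups, greedier groups are more likely to go extinct.
import random

random.seed(0)

N_GROUPS, GROUP_SIZE, GENERATIONS = 50, 20, 300
MUTATION = 0.02      # std. dev. of mutation added to the "take" trait
GROUP_RISK = 0.5     # how strongly a greedy group risks extinction

def clamp(x):
    return min(0.99, max(0.01, x))

def mean(xs):
    return sum(xs) / len(xs)

# Each agent is represented only by its "take" fraction; start near 70%.
groups = [[clamp(random.gauss(0.7, 0.05)) for _ in range(GROUP_SIZE)]
          for _ in range(N_GROUPS)]

for _ in range(GENERATIONS):
    # Within-group selection: reproduction weighted by each agent's own take.
    new_groups = []
    for g in groups:
        children = random.choices(g, weights=g, k=GROUP_SIZE)
        new_groups.append([clamp(c + random.gauss(0, MUTATION)) for c in children])

    # Group selection: extinction probability grows with a group's average take.
    survivors = [g for g in new_groups if random.random() > GROUP_RISK * mean(g)]
    if not survivors:          # avoid total extinction in the toy model
        survivors = new_groups

    # All group slots are refilled with copies of randomly chosen survivors.
    groups = [list(random.choice(survivors)) for _ in range(N_GROUPS)]

print(f"average take after {GENERATIONS} generations: "
      f"{mean([t for g in groups for t in g]):.2f}")
```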


Any conclusions about how things work in the real world that are drawn from Newcomb's problem crucially rest on the assumption that an all-knowing being might, at least theoretically, exist as a logically consistent concept. If this crucial assumption is flawed, then any conclusions drawn from Newcomb's problem are likely flawed too.

To be all-knowing, you'd have to know everything about everything, including everything about yourself. To contain all that knowledge, you'd have to be larger than it - otherwise there would be no matter or energy left to perform the activity of knowing it all. So, in order to be all-knowing, you'd have to be larger than yourself, which is theoretically impossible. Newcomb's problem thus crucially rests on a faulty assumption: that something theoretically impossible might be theoretically possible.

So, conclusions drawn from Newcomb's problem are no more valid than conclusions drawn from any other fairy tale. They are no more valid than, for example, the reasoning: "if an omnipotent and omniscient God existed who would eventually reward all good humans with eternal bliss, then all good humans would eventually be rewarded with eternal bliss -> therefore all good humans will eventually be rewarded with eternal bliss, whether or not the existence of an omnipotent and omniscient God is even theoretically possible".

One might think that Newcomb's problem could be altered: instead of an "all-knowing being", it could assume the existence of a non-all-knowing being that nevertheless knows what you will choose. But if the MWI is correct, or if the universe is otherwise infinitely large, not all of the infinitely many identical copies of you could be controlled by any such being. If they all were, that being would have to be all-knowing, which, as shown, is not possible.