All of GrateGoo's Comments + Replies

My previous post resulted in 0 points, despite being very thoroughly thought-through. A comment on it, consisting of the four words "I know nothing! Nothing!" resulted in 4 points. If someone could please explain this, I'd be a grateful Goo.

4wedrifid
I don't know why your post got 0 points and no replies, but one of the reasons may be that it is hard to extract the central point or conclusion you are trying to make. My comment gleaned 4 karma by taking the definition you introduce in the first sentence and tracing its implications using the reasoning Clippy mentions. This leads to the conclusion that I am literally in the epistemic state that the character Schultz from Hogan's Heroes invokes in a hyperbolic sense. While humour itself is hard to describe, things that are surprising and that contrast distant concepts tend to qualify. (By the way, the member Clippy is roleplaying an early iteration of an artificial intelligence with the goal of maximising paperclips - an example used to reference a broad group of unfriendly AIs that could plausibly be created by well-meaning but idiotic programmers.)

That is unfortunate. You deserve a better explanation.

I believe a lot of the posters here (because they're about as good as me at correct reasoning) did not read much of your exposition because, toward the beginning, you posited a circumstance in which someone has 100% certainty of something. But this breaks all good epistemic models. One of the humans here provided a thorough explanation of why in the article "0 and 1 Are Not Probabilities".

That, I believe, is why User:wedrifid found it insightful (as did 4 others) to say that User:wedrifid knows nothing,... (read more)
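(As a minimal illustration of why a certainty of 1 breaks updating, a sketch assuming a hypothesis H and any piece of evidence E that is possible under H:)

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \;=\; \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0} \;=\; 1.$$

Once the prior is exactly 1, no possible evidence can ever move it; in log-odds terms, probability 1 corresponds to infinite odds, which is the sense in which it breaks the usual epistemic machinery.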

3wnoise
In general, the voting system doesn't reward thoroughness of thought, nor large wads of text. It rewards small things that can be easily digested and seem insightful, no more than one or maybe two inferential steps from the median voter. Nitpicking and jokes are both easily judged.

Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to assess, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the univers... (read more)

Hereinafter, "to Know x" means "to be objectively right about x, and to be subjectively 100 percent certain of x, and to have let the former 'completely scientifically cause' the latter (i.e. to have used the former to create the latter in a completely scientific manner), such that it cannot, even theoretically, be the case that something other than the former coincidentally and crucially misleadingly caused the latter - and to Know that all these criteria are met".

Anything that I merely know ("know" being defined as people us... (read more)

Hereinafter, "to Know x" means "to be objectively right about x, and to be subjectively 100 percent certain of x,

I know nothing! Nothing!

Is it plausible that evolution would gradually push those 70% down to 30% or even lower, given enough time? There may not yet have been enough time for group selection to produce such an effect, but sooner or later it should happen, shouldn't it? I'm thinking a species with such a great degree of selflessness would be more likely to survive than present humanity is, because a larger percentage of them would cooperate on existential risk reduction than is the case in present humanity. Yet, 10-30% is still not 0%, so even w... (read more)
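(A toy sketch of the group-selection tug-of-war described here, in Python with entirely made-up parameters: within each group selfish individuals out-reproduce selfless ones, while groups with more selfless members are more likely to survive a periodic extinction event.)

```python
import random

# Toy group-selection model (all parameters are made up, purely illustrative).
N_GROUPS = 50
GENERATIONS = 200
SELFISH_ADVANTAGE = 0.02   # within-group reproductive edge for selfish individuals
RISK_INTERVAL = 10         # an "existential risk" event every 10 generations
MUTATION_SD = 0.05         # between-group variation introduced when groups split

# Each group is summarised by its fraction of selfish individuals,
# starting around the 70% figure from the comment.
groups = [random.gauss(0.7, 0.05) for _ in range(N_GROUPS)]

for gen in range(1, GENERATIONS + 1):
    # Within-group selection: the selfish fraction drifts upward in every group.
    groups = [min(1.0, max(0.0, f + SELFISH_ADVANTAGE * f * (1 - f))) for f in groups]

    # Between-group selection: at each risk event, a group survives with
    # probability equal to its fraction of selfless (cooperative) members.
    if gen % RISK_INTERVAL == 0:
        survivors = [f for f in groups if random.random() < (1 - f)] or [min(groups)]
        # Surviving groups split to repopulate, with a little variation.
        groups = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUTATION_SD)))
                  for _ in range(N_GROUPS)]

print(f"Mean selfish fraction after {GENERATIONS} generations: "
      f"{sum(groups) / len(groups):.2f}")
```

Which side wins depends entirely on the assumed within-group advantage, the between-group variation, and how often the extinction events arrive; the sketch only makes the two opposing pressures explicit.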

1Perplexed
First of all, from the standpoint of the good of the group, I see no reason why player 1 shouldn't keep 100% of the money. After all, it is not as if player 2 were starving, and surely the good of player 1 is just as important to the good of the group as is the good of player 2. There is almost no reason for sharing from the standpoint of either Bentham-style utilitarianism or the good of the group. However, there is a reason for sharing when you realize that player 2 is quite reasonably selfish and has the power to make your life miserable. So, go ahead and give the jerk what he asks for. It is certainly to your own selfish advantage to do so. As long as he doesn't get too greedy.
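(A small sketch of the selfish-advantage point, assuming the game under discussion is something like a one-shot ultimatum game over a $10 pot; all the numbers here are illustrative.)

```python
# One-shot ultimatum game sketch: player 1 proposes a split of a $10 pot,
# player 2 rejects anything below a private threshold, and a rejection
# leaves both players with nothing.
POT = 10.0

def player1_payoff(offer_to_player2: float, rejection_threshold: float) -> float:
    """Player 1's payoff given an offer and player 2's minimum acceptable offer."""
    if offer_to_player2 >= rejection_threshold:
        return POT - offer_to_player2   # offer accepted: player 1 keeps the rest
    return 0.0                          # offer rejected: nobody gets anything

# If player 2 credibly demands $3, keeping 100% is worse for player 1
# than giving the jerk what he asks for.
print(player1_payoff(0.0, 3.0))   # 0.0  (keep everything, get rejected)
print(player1_payoff(3.0, 3.0))   # 7.0  (concede $3, keep $7)
```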

Any conclusions about how things work in the real world that are drawn from Newcomb's problem rest crucially on the assumption that an all-knowing being might, at least theoretically, exist as a logically consistent concept. If this crucial assumption is flawed, then any conclusions drawn from Newcomb's problem are likely flawed too.

To be all-knowing, you'd have to know everything about everything, including everything about yourself. To contain all that knowledge, you'd have to be larger than it - otherwise there would be no matter or energy left to perform th... (read more)
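(One rough way to state the containment argument, under the assumption, which the argument needs, that knowing about a system requires at least as much "stuff" as the system itself; $|{\cdot}|$ is an informal size measure, $B$ the all-knowing being, $K$ its knowledge, and $U$ the whole universe:)

$$|B| \;\ge\; |K| \;\ge\; |U| \;=\; |B| + |U \setminus B| \;>\; |B| \qquad \text{whenever } U \setminus B \neq \varnothing,$$

and the resulting $|B| > |B|$ is the contradiction the comment is pointing at.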

2PaulAlmond
I disagree with that. The being in Newcomb's problem wouldn't have to be all-knowing. He would just have to know what everyone else is going to do conditional on his own actions. This would mean that any act of prediction would also cause the being to be faced with a choice about the outcome.

For example: Suppose I am all-knowing, with the exception that I do not have full knowledge about myself. I am about to make a prediction, and then have a conversation with you, and then I am going to sit in a locked metal box for an hour. (Theoretically, you could argue that even then I would affect the outside world, but it will take time for chaos to become an issue, and I can factor that in.) You are about to go driving. I predict that if I tell you that you will have a car accident in half an hour, you will drive carefully and will not have a car accident. I also predict that if I do not tell you that you will have a car accident in half an hour, you will drive as usual and you will have a car accident. I lack full self-knowledge. I cannot predict whether I will tell you until I actually decide to tell you. I decide not to tell you. I get in my metal box and wait. I know that you will have a car accident in half an hour.

My lack of complete self-knowledge merely means that I do not do pure prediction: Instead any prediction I make is conditional on my own actions and therefore I get to choose which of a number of predictions comes true. (In reality, of course, the idea that I really had a "choice" in any free will sense is debatable, but my experience will be like that.) It would be the same for Newcomb's boxes.

Now, you could argue that a paradox could be caused if the link between predictions and required actions would force Omega to break the rules of the game. For example, if Omega predicts that if he puts the money in both boxes, you will open both boxes, then clearly Omega can't follow the rules. However, this would require some kind of causal link between Om
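(A minimal sketch of the conditional-prediction idea in this comment, with an invented two-action example; choosing the action is what selects which prediction comes true.)

```python
# Conditional prediction sketch: the predictor cannot predict its own choice,
# but it knows the outcome that follows from each choice it could make.
conditional_predictions = {
    "tell_about_accident": "drives carefully, no accident",
    "say_nothing":         "drives as usual, has an accident",
}

def choose_and_predict(action: str) -> str:
    """Choosing an action selects which conditional prediction becomes actual."""
    outcome = conditional_predictions[action]
    return f"I chose to {action.replace('_', ' ')}; therefore I know: {outcome}."

print(choose_and_predict("say_nothing"))
```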
-1[anonymous]
...Or, perhaps more correctly put, such a being (a non-all-knowing being who, however, "knows what you will do") could not know for sure that he knows what all of the copies of you will do - because in order to know that, he would have to be all-knowing - and so any statement to the effect that "he knows what you will do" is a highly questionable statement. Just like a being who doesn't know that he is all-knowing cannot reasonably be said to be all-knowing, a being who doesn't know that he knows what all of the copies of you will do (because he doesn't know how many copies of you exist outside the parts of the universe he has knowledge of) cannot reasonably be said to know what all of the copies of you will do.