Is there any justification that Solomonoff Induction is accurate, other than intuition?
If I understand Solomonoff Induction correctly, for all n and p, the sum of the probabilities of all hypotheses of length n equals the sum of the probabilities of all hypotheses of length p. If that's the case, what normalization constant could you possibly use to make all the probabilities sum to one? It seems there is none.
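For what it's worth, the usual resolution (not stated above, so take this as context) is that the universal prior weights each program p by 2^-|p| and restricts attention to a prefix-free set of programs; by the Kraft inequality the total mass is then at most 1, so a normalizer exists. A minimal sketch with a toy prefix-free set:

```python
def is_prefix_free(codes):
    """Check that no code in the set is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

# A toy prefix-free set of binary "programs" of different lengths.
programs = ["0", "10", "110", "1110", "1111"]
assert is_prefix_free(programs)

# Each program of length n gets prior mass 2**-n; with a prefix-free
# set this sums to at most 1 (Kraft inequality), so it can be normalized.
total = sum(2 ** -len(p) for p in programs)
print(total)  # 1.0 for this set; in general, at most 1
```

Without the prefix-free restriction the comment's worry is exactly right: all 2^n strings of length n would each get 2^-n, giving mass 1 per length and a divergent total.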
"Mystery, and the joy of finding out, is either a personal thing, or it doesn't exist at all—and I prefer to say it's personal." I don't see why this is the case. Couldn't one take joy only in finding out what no one in the Solar System knows? That way, one can still have the joy, but it still isn't personal.
Should one really be so certain about there being no higher-level entities? You said that simulating higher-level entities takes fewer computational resources, so perhaps our universe is a simulation and that the creators, in an effort to save computational resources, made the universe do computations on higher-level entities when no-one was looking at the "base" entities. Far-fetched, maybe, but not completely implausible.
Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would that look like?
Occam's razor. There are patterns in your thoughts that are very unlikely to exist by coincidence. It's more likely that the pattern is a result of an underlying process. At least, that's why I think that I think things for a reason.
What makes you think that the argument you just gave was generated by you for a reason, rather than for no reason at all?
What evidence is there that mice are unable to think about thinking? Given the communication barrier, mice can't tell us whether they can think about thinking or not.
What evidence is there that floating beliefs are uniquely human? As far as I know, neuroscience hasn't advanced far enough to tell whether other species have floating beliefs or not.
Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.
I question whether keeping probabilities summing to one is a valid justification for acting as if the mugger's honesty has a probability of roughly 1/3^^^3. Since we know that, due to our imperfect reasoning, the probability is greater than 1/3^^^3, we know that the expected value of giving the mugger $5 is unimaginably large. Of course, acknowledging this causes our probabilities to sum to more than one, but that seems like a small price to pay.
Edit: Could someone explain why I've lost points for this?
If our math has to handle infinities, we have bigger problems. Unless we use measures, in which case we have the same issue and the same seemingly forced solution as before. If we don't use measures, things fail to add up the moment you imagine "infinity".
Then this solution just assumes the probability of infinite people is 0. If this solution is based on premises that are probably false, then how is it a solution at all? I understand that infinity creates even bigger problems, so we should instead call your solution a pseudo-solution-that's-probably-false-but-is-still-the-best-one-we-have, and dedicate more effort to finding a real solution.
If I understand correctly, Yudkowsky finds philosophical zombies implausible because they would require consciousness to have no causal influence on reality. On that view, if philosophical zombies existed, it would be pure coincidence that accurate discussions of consciousness are conducted by beings who are conscious; that coincidence is very improbable, so philosophical zombies are very implausible. This reasoning seems flawed: discussing and thinking about consciousness could cause consciousness to exist, while that consciousness has no effect on anything else. For philosophical zombies to exist, thinking about consciousness would then have to bring about consciousness only in certain substrates.