Comments
somejan

Extrapolating from Eliezer's line of reasoning, you would probably find that although you remember ss0 + ss0 = ssss0, when you try to derive the value of ss0 + ss0 from the Peano axioms you discover that it comes out as sss0, and that starting from ss0 + ss0 = ssss0 quickly leads you to a contradiction.
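For reference, under the usual Peano axioms for addition (x + 0 = x and x + sy = s(x + y)), the derivation in our world runs:

ss0 + ss0
= s(ss0 + s0)   [x + sy = s(x + y)]
= ss(ss0 + 0)   [x + sy = s(x + y)]
= ss(ss0)       [x + 0 = x]
= ssss0

so in the hypothetical, at least one of these steps would have to come out differently.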

somejan

If the idea that time stems from the second law is true, and we apply the principle of eliminating variables that are redundant because they don't make any difference, we can collapse the notions of time and entropy into one thing. Under these assumptions, in a universe where entropy is decreasing (relative to our external notion of 'time'), the internal 'time' is in fact running backward.

As other commenters have also noted, it seems to me that the conditional dependence between different points in a universe, as described here, is in some way equivalent to increasing entropy.

Let's assume that the laws of the universe described by the LMR picture are in fact time-symmetric, and that the number of states each point can be in is too large to describe exactly (i.e. just as in our actual universe, as far as we know). In that case we can only describe our conditional knowledge of M2, given the states of M1 and R1,2, using very rough descriptions, not descriptions of the exact states. It seems to me that this can only be done usefully if there is some kind of structure in the states of M1 and M2 (a.k.a. low entropy) that matches our coarse description. Saying that the L or M part of the universe is in a low-entropy state is equivalent to saying that some of the possible states are much more common for the nodes in that part than other states.

Our coarse predictor will necessarily make wrong predictions for some input states. Since the actual laws are time-symmetric, if the input states to our predictor were distributed uniformly over all possible states, our predictions would fail equally often predicting from left to right as from right to left. Only if the states we can predict correctly occur more often on the left than on the right will there be an inequality in the number of correct predictions.

...except that I now seem to have concluded that time always flows in the opposite direction of what Eliezer's conditional dependence indicates, so I'm not sure how to interpret that. Maybe it is because I am assuming time-symmetric laws while Eliezer is using time-asymmetric probabilistic laws. However, it still seems correct to me that, in the case of time-symmetric underlying laws and a coarse (incomplete) predictor, predictions can only be better in one direction than the other if there is a difference in how often we see correctly predicted input relative to incorrectly predicted input, and therefore if there is a difference in entropy.
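To make the "time-symmetric laws plus a low-entropy starting state" point concrete, here is a toy sketch. It is the standard Kac ring model rather than anything from the LMR post itself, and the parameters are made up; it shows a microscopically reversible rule whose coarse description (the fraction of black balls) still relaxes in one time direction when you start from an ordered state.

```python
import random

# Kac ring: N sites on a ring, each holding a white (0) or black (1) ball.
# A fixed random subset of edges is "marked". Each step, every ball moves
# one site clockwise and flips color when it crosses a marked edge.
random.seed(0)
N = 1000                 # number of sites
MARK_FRACTION = 0.1      # fraction of edges that flip a passing ball

markers = [random.random() < MARK_FRACTION for _ in range(N)]
balls = [0] * N          # all-white start: a very low-entropy macrostate

def step_forward(state):
    # Ball at site i moves to site i+1, flipping if edge i is marked.
    return [state[i - 1] ^ markers[i - 1] for i in range(N)]

def step_backward(state):
    # Exact inverse: move counterclockwise, flipping at the same edges.
    return [state[(i + 1) % N] ^ markers[i] for i in range(N)]

state = balls
for t in range(41):
    if t % 10 == 0:
        print(f"t={t:3d}  black fraction = {sum(state) / N:.3f}")
    state = step_forward(state)

# Running the same number of steps backward recovers the initial state
# exactly: the coarse-grained irreversibility comes from the special
# (low-entropy) starting state, not from the dynamics.
for _ in range(41):
    state = step_backward(state)
print("recovered initial state:", state == balls)
```

The black fraction drifts from 0 toward 1/2 in the forward direction, yet the backward run reproduces the ordered initial state perfectly, which is the asymmetry-from-initial-conditions point in miniature.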

somejan

First, I didn't read all of the above comments, though I did read a large part of them.

Regarding the intuition that makes one question Pascal's mugging: I think it is likely that in the ancestral environment there was strong survival value in being able to detect and disregard statements that would cause you to pay money to someone else without there being any way to check whether those statements were true. Anyone without that ability would have been mugged to extinction long ago. This makes more sense if we regard the origin of our built-in utility function as a /very/ coarse approximation of our genes' survival fitness.

Regarding what the FAI is to do, I think the mistake is assuming that the prior utility of performing ritual X is exactly zero, so that a very small change in our probabilities would make the expected utility of X positive (where X is "give the Pascal mugger the money"). A sufficiently smart FAI would have thought about the possibility of being Pascal-mugged long before it actually happens, and would in fact consider it a likely event to occur now and then. I am not saying that a mugging actually happening provides no evidence in favor of the mugger telling the truth, but that sliver of evidence is very tiny. The FAI would (assuming it had enough resources) compute, for every possible Matrix scenario, the appropriate probabilities and utilities for every possible action, taking each scenario's complexity into account. There is no reason to assume the prior expected utility of any religious ritual (such as paying Pascal muggers, whose statements you can't check) is exactly zero. Maybe the FAI finds that there is a sufficiently simple scenario in which a god exists and in which worshipping that god has extremely high utility, more so than any alternative scenario. Or one in which it should give in to (specific forms of) Pascal's mugging.

However, the problem as presented in this blog post implicitly assumes that the prior probabilities the FAI holds are such that the tiny sliver of probability provided by one more instance of Pascal's mugging actually happening is enough to push the probability of the scenario 'an extra-Matrix deity kills lots of people if I don't pay' above that of 'an extra-Matrix deity kills lots of people if I do pay'. Since these two scenarios need not have exactly the same Kolmogorov complexity, this is unlikely.
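As a toy illustration of that last point (the complexities and the likelihood ratio below are made-up numbers, chosen only to show the orders of magnitude involved):

```python
import math

# Hypothetical complexities, in bits, of the two competing scenarios.
# The exact values are invented; the point is only that they need not be equal.
K_KILL_IF_REFUSE = 3010   # "extra-Matrix deity kills the people iff I refuse to pay"
K_KILL_IF_PAY    = 3000   # "extra-Matrix deity kills the people iff I do pay"

# Solomonoff-style prior: log2 P(scenario) is roughly -K(scenario).
log2_prior_refuse = -K_KILL_IF_REFUSE
log2_prior_pay = -K_KILL_IF_PAY

# Suppose one mugging actually happening is 10x more likely if the
# "kill iff I refuse" scenario is true -- a generous likelihood ratio.
evidence_bits = math.log2(10)   # about 3.3 bits

gap_before = log2_prior_refuse - log2_prior_pay   # -10 bits
gap_after = gap_before + evidence_bits            # about -6.7 bits
print(f"log-odds gap before the mugging: {gap_before:.1f} bits")
print(f"log-odds gap after the mugging:  {gap_after:.1f} bits")
# The ~3.3 bits of evidence from the mugging are swamped by the 10-bit
# difference in scenario complexity, so which scenario dominates the
# expected-utility calculation does not change.
```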

In short, either the FAI is already religious (which may include as a ritual 'give money to people who speak a certain passphrase'), or it is not; either way, the event of a Pascal's mugging actually happening is unlikely to change its beliefs.

Now, the question becomes whether we should accept the FAI doing things that are expected to favor a huge number of extra-Matrix people at a cost to a smaller number of inside-Matrix people. If we actually count every human life as equal, and we accept what Solomonoff-inducted Bayesian probability theory has to say about huge-payoff, tiny-probability events and Dutch books, the FAI's choice of religion would be the rational thing to do. Otherwise, we could add a term to the AI's utility function to favor inside-Matrix people over outside-Matrix people, or we could make it favor certainty (benefiting people known to actually exist) over uncertainty (outside-Matrix people not known to actually exist).

somejan

As another data point (though I don't have a source), I have heard that engineers are also overrepresented among evangelical church leaders.

somejan

Might it be that engineering teaches you to apply a given set of rules to their logical conclusion, rather than to question whether those rules are correct? To be a suicide bomber, you'd need to follow the rules of your variant of religion and act on them, even if that requires you to do something that goes against your normal desires, like killing yourself.

I'd have figured that questioning things is what you learn as a scientist, but apparently the current academic system is not set up for questioning generally accepted hypotheses, or generally for doing things the funding providers don't like.

Looking at myself, studying philosophy while also having an interest in fundamental physics, computer science, and cognitive psychology helps, but how many people do that?

somejan

There's nothing in being a rationalist that prevents you from considering multiple hypotheses. One thing I've not seen elaborated on much on this site (but maybe I've just missed it) is that you don't need to commit to one theory or the other; the only time you're forced to commit yourself is when you need to make a choice in your actions. And even then you only need to commit for that choice, not for the rest of your life. So a group of perfect rationalists who had observed exactly the same events/facts (which of course doesn't happen in real life) would assign exactly the same probabilities to a set of theories. If new evidence came in, they would all switch to the new hypothesis, because they were all already contemplating it but considering it less likely than the old hypothesis.

The only thing preventing you from considering all possible hypotheses is lack of brain power. This limited resource should probably be divided among the possible theories in the same ratio that you're certain about them, so if you think theory A has a probability of 50% of being right, theory B a probability of 49% and theory C a probability of 1%, you should spend 99% of your efforts on theory A and B. But if the probabilities are 35%, 33% and 32% you should spend almost a third of your resources on theory C. (Assuming the goal is just to find truth, if the theories have other utilities that should be weighted in as well.)