Comment author: learnmethis 25 September 2012 08:02:35PM *  0 points [-]

I understand the point Eliezer's trying to make here. However, you (whoever's reading this) could not convince me that ss0 + ss0 = sss0 in Peano arithmetic (I define the scenario in which my mind is directly manipulated so that I happen to believe this not to constitute "convincing me"). Here's why I believe this position to be rational:

A) In order for me to make this argument, I have to presume communication of it. It's not that I believe the probability of that communication to be 1: many people might read this comment and not know Peano arithmetic, misunderstand my language, not finish reading, and so on, and the probability of this is nontrivial. However, arguments are directed at the possible worlds in which they are understood.

B) Communication of "ss0 + ss0 =" as a statement of Peano arithmetic already fully constrains the answer to be "ssss0", simply by virtue of what these symbols mean. That is to say, having understood these symbols and Peano arithmetic, no further experience is necessary to know that "sss0" is wrong. Mental flaws at any point in this process or understanding are possible, but they exist only within possible worlds in which communication of these ideas does not actually occur, because to think that "ss0 + ss0 = sss0" is to misunderstand Peano arithmetic, and understanding Peano arithmetic is a prerequisite for understanding a claim about it.
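For what it's worth, the way the axioms pin down the answer can be shown mechanically. Here is a minimal sketch in Python (the string encoding of numerals is my own illustration, not anything from the post), using only the two defining equations for addition, a + 0 = a and a + s(b) = s(a + b):

```python
# Peano numerals written as strings: "0", "s0", "ss0", "sss0", ...
def add(a, b):
    """Addition defined by the Peano axioms:
       a + 0    = a
       a + s(b) = s(a + b)"""
    if b == "0":
        return a
    assert b.startswith("s"), "not a Peano numeral"
    return "s" + add(a, b[1:])

print(add("ss0", "ss0"))  # ssss0
```

Once "ss0 + ss0" is understood as this computation, "sss0" is simply not a possible result; any process that outputs it is not performing Peano addition.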

Therefore

C) There is no possible world in which I can be convinced of the properly communicated concept "ss0 + ss0 = sss0". Of course, this doesn't mean there's no possible world in which I can be convinced that I am experiencing a neurological fault or being manipulated, or that there are no possible worlds in which I happen to wrongly believe that ss0 + ss0 = sss0. It's just that someone experiencing a neurological fault or being manipulated is not the same thing as someone being convinced.

A similar argument holds for the impossibility of me convincing myself that ss0 + ss0 = sss0. I understand ss0 + ss0 = ssss0 in Peano arithmetic well enough that I can review, in a very short period of time, why it must be so. Thus you would literally have to make me forget that I know this in order to have me believe otherwise, which hardly counts as "convincing." This does not mean that I am presuming mental errors or Dark Lords of the Matrix to be impossible. For clarification, here's what a run-through of me experiencing what Eliezer proposes would look like:

I get up one morning, take out two earplugs, and set them down next to two other earplugs on my nighttable, and notice that there are now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4.

Because that stored memory entails an understanding of why, I run through those reasons. If they're incomplete, this constitutes me "forgetting that I know this." (It does not mean that I don't know this now; right now I do.) Therefore I don't have a "stored memory that 2 + 2 was supposed to equal 4"; I have an incomplete stored memory which tries to say something about 2, +, =, and 4 (if my personality were intact I would probably try to re-derive the missing parts of it, after calling 911). Either way, I identify a cognitive fault. In real life, waking up to this, my most likely suspect would be that my experience of one earplug disappearing was deleted before I processed it, but there are lots of other possibilities as well. If I repeated the experiment multiple times I would suspect either a systematic fault or "being messed with" at a fundamental level.

When I visualize the process in my own mind, it seems that making XX and XX come out to XXXX requires an extra X to appear from nowhere

Still presuming an intact line of reasoning saying why this must not be so, I would again identify a cognitive fault, and a pretty cool one at that. Something this intricate might well leave me suspecting Dark Lords of the Matrix as a nontrivial possibility, provided all other cognitive functions seemed fully intact. Still wouldn't be as likely as a weird brain fault, though. I would definitely have fun investigating this.

I check a pocket calculator, Google, and my copy of 1984 where Winston writes that "Freedom is the freedom to say two plus two equals three."

Dark Lords of the Matrix bump higher, but psychosis has definitely leapt to the front of the pack.

I could keep going, of course. These last few presume I can still reason out something like Peano arithmetic. If I can't, incidentally, then of course they look different, but I still don't think it would be accurate to describe any possible outcome as "me being convinced that 2 + 2 = 3." If you run all the way down the list until you literally delete all the things that I know and all the ways I might obtain them, I would describe that as a possible universe in which "me" has been deleted. The strict lower bound on where I can still stumble across my cognitive fault and/or manipulation is the point where my reasoning ability is no longer Turing complete. This essentially requires the elimination of all complex thought, though of course making it merely unlikely for me to stumble upon the fault is much easier--just delete everything I know about formal mathematics, for example.

tl;dr

I agree with most of what Eliezer is saying, but wouldn't say that I could be convinced that 2 + 2 = 3. Does this make my belief unconditional? Dependent on my understanding of what 2 + 2 = 3 means, maybe it does. Maybe an understanding of 2, +, =, 3, and 4 necessitates 2 + 2 = 4 for a rational mind, and any deviation from this, even in internal mental processes, would be identifiable as a fault. After all, you can run a software program to detect flaws in some computer processors.

Comment author: somejan 05 May 2013 07:45:49PM 0 points [-]

Extrapolating from Eliezer's line of reasoning, you would probably find that although you remember ss0 + ss0 = ssss0, when you try to derive ss0 + ss0 from the Peano axioms it also comes out as sss0, and starting from ss0 + ss0 = ssss0 quickly leads you to a contradiction.

In response to Timeless Causality
Comment author: somejan 31 January 2013 01:54:43PM 1 point [-]

If the idea that time stems from the second law is true, and we apply the principle of eliminating variables that are redundant because they don't make any difference, we can collapse the notions of time and entropy into one thing. Under these assumptions, in a universe where entropy is decreasing (relative to our external notion of 'time'), the internal 'time' is in fact running backward.

As also noted by some other commenters, it seems to me that the expressed conditional dependence of different points in a universe is in some way equivalent to increasing entropy.

Let's assume that the laws of the universe described by the LMR picture are in fact time-symmetric, and that the number of states each point can be in is too large to describe exactly (i.e. just as is the case in our actual universe, as far as we know). In that case, we can only describe our conditional knowledge of M2 given the states of M1 and R1,2 using very rough descriptions, not the fully detailed descriptions of the exact states. It seems to me that this can only be usefully done if there is some kind of structure in the states of M1 and M2 (a.k.a. low entropy) that matches our coarse description. Saying that the L or M part of the universe is in a low-entropy state is equivalent to saying that some of the possible states are much more common for the nodes in the L or M part than other states. Our coarse predictor will necessarily make wrong predictions given some input states. Since the actual laws are time-symmetric, if the input states to our predictor were randomly distributed over all possible states, our predictions would fail equally often predicting from left to right as from right to left. Only if the states we can predict correctly occur more often on the left than on the right will there be an inequality in the number of correct predictions.
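This can be illustrated with a toy model (entirely my own construction, not from the post): reversible "laws" given by a permutation of eight states, and a coarse predictor that only handles half of the states correctly. Over uniformly distributed inputs the predictor succeeds equally often in either direction; only when the left side is concentrated on the predictable (low-entropy) states does forward prediction come out ahead:

```python
N = 8
perm = [3, 0, 7, 1, 5, 2, 6, 4]            # time-symmetric "laws": a reversible map
inv = [perm.index(i) for i in range(N)]    # the same laws run backward

good = {0, 1, 2, 3}                        # states our coarse predictor gets right
good_img = {perm[g] for g in good}         # their images, predictable backward

def predict_forward(s):
    return perm[s] if s in good else (perm[s] + 1) % N   # wrong outside `good`

def predict_backward(s):
    return inv[s] if s in good_img else (inv[s] + 1) % N

def accuracy(pred, truth, inputs):
    return sum(pred(s) == truth[s] for s in inputs) / len(inputs)

uniform = list(range(N))
low_entropy_left = [0, 1, 2, 3]            # left states concentrated on `good`

print(accuracy(predict_forward, perm, uniform))           # 0.5
print(accuracy(predict_backward, inv, uniform))           # 0.5 -- no arrow of time
print(accuracy(predict_forward, perm, low_entropy_left))  # 1.0 -- an arrow appears
```

The asymmetry in prediction success comes entirely from the input distribution, not from the laws themselves, which is exactly the claim above.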

...except that I now seem to have concluded that time always flows in the opposite direction of what Eliezer's conditional dependence indicates, so I'm not sure how to interpret that. Maybe it is because I am assuming time-symmetric laws and Eliezer is using time-asymmetric probabilistic laws. However, it still seems correct to me that in the case of time-symmetric underlying laws and a coarse (incomplete) predictor, predictions can only be better in one direction than the other if there is a difference in how often we see correctly predicted input relative to incorrectly predicted input, and therefore if there is a difference in entropy.

Comment author: somejan 06 December 2011 06:10:45PM 0 points [-]

First, I didn't read all of the above comments, though I read a large part of them.

Regarding the intuition that makes one question Pascal's mugging: I think it is likely that there was strong survival value in the ancestral environment in being able to detect and disregard statements that would cause you to pay money to someone else without there being any way to check whether those statements were true. Anyone without that ability would have been mugged to extinction long ago. This makes more sense if we regard the origin of our built-in utility function as a /very/ coarse approximation of our genes' survival fitness.

Regarding what the FAI is to do, I think the mistake made is assuming that the prior utility of doing ritual X is exactly zero, so that a very small change in our probabilities would make the expected utility of X positive (where X is "give the Pascal mugger the money"). A sufficiently smart FAI would have thought about the possibility of being Pascal-mugged long before it actually happens, and would in fact consider it a likely event to sometimes happen. I am not saying that this actually happening is not a tiny sliver of evidence in favor of the mugger telling the truth, but it is very tiny. The FAI would (assuming it had enough resources) compute for every possible Matrix scenario the appropriate probabilities and utilities for every possible action, taking each scenario's complexity into account. There is no reason to assume the prior expected utility of any religious ritual (such as paying Pascal muggers, whose statements you can't check) is exactly zero. Maybe the FAI finds that there is a sufficiently simple scenario in which a god exists and in which it is extremely high-utility to worship that god, more so than in any alternative scenario. Or in which one should give in to (specific forms of) Pascal's mugging.

However, the problem as presented in this blog post implicitly assumes that the prior probabilities the FAI holds are such that the tiny sliver of probability provided by one more instance of Pascal's mugging is enough to push the probability of the scenario 'extra-Matrix deity kills lots of people if I don't pay' over that of 'extra-Matrix deity kills lots of people if I do pay'. Since these two scenarios need not have exactly the same Kolmogorov complexity, this is unlikely.
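In log-odds terms the point can be sketched as follows (the bit counts are invented purely for illustration; actual Kolmogorov complexities are uncomputable):

```python
# Solomonoff-style prior: P(scenario) is proportional to 2**(-K), so the prior
# log-odds in bits equal the complexity difference between the two scenarios.
K_harm_if_refuse = 1000.0   # hypothetical bits: "deity punishes non-payment"
K_harm_if_pay    = 1002.0   # hypothetical bits: "deity punishes payment"

prior_log_odds = K_harm_if_pay - K_harm_if_refuse   # 2 bits separating the scenarios

# One more observed mugging is a tiny sliver of evidence, say a thousandth of a bit:
evidence_bits = 0.001

posterior_log_odds = prior_log_odds + evidence_bits
print(posterior_log_odds)   # ~2.001 bits: nowhere near flipping the comparison
```

Whichever scenario is a priori simpler stays ahead; the mugging itself moves the odds by far too little to reverse the sign.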

In short, either the FAI is already religious (which may include as a ritual 'give money to people who speak a certain passphrase'), or it is not, but the event of a Pascal's mugging happening is unlikely to change its beliefs.

Now, the question becomes whether we should accept the FAI doing things that are expected to favor a huge number of extra-Matrix people at a cost to a smaller number of inside-Matrix people. If we actually count every human life as equal, and we accept what Solomonoff-induction-based Bayesian probability theory has to say about huge-payoff, tiny-probability events and Dutch books, the FAI's choice of religion would be the rational thing to do. Otherwise, we could add a term to the AI's utility function to favor inside-Matrix people over outside-Matrix people, or we could make it favor certainty (of benefiting people known to actually exist) over uncertainty (of outside-Matrix people not known to actually exist).

Comment author: NancyLebovitz 03 August 2011 03:40:20PM 2 points [-]

It's hard to say, especially since terrorists are a tiny proportion of engineers, and it would be good to study engineers rather than guessing about them.

Engineer-terrorists mystify me. Shouldn't engineers be the people least likely to think that you can get the reaction you want from a complex system by giving it a good hard kick?

Comment author: somejan 06 August 2011 04:56:51PM 1 point [-]

As another datapoint (though I don't have sources), I heard that among evangelical church leaders you also find a relatively higher proportion of engineers.

Comment author: NancyLebovitz 25 May 2011 06:07:57PM -1 points [-]

I was wondering whether engineers were less biased than other scientific types.

Apparently not.

There's a surprising correlation between studying engineering and being a terrorist. I don't know if the correlation holds up for people who actually work in engineering rather than just having studied it.

I also haven't seen anything that looks solid about why the correlation exists.

Comment author: somejan 03 August 2011 12:29:39PM 3 points [-]

Might it be that engineering teaches you to apply a given set of rules to its logical conclusion, rather than to question whether those rules are correct? To be a suicide bomber, you'd need to follow the rules of your variant of religion and act on them, even if that requires you to do something that goes against your normal desires, like killing yourself.

I'd figure questioning things is what you learn as a scientist, but apparently the current academic system is not set up to question generally accepted hypotheses, or, generally, to do things the funding providers don't like.

Looking at myself, studying philosophy while also having an interest in fundamental physics, computer science, and cognitive psychology helps, but how many people do that?

Comment author: Joshua 12 February 2011 08:34:43PM *  0 points [-]

I'm thinking of being unable to reach a better solution to a problem because what you know conflicts with arriving at the solution.

Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn't that conclusion be data for more inaccurate conclusions?

So I thought that there would need to be some bias that was put on your reasoning so that occasionally you didn't go with the inaccurate claim. That way if some of the data is wrong you still have rationalists who arrive at a more accurate map.

Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own, when it doesn't. What I mean is that I seem to have assumed that you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that were the case, wouldn't Eliezer already have done it?

I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.

Comment author: somejan 22 February 2011 12:25:53PM 0 points [-]

There's nothing in being a rationalist that prevents you from considering multiple hypotheses. One thing I've not seen elaborated on a lot on this site (but maybe I've just missed it) is that you don't need to commit to one theory or another; the only time you're forced to commit yourself is when you need to make a choice in your actions. And then you only need to commit for that choice, not for the rest of your life. So a bunch of perfect rationalists who had observed exactly the same events/facts (which of course doesn't happen in real life) would ascribe exactly the same probabilities to a bunch of theories. If new evidence came in, they would all switch to the new hypothesis, because they were all already contemplating it but considering it less likely than the old hypothesis.

The only thing preventing you from considering all possible hypotheses is lack of brain power. This limited resource should probably be divided among the possible theories in the same ratio as your certainty about them: if you think theory A has a 50% probability of being right, theory B 49%, and theory C 1%, you should spend 99% of your efforts on theories A and B. But if the probabilities are 35%, 33%, and 32%, you should spend almost a third of your resources on theory C. (Assuming the goal is just to find truth; if the theories have other utilities, those should be weighed in as well.)
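The allocation rule described above is simple enough to write down directly; a minimal sketch (the function name is mine, the numbers are the comment's own examples):

```python
def allocate_effort(credences):
    """Divide limited thinking time among theories in proportion to credence."""
    total = sum(credences.values())
    return {theory: p / total for theory, p in credences.items()}

# ~99% of effort goes to A and B together:
print(allocate_effort({"A": 0.50, "B": 0.49, "C": 0.01}))
# ...but here theory C earns almost a third of the effort:
print(allocate_effort({"A": 0.35, "B": 0.33, "C": 0.32}))
```

Dividing by the total means the rule also works when the credences don't sum to 1, e.g. when some hypotheses have been pruned from consideration entirely.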