If you experienced psychic powers as being real in a way consistent and sustained enough that, for example, over thirty years people became well able to study and control them, and your local neighborhood Psi Corps was as familiar a figure as your local neighborhood police, I doubt that thirty years from now you would be huddled up in a ball with your fingers in your ears saying "this isn't real, this isn't real".
Perhaps in this case you should taboo "reality" and "hallucination" and talk about whether an experience is consistent and sustained. An experience of psychic powers may be inconsistent - for example, when you say "Look at me move this weight with my mind", your friend says "You're just sitting there with an intent look on your face and nothing's happening" - or the inconsistency may be more subtle, in that you hallucinate your friends approving but not other obvious consequences, like you winning the Randi Prize, scientists taking you off to study, or a gradual shift in society's view of the physical versus the spiritual. A sufficiently consistent hallucination would be strong evidence that it's not a hallucination at all - unless you believe your brain could predict the way society would develop post-psychic powers and could feed you all the appropriate stimuli forever, in which case you have minimal reason to believe your normal pre-psychic life isn't a hallucination too.
Or it may not be sustained. For example, you may have a relatively vivid hallucination of being in a world with psychic powers, but it only lasts a few minutes before you end up back in the real world and people tell you you've been in a coma the whole time.
If you find yourself in a world with psychic powers, or Omega, or whatever, then the longer it stays consistent and sustained, the greater the probability with which you can predict that acting as if the powers or Omega are real is the way to go, either because they are real or because your hallucination is so vivid and complex that you know acting as if they're real will have the same results as if they are real.
You could also try giving a small amount of probability to the "real" hypothesis and a large amount to the "hallucination" hypothesis. Then, as it becomes gradually clearer and clearer that nothing you do can break you out of the hallucination and satisfy your aims in reality, all the actions in the "hallucination" hypothesis have equal utility and it's the utilities in the "real" branch that should guide your actions.
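To make that concrete, here's a toy expected-utility calculation in Python (every probability and utility below is invented for illustration): because the hallucination branch assigns the same utility to every action, it contributes a constant and cancels out of the comparison, leaving the "real" branch to decide.

```python
# Toy expected-utility comparison; all numbers are made up.
p_real = 0.05     # small credence that the powers/Omega are real
p_halluc = 0.95   # large credence that this is all a hallucination

actions = {
    # action: (utility if real, utility if hallucinating)
    "act as if real":    (100.0, 7.0),
    "ignore the powers": (0.0,   7.0),  # same utility in the hallucination branch
}

for action, (u_real, u_halluc) in actions.items():
    eu = p_real * u_real + p_halluc * u_halluc
    print(f"{action}: expected utility = {eu:.2f}")

# Since u_halluc is identical across actions, the hallucination branch adds
# a constant to every expected utility; the ranking of actions is set
# entirely by the "real" branch, no matter how small p_real is.
```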
...this kind of makes me want to write rationalist Thomas Covenant fanfiction, which would probably be a bad idea.
...this kind of makes me want to write rationalist Thomas Covenant fanfiction, which would probably be a bad idea.
Why would it be a bad idea?
Because the book is basically psychological, and if Tom had the brainpower to realize that when you're up against an enemy whose power derives from self-loathing, loathing yourself all the time might not be such a good idea, then there goes the conflict and depth, and it devolves into smacking cavewights around. With the revelation in the latest book that (spoiler) nothing done out of love can ever aid the Despiser, even indirectly, he literally couldn't go wrong by turning his frown upside-down.
And that's not even getting to the stuff with the Earthblood. Seriously, Elena, you've gained complete omnipotence for the space of one wish, and the best you can think of is "resurrect the vengeful ghost of the morally conflicted apocalyptic ex-ruler because maybe he's mellowed out"? How about "I wish for a Friendly AI under the control of Revelstone"?
::taps 3B, casts Zombify on thread::
Hmmm... since Thomas Covenant fanfiction is problematic, how about Life on Mars fanfiction instead?
Presumably you'd have to change some details like Eliezer did in order to make Harry not have an auto-win button. But it would likely require a lot more setting modification (I'm not sure. I haven't thought about this that much.)
(nods) These sorts of situations really do arise.
A couple of years ago, shortly after some significant brain damage, I had some very emotionally compelling experiences of the sort typically described as "supernatural." And so I asked myself the question: what's more likely? That the things I experienced happening actually did happen? Or that I was hallucinating due to brain damage?
I eventually bit the bullet and accepted that "hallucinating" was overwhelmingly more likely... but it was not an easy thing to do.
If Omega actually showed up and presented you with Newcomb's Problem, you may as well at least take your possibly-imaginary million dollars and deposit it in a bank account before you get your head checked, in case it's true. And if you are hallucinating, then that strategy will probably be a quick route to a mental hospital anyway, because you'll probably actually be doing something like trying to deposit a potted plant at a McDonald's drive-through.
Should we have, on first observing the results of the double-slit experiment, concluded that we are insane instead of inventing quantum mechanics?
Apocryphal story about Schrödinger: he used his cat thought experiment as a litmus test to determine whether or not he should continue working on QM. When he got the result he didn't want, he switched over to biology, and wrote a book that inspired Watson and Crick.
I think we answer those sorts of questions through being careful thinkers and collecting evidence. If quantum mechanics is correct, what is made incorrect by that statement? Why do we believe the things that QM suggests are incorrect?
The double-slit experiment, if correct, shatters our beliefs about microscopic existence formed by our experience with macroscopic objects, and has a number of testable predictions. We can construct other experiments based on those predictions, and when we do them it turns out that the microscopic world and the macroscopic world actually behave differently, but in a way that is consistent instead of contradictory. It's bizarre but it's possible.
We can check why my priors for insanity are higher than my priors for magic. I have solid evidence that I am more likely to believe supernatural claims because of irrationality than because of rationality.
Did Thomas Jefferson have solid evidence that meteors didn't exist? No - and it looks like he recognized that. He just had enough evidence to consider it more likely that the discoverers of meteors were lying than that they were telling the truth. The platypus is another great example - many naturalists had enough evidence to believe it was a hoax rather than a real animal.
But the evidence against meteors and platypuses is fairly small, and can be overcome relatively easily. What about the evidence against supernatural causation? It seems like the statement "you are insane if you believe in supernatural causality" might be true by definition.
What are the consequences if human porntuition is so strong that people predict where it'll pop up 53% of the time instead of 50% of the time? Either the statistical significance itself needs to be statistically significant (hey, when I run the experiment 100 times, I get a result at the p=.01 level! Fascinating!), or there's some systematic error in the setup, or causality isn't unidirectional. I don't think I can express how much evidence we have against the last proposition.
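The parenthetical deserves emphasis, so here's a quick null simulation in Python (all parameters invented): with no real effect at all, about one run in a hundred will hit p = .01 by luck alone.

```python
import random

# Simulate 100 null "porntuition" experiments: 1000 fair-coin guesses each,
# then count how many cross the p = .01 threshold purely by chance.
random.seed(0)

def null_experiment(n=1000):
    return sum(random.random() < 0.5 for _ in range(n))

def significant_at_p01(hits, n=1000):
    mean, sd = n * 0.5, (n * 0.25) ** 0.5
    return abs(hits - mean) / sd > 2.58  # z cutoff for two-sided p = .01

false_alarms = sum(significant_at_p01(null_experiment()) for _ in range(100))
print(false_alarms, "of 100 null experiments reached p = .01 by luck")
```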
If there were a precise theory of causality flowing backwards in time, that explained under what conditions it happened, and why we haven't encountered those conditions before, and many different experiments were conducted to produce those conditions, and they confirmed the predictions, and were reliably replicated, and led to the development of new technologies that use backwards causality, would that be sufficient evidence for you to accept backwards causality?
I think it would all hinge on why we haven't encountered those conditions before. I think I can imagine there existing a reason why we haven't encountered those conditions before which doesn't trigger my suspicion of insanity, but that would dramatically limit how radically those technologies could impact life.
So that seems to boil down to a "yes, if that were true, I would believe it, but I have no reason to expect it is possible for that to be true in the world I reside in." (i.e. I would believe in God if he existed throughout my life so far, but the spontaneous appearance of God would cause me to suspect I am insane.)
but that would dramatically limit how radically those technologies could impact life.
You say this using a device built out of transistors, so that nearly anyone in the world with a similar device can read it. How limited would you have predicted the technology based on quantum mechanics would be?
Conditions that we are unlikely to observe now in our daily lives could become prevalent if we deliberately seek them out.
I could have predicted personal telegraphs in 1900, and personal television-typewriter combinations in 1920. Knowing what I do about brain makeup now, the presence of telepathy would not suggest insanity to me (so long as there are sensors involved more sophisticated than the human brain).
What I'm saying is that my condition for reverse causality existing and me considering myself sane despite possessing evidence for it is that the impacts of said reverse causality are minimal. Maybe I could be eased into something more dramatic? I'm not sure.
Edit- a comment by Marx, that quantitative changes become qualitative changes, comes to mind. If I have evidence for quantitative changes underlying a qualitative change, I can be happy with it- if I don't, it seems like evidence for insanity. Obviously, computers are more than super typewriters like Excel is more than a super abacus- but the differences are more differences of degree than of kind. Even QM and classical mechanics and relativity appear separated by quantitative changes (or, at least, we have good reason to expect that they are).
human porntuition
*Eyebrow raise
Actually, can you clarify that whole paragraph? What is the claim you are evaluating?
Actually, can you clarify that whole paragraph? What is the claim you are evaluating?
The word is a portmanteau of "porn" and "intuition," and is from my brief-glance understanding of Bem's recent psychic findings which people are aflutter about. He set it up so people would guess which side an erotic image would pop up on, then it would pop up randomly, and they were right 53% of the time, which was supposedly statistically significant (I did not see a standard deviation in my brief glance). I consider it unlikely that this is a reproducible effect.
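For a rough sense of scale (the trial counts below are my own assumptions, not figures from the paper), a normal-approximation check shows a 53% hit rate is indistinguishable from chance at small n and only starts looking "significant" with thousands of trials:

```python
from math import erf, sqrt

# Two-sided p-value for a hit rate against a 50% null, normal approximation.
# The trial counts are illustrative; the actual study's n may differ.
def two_sided_p(hit_rate, n):
    z = (hit_rate - 0.5) / sqrt(0.25 / n)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

for n in (100, 1000, 5000):
    print(f"{n:5d} trials: p ≈ {two_sided_p(0.53, n):.4g}")
# 100 trials:  p ≈ 0.55   (nothing)
# 1000 trials: p ≈ 0.06   (marginal)
# 5000 trials: p ≈ 2e-05  (impressive-looking, which is the worry)
```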
Of course not. We should say "huh, I'm confused". Historically we didn't immediately leap to quantum mechanics from any of the various discrepancies that classical mechanics didn't explain, and we waited for a fair bit of evidence before accepting/inventing quantum mechanics. Probably slightly too much. On the other hand, the N-ray affair shows that scientists are often too trusting of weird results.
I agree with you that the "I am insane/dreaming" hypothesis is much, much stronger than is given credit in most supernatural/Omega-type scenarios.
On the other hand, if I were to read through a lengthy scientific paper detailing a respectable process which showed significant indicators of telepathic ability and posited a believable, testable mechanism, and at every point during this process I was, for example, physically capable of turning the lights on and off, people seemed to speak in understandable English, I got hungry after several hours of not eating, I experienced going to sleep and waking up, etc., etc., then I doubt that "I am insane" would continue strictly dominating the hypothesis "people are psychic."
There are so many tests for sanity that verifying it should be fairly easy - unless you are insane in such a specific way that you hallucinate only a single scientific paper, which you have discussed extensively with others in your imagination; or unless your imagination is vastly powerful, enough to create an entire functioning world, in which case some abstract "solipsism" really isn't decision-theoretically important.
Are there good self-tests for sanity?
I read an account some years ago by a woman who'd gone through withdrawal from steroids. She had an elaborate hallucination of secret agencies setting up a communications center in her hospital room.
She said that it never occurred to her during the hallucination to do a plausibility check.
If it never occurred to her to do a plausibility check, that's strong evidence that given insanity, you will not check for sanity; not that it's impossible to check for sanity.
Forming a strong precommitment to test your sanity when certain unlikely events occur seems like an even better idea given that it usually doesn't even occur to people.
I suppose you can test for consistency but not much else. If you're processing a consistent stream of signals, how would you be able to differentiate stream A from A', where A is reality and A' is a hallucination consistent with your past? That said, checking for consistency should be enough.
I follow a heuristic which says that, for the purposes of making a decision, any statement that leads to the conclusion that your decision doesn't matter, should be considered false. If Omega is a figment of your imagination, then it doesn't matter how many boxes you take, so for purposes of deciding you can rule out that possibility. You only need to consider the possibility that you're hallucinating afterwards, when deciding whether to consult a psychologist and get an MRI.
I believe the point of Omega is to make it easier to set up hypothetical situations without getting fixated too much on irrelevant details.
I believe the point of Omega is to make it easier to set up hypothetical situations without getting fixated too much on irrelevant details.
The issue is that those supposedly irrelevant details are often the crux of the problem.
The Newcomb problem is a great example: someone trustworthy presents you with two buttons. "If you press the left button, you get $1,000. If you don't press the left button and you press the right button, you get $1,000,000. If you press neither button, you get nothing." The right response is to press only the right button. Why would anyone care about this question?
The change from my summary to the real thing is that the person is made infinitely trustworthy but makes a statement which is also infinitely unbelievable. See, rationalists sure are dumb because they don't take anything on faith!
It's not the probability of hallucinating full stop, it's the probability of hallucinating Omega or psychic powers in particular. Also, while "Omega" sounds implausible, there are much more plausible scenarios involving humans inventing advanced brain-scanning tech.
ET Jaynes' Probability Theory treats this a bit - some hypotheses will strictly dominate others, it's true.
However, I don't think it's so simple to say that your specific example is unbelievable. I would bet I could find a way to give you evidence of your sanity that is independent of whether psychics exist. The probability that you're being tricked can be reduced by writing the code and conducting the experiments yourself. Therefore the posterior probabilities don't all move as one, so under some situations you could believe that psychic powers are real.
Given that some arguments that we live in a simulation claim a better-than-even probability, there would be a chance that, in encountering Omega or someone with psychic powers, you had just met a root user.
I think it's true that insanity/trickery is always going to be a stronger hypothesis than these truly outlandish claims. At the same time, if you could still model the nature of the entities in your hallucination, it would end up being equivalent to believing you were in a simulation. So in practice, you would go about using your psychic powers as if they were real, and wouldn't have to suffer for being rational in an extremely unlikely world, while still believing that you must actually be in some sort of simulated world. And as a bounded agent, it seems like there would be another point where your belief that you're in a simulation with memories of an old life in a strange world would start feeling insane.
Something worth pondering: conditional on you being in fact strongly psychotic - on your perceptions having gone so rotten that your brain populates the world with Omegas and waistcoat-clad white rabbits - is there any course of action worth pursuing any longer? Deciding to see a psychiatrist may not have a high correlation with actually seeing a psychiatrist, you might end up blubbering incomprehensibly at your friendly neighbourhood crack dealer instead.
Effectively, this would drastically reduce or even nullify the weight you should give to the insanity hypothesis. If insanity made you able to undergo the actions you consciously chose only, say, 10% of the time, whereas the "Omega exists" scenario does not similarly hamper you, then you should divide by 10 the weight you gave to the option of assuming insanity.
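Here is that adjustment worked through with made-up numbers (none of these probabilities come from anywhere real): weight each hypothesis by how much your choice can actually matter under it.

```python
# All probabilities invented for illustration.
p_insane = 0.99   # raw credence in the insanity hypothesis
p_omega = 0.01    # raw credence that Omega exists

p_choice_works_if_insane = 0.10  # chosen actions rarely take effect
p_choice_works_if_omega = 1.00   # choices work normally

# For decision purposes, scale each hypothesis by the chance your
# decision actually gets carried out under it:
w_insane = p_insane * p_choice_works_if_insane
w_omega = p_omega * p_choice_works_if_omega
total = w_insane + w_omega

print(f"decision weight on insanity: {w_insane / total:.3f}")
print(f"decision weight on Omega:    {w_omega / total:.3f}")
# 0.99 raw credence shrinks to about 0.908 decision weight here: a real
# discount, but only about one order of magnitude at best.
```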
And now I realise the obvious answer - that there are many orders of magnitude of difference between the likelihood of personal brain damage and that of Omega appearing, while I don't see the reasoning above eliminating more than two or three. Still, I'm leaving this comment since it seems there's something worthwhile in it.
A problem with the insanity hypothesis is that "non-sane perception" is a non-apple. You can dismiss any perception, even the perception that everyone around you saw the same incredible things you did, by saying the whole thing is a hallucination. No, it's worse than merely being a non-apple, it's an invisible dragon. It can automatically expand to cover any observation at all.
That problem is why you have to be careful with the insanity hypothesis: there IS a possibility that you are insane, but it doesn't have all the same evidence that Omega existing does. Which is where a lot of people in this thread are coming from: sanity tests, dreaming tests, and so on will give you evidence about your sanity without having to involve Omega, so that you can, through repeated and varied tests, reduce the probability of your insanity below that of Omega existing, and then start considering the Omega possibility.
In other words, experiencing Omega existing is strong evidence for your insanity, and strong evidence for Omega existing. Since your prior for insanity is higher than your prior for Omega existing, you ought to conclude that you need to test your insanity hypothesis until you have enough strong evidence against it to update your priors so that Omega is more likely than insanity.
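A sketch of that updating process, with loudly invented numbers: if each clean, independent sanity test is some factor more likely given sanity than given an elaborate hallucination, the likelihood ratios multiply up and can eventually overcome the prior gap.

```python
# Illustrative numbers only; nothing here is empirical.
odds_insane_vs_omega = 1e6    # prior: insanity a million times likelier
lr_per_clean_test = 10        # each passed test: 10x likelier if sane

tests = 0
while odds_insane_vs_omega > 1:
    odds_insane_vs_omega /= lr_per_clean_test
    tests += 1

print(f"after {tests} clean, independent tests, Omega overtakes insanity")
# Six tests with these made-up numbers. The hard part is the independence
# assumption: given insanity, your tests may all fail together.
```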
As far as the precognition study is concerned, the insanity hypothesis would not even rise to my attention were it not for reading this thread. Having considered the hypothesis for about a millisecond, I dismiss it out of hand. I do not take seriously the idea that I imagined the existence of the study or this thread discussing it.
As a general rule, I'm not much interested in such tiny effects as it reports, however astounding it would be if they were real, because the smaller they are, the easier it is for some tiny flaw in the experiment to have produced it by mundane means. The p values mean nothing more than "something is going on", and excluding all mundane explanations will take a lot of work. I have not read the paper to see how much of this work they have done already.
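To illustrate why small effects are so fragile (a toy simulation, not a claim about this particular paper): a mundane 1% bias somewhere in the setup, given enough trials, produces a very impressive-looking result by entirely ordinary means.

```python
import random

# Null "psi" experiment with a mundane flaw: the stimulus generator is
# slightly biased (51% rather than 50%). All numbers invented.
random.seed(1)

n = 20000
hits = sum(random.random() < 0.51 for _ in range(n))

z = (hits - n * 0.5) / (n * 0.25) ** 0.5
print(f"hit rate {hits / n:.3f}, z ≈ {z:.1f}")
# z comes out near 3, i.e. "highly significant", from nothing but a
# harmless-looking 1% systematic error.
```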
What it would take to believe in Omega is irrelevant to Omega problems. The point of such a problem is to discuss it conditional on the assumptions that Omega exists as described and that the people in the story know this. Within the fiction, certain things are true and are believed to be true by everyone involved, and the question is not "how come?" but "what next?"
What it would take to believe in Omega is irrelevant to Omega problems.
But it is relevant to the argument about dismissing hypotheticals as only occurring when you're insane or dreaming, and so trying to sanely work your way through such a problem has no use for you, because you'll never apply any of this reasoning in any sane situation.
The main post made the point that
it seems that a decision theory that adjusts itself to give the right answer in insane situations is not going down the right track.
and this is the counterpoint: that there are ways of checking whether the situation's insane, and ways of assuring yourself that the situation is almost certainly sane. Saying that the point of the Omega problems is to discuss it conditional on it being truthful doesn't help when the author is saying that the "whole point of the Omega problems" is only going to be of use to you when you're insane.
Being completely simulated by an external party is an unrealistic scenario for a human, but a very realistic one for an artificial intelligence. I always assumed that was one of the primary reasons for LW's fascination with Omega problems.
Also, not all Omega problems are equal. As has been pointed out a bazillion times, Newcomb's Paradox works just as well if you only assume a good guesser with a consistent better-than-even track record (and indeed, IMO, should be phrased like this from the start, sacrificing simplicity for the sake of conceptual hygiene), so insanity considerations are irrelevant. On the flip side, Counterfactual Mugging is "solved" by earning the best potential benefit right now as you answer the question, so the likelihood of Omega does have a certain role to play.
Being completely simulated by an external party is an unrealistic scenario for a human, but a very realistic one for an artificial intelligence
Being completely simulated by an external party is a realistic scenario for a human, given that an artificial intelligence exists. This might also be part of the fascination.
Saying that the point of the Omega problems is to discuss it conditional on it being truthful doesn't help when the author is saying that the "whole point of the Omega problems" is only going to be of use to you when you're insane.
I read Omega as being like the ideal frictionless pucks, point masses, and infinitely flexible and inextensible cords of textbook physics problems. To object that such materials do not exist, or to speculate about the advances in technology that must have happened to produce them is to miss the point. These thought experiments are works of fiction and need not have a past, except where it is relevant within the terms of the experiment.
Yes. But there are people on LW seriously claiming to assign a nonzero (or non-zero-plus-epsilon) probability of Omega - a philosophical abstraction for the purpose of thought experiments on edge cases - manifesting in the real world. At that point, they've arguably thought themselves less instrumentally functional.
there are people on LW seriously claiming to assign a nonzero probability of Omega - a philosophical abstraction for the purpose of thought experiments on edge cases - manifesting in the real world.
Being charitable, they assign a non-zero probability to the possibility of something coming into existence that is powerful enough to have the kinds of powers that our fictional Omega has. This is subtly different from believing the abstraction itself will come to life.
The problem is that Omega is by definition a hypothetical construct for purposes of exploring philosophical edge cases. We're not talking about any reasonable real-world phenomenon.
Let's taboo Omega. What are you actually describing happening here? What is the entity that can do these things, in your experience? I don't believe your first thought would be "gosh, it turns out the philosophical construct Omega is real." You'd think of the entity as a human. What characteristics would you ascribe to this person?
e.g. A rich and powerful man called O'Mega (of, say, Buffett or Soros levels of wealth and fame - you know this guy is real, very smart and ridiculously more successful than you) shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. O'Mega shows you that he has put $1,000 in box B. O'Mega says that if he thinks you will take box A only, he will have put $1,000,000 in it (he does not show you). Otherwise he will have left it empty. O'Mega says he has played this game many times, and says he has never been wrong in his predictions about whether someone will take both boxes or not.
Would the most useful response in this real-world situation be: 1. Take both boxes. 2. Take box A. 3. Walk away, not taking any money, because you don't understand his wacky game and want no part of it, before going home to hit the Internet and tell Reddit and LessWrong about it. 4. Invent timeless decision theory?
Would the most useful response in this real-world situation be: 1. Take both boxes. 2. Take box A. 3. Walk away, not taking any money, because you don't understand his wacky game and want no part of it, before going home to hit the Internet and tell Reddit and LessWrong about it. 4. Invent timeless decision theory?
I think the first thing I would do is ask what he'll give me if he is wrong. ;-)
(Rationale: my first expectation is that Mr. O'Mega likes to mess with people's minds, and sees it as worth risking $1000 just to watch people take an empty box and get pissed at him, or to crow in delight when you take both boxes and feel like you lost a million that you never would've had in the first place. So, absent any independent way to determine the odds that he's playing the game sincerely, I at least want a consolation prize for the "he's just messing with me" case.)
Well, the situation that the post was discussing is not Newcomb's problem, it's counterfactual mugging, so how about:
The CEO of your employer Omegacorp schedules a meeting with you. When you enter, there is a coin showing tails in front of him. He tells you that while you were outside his office, he flipped this coin. If it came up heads, he would triple your salary, but if it came up tails he would ask you to take a 10% cut in your pay. Now, he says, arbitrary changes in your pay, even for the better, can't happen without your approval. Given that you desire money, you clearly want to refuse the pay cut. He notes that you would have accepted either outcome if you'd been given the choice to take the bet before the coin was flipped, and that the only reason he didn't let you into the office to watch the flip (or pre-commit to agreeing) was that company regulations prohibit gambling with employees' salaries.
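The pre-flip arithmetic behind the CEO's point, as a trivial sketch (salary normalized to 1; the triple and the 10% cut are from the scenario above):

```python
salary = 1.0

# Expected salary if you're the sort of employee who accepts the bet:
ev_accept = 0.5 * (3.0 * salary) + 0.5 * (0.9 * salary)   # 1.95
# Expected salary if you're the sort who always refuses:
ev_refuse = 0.5 * salary + 0.5 * salary                   # 1.00

print("accept:", ev_accept)
print("refuse:", ev_refuse)
# Before the flip, accepting is clearly better; the puzzle is whether that
# reasoning still binds you once the coin has already come up tails.
```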
In this case you ought to consider the possibility you are dreaming or insane. Response 3 makes a lot of sense here, even down to the Reddit and LessWrong parts.
In this case you ought to consider the possibility you are dreaming or insane.
Before that I'd consider the possibility that the CEO is not entirely sound.
Has anyone actually attempted a counterfactual mugging at a LessWrong meetup?
I think you're right to conclude you're insane in the case of Omega. It sufficiently parallels traditional delusions, and it doesn't even make sense if what you see and hear is true (why, if this super-intelligence exists, is it offering me weird bets to test game-theory paradoxes?). In any case the thought experiments just assume Omega is honest; in real life you would require proof before you started handing him money.
The psi thing seems altogether different to me. People don't hallucinate scientific studies; it just doesn't fit any known pattern of delusion. Moreover, the hypothesis, while contrary to our current understanding of the world, isn't nearly as a priori implausible as an honest and omniscient AI appearing at your front door.
I think you're right to conclude you're insane in the case of Omega.
Ouch. Once LW members start going insane at an elevated rate, we pretty much know what we're going to hallucinate, so all that decision theory stuff is going to become really useful.
On a minimally similar note, while I acknowledge that it would be rational for me, right now, to sincerely intend to pay should I run into a counterfactual mugger, I currently have no intention to do so on the basis that
sum{for k=0 to infinity} of (k * P(an event sufficiently similar to CM with potential payoff k happens to me)) <= (warm and fuzzy feeling I get by not going with the other sheeple [*insert teenager contrarian laugh*])
So far as I know, seeing evidence that supports an unlikely general hypothesis doesn't match any standard sort of insanity.
I don't know, "evidence that supports an unlikely general hypothesis" is an awfully large set of potential qualia / symptoms.
In centuries past (and not so past), religious people have gone through highly detailed and 'realistic' experiences that fit nicely into mainstream theology and were taken pretty much at face value (see e.g. St. Marguerite Alacoque who, in between masochistic frenzies of licking bodily fluids at Jesus' command, also had more marketable visions which were accepted as a major element of modern worship).
Give the same brain malfunction to an SIAI member and I wouldn't be surprised if they woke up to have a very realistic and believable meeting with Omega; and I for one would be a few orders of magnitude more likely to believe in an Omega hypothesis than in the JHWH hypothesis.
I agree that thinking you've met Omega would be a symptom of insanity (or possibly extreme gullibility and having run into a practical joker). Thinking that you've seen strong experimental evidence for psi doesn't seem like a normal hallucination, though it might be what Korzybski called unsanity-- the usual sort of jumping to conclusions.
You might be interested in this New Scientist article: Evidence that we can see the future to be published
Thinking about two separate problems has caused me to stumble onto another, deeper problem. The first is psychic powers - what evidence would convince you to believe in psychic powers? The second is the counterfactual mugging problem - what would you do when presented with a situation where a choice will hurt you in your future but benefit you in a future that never happened and never will happen to the you making the decision?
Seen as a simple two-choice problem, there are some obvious answers: "Well, he passed test X, Y, and Z, so they must be psychic." "Well, he passed test X, Y, and Z, so that means I need to come up with more tests to know if they're psychic." "Well, if I'm convinced Omega is genuine, then I'll pay him $100, because I want to be the sort of person that he rewards so any mes in alternate universes are better off." "Well, even though I'm convinced Omega is genuine, I know I won't benefit from paying him. Sorry, alternate universe mes that I don't believe exist!"
I think the correct choice is the third option- I have either been tricked or gone insane.1 I probably ought to run away, then ask someone who I have more reason to believe is non-hallucinatory for directions to a mental hospital.
The math behind this is easy- I have prior probabilities that I am gullible (low), insane (very low), and that psychics / Omega exist (very, very, very low). When I see that the result of test X, Y, and Z suggests someone is psychic, or see the appearance of an Omega who possesses great wealth and predictive ability, that is generally evidence for all three possibilities. I can imagine evidence which is counter-evidence for the first but evidence for the second two, but I can't imagine the existence of evidence consistent with the axioms of probability which increases the possibility of magic (of the normal or sufficiently advanced technology kind) to higher than the probability of insanity.2
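The math above, sketched with loudly invented priors and likelihoods (normalizing over just the two hypotheses for simplicity):

```python
# All numbers invented to show the structure of the argument.
p_insane = 1e-4   # prior: I am insane
p_omega = 1e-9    # prior: an Omega exists and visits me

# Likelihood of the experience "I see Omega" under each hypothesis; the
# key premise is that a malfunctioning brain produces this experience at
# least as readily as reality does.
p_see_given_insane = 0.01
p_see_given_omega = 1.0

joint_insane = p_insane * p_see_given_insane
joint_omega = p_omega * p_see_given_omega

posterior_omega = joint_omega / (joint_omega + joint_insane)
print(f"P(Omega | I see Omega) ≈ {posterior_omega:.6f}")
# ≈ 0.001: the observation raises Omega's probability enormously, yet
# insanity still dominates, which is exactly the point being made here.
```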
This result is shocking and unpleasant, though - I have decided some things are literally unbelievable, because of my choice of priors. P(Omega exists | I see Omega) << 1 by definition, because any evidence for the existence of Omega is at least as strong evidence for the non-existence of Omega, since it's evidence that I'm hallucinating! We can't even be rescued by "everyone else agrees with you that Omega exists", because the potential point of failure is my brain, which I need to trust to process any evidence. It would be nice to be a skeptic who is able to adapt to the truth, regardless of what it is, but there seems to be a boundary to my beliefs created by my experience of the human condition. Some hypotheses, once they fall behind, simply cannot catch up with other competing hypotheses.
That is the primary consolation: this isn't simple dogmatism. Those priors are the posteriors of decades of experience in a world without evidence for psychics or Omegas but where gullibility and insanity are common- the preponderance of the evidence is already behind gullibility or insanity as a more likely hypothesis than a genuine visitation of Omega or manifestation of psychic powers. If we lived in a world where Omegas popped by from time to time, paying them $100 on a tails result would be sensible. Instead, we live in a world where people often find themselves with different perceptions from everyone else, and we have good reason to believe their data is simply wrong.
I worry this is an engineering answer to a philosophical problem- but it seems that a decision theory that adjusts itself to give the right answer in insane situations is not going down the right track. Generally, a paradox is a confusion in terms, and nothing more- if there is an engineering sense in which your terms are well-defined and the paradox doesn't exist, that's the optimal result.
I don't offer any advice for what to do if you conclude you're insane besides "put some effort into seeking help," because that doesn't seem to me to be a valuable question to ponder (I hope to never face it myself, and don't expect significant benefits from a better answer). "How quickly should I get over the fact that I'm probably insane and start realizing that Narnia is awesome?" does not seem like a deep question about rationality or decision theory.
I also want to note this is only a dismissal of acausal paradoxes. Causal problems like Newcomb's Problem are generally things you could face while sane, keeping in mind that you can't tell the difference between an acausal Newcomb's Problem (where Omega has already filled or not filled the box and left it alone) and a causal Newcomb's Problem (where the entity offering the choice has rigged it so selecting both box A and B obliterates the money in box B before you can open it). Indeed, the only trick to Newcomb's Problem seems to be sleight of hand- the causal nature of the situation is described as acausal because of the introduction of a perfect predictor and that description is the source of confusion.
1- Or am dreaming. I'm going to wrap that into being insane- it fits the same basic criteria (perceptions don't match external reality) but the response is somewhat different (I'm going to try and enjoy the ride / wake up from the nightmare rather than find a mental hospital).
2- I should note that I'm not saying that the elderly, when presented with the internet, should conclude they've gone insane. I'm saying that when a genie comes out of a bottle, you look at the situation surrounding it, not its introduction- "Hi, I'm a FAI from another galaxy and have I got a deal for you!" shouldn't be convincing but "US Robotics has just built a FAI and collected tremendous wealth from financial manipulation" could be, and the standard "am I dreaming?" diagnostics seem like they would be valuable, but "am I insane?" diagnostics are harder to calibrate.
EDIT- Thanks to Eugene_Nier, you can read Eliezer's take on a similar issue here. His Jefferson quote is particularly striking.