Utility, probability and false beliefs
A putative new idea for AI control; index here.
This is part of the process of rigorising and formalising past ideas.
Paul Christiano recently asked why I used utility changes, rather than probability changes, to have an AI believe (or act as if it believed) false things. While investigating that, I developed several different methods for achieving the belief changes that we desired. This post analyses these methods.
Different models of forced beliefs
Let x and ¬x refer to the future outcome of a binary random variable X (write P(x) as a shorthand for P(X=x), and so on). Assume that we want P(x):P(¬x) to be in the 1:λ ratio for some λ (since the ratio is all that matters, λ=∞ is valid, meaning P(x)=0). Assume that we have an agent, who has utility u, has seen past evidence e, and wishes to assess the expected utility of their action a.
Typically, for expected utility, we sum over the possible worlds. In practice, we almost always sum over sets of possible worlds, the sets determined by some key features of interest. In assessing the quality of health interventions, for instance, we do not carefully and separately treat each possible position of atoms in the sun. Thus let V be the set of variables or values we care about, and v a possible value vector V can take. As usual, we'll write P(v) as a shorthand for P(V=v). The utility function u assigns utilities to possible v's.
One of the advantages of this approach is that it can avoid many issues of conditionals like P(A|B) when P(B)=0.
The first obvious idea is to condition on x and ¬x:
- (1) Σv u(v)(P(v|x,e,a)+λP(v|¬x,e,a))
The second one is to use intersections rather than conditionals (as in this post):
- (2) Σv u(v)(P(v,x|e,a)+λP(v,¬x|e,a))
Finally, imagine that we have a set of variables H, that "screen off" the effects of e and a, up until X. Let h be a set of values H can take. Thus P(x|h,e,a)=P(x|h). One could see H as the full set of possible pre-X histories, but it could be much smaller - maybe just the local environment around X. This gives a third definition:
- (3) Σv Σh u(v)(P(v|h,x,e,a)+λP(v|h,¬x,e,a))P(h|e,a)
Changing and unchangeable P(x)
An important thing to note is that all three definitions are equivalent for fixed P(x), up to changes of λ. The equivalence of (2) and (1) derives from the fact that Σv u(v)(P(v,x|e,a)+λP(v,¬x|e,a)) = Σv u(v)(P(x)P(v|x,e,a)+λP(¬x)P(v|¬x,e,a)) (we write P(x) rather than P(x|e,a) since the probability of x is fixed). Thus a type (2) agent with λ is equivalent to a type (1) agent with λ'=λP(¬x)/P(x).
Similarly, P(v|h,x,e,a)=P(v,h,x|e,a)/(P(x|h,e,a)P(h|e,a)). Since P(x|h,e,a)=P(x), equation (3) reduces to Σv Σh u(v)(P(v,h,x|e,a)/P(x)+λP(v,h,¬x|e,a)/P(¬x)). Summing over h, this becomes Σv u(v)(P(v,x|e,a)/P(x)+λP(v,¬x|e,a)/P(¬x))=Σv u(v)(P(v|x,e,a)+λP(v|¬x,e,a)), ie the same as (1).
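The (1)/(2) equivalence for fixed P(x) can be checked numerically. Below is a minimal sketch in a two-value toy world; the distributions, utilities and λ are invented for illustration, not taken from the post:

```python
# Toy world: V in {0,1}, X boolean; evidence e and action a are folded into
# these fixed numbers. All values are illustrative.
P_x = 0.5                                  # P(X = x), fixed
P_v1_given = {True: 0.9, False: 0.2}       # P(V=1 | x) and P(V=1 | not-x)
u = {0: 0.0, 1: 1.0}                       # utility over values of V
lam = 3.0

def P_joint(v, x):
    p1 = P_v1_given[x]
    return (P_x if x else 1 - P_x) * (p1 if v == 1 else 1 - p1)

def P_cond(v, x):
    return P_joint(v, x) / (P_x if x else 1 - P_x)

def eu1(l):   # definition (1): conditionals
    return sum(u[v] * (P_cond(v, True) + l * P_cond(v, False)) for v in (0, 1))

def eu2(l):   # definition (2): intersections
    return sum(u[v] * (P_joint(v, True) + l * P_joint(v, False)) for v in (0, 1))

# A type (2) agent with lam scores like a type (1) agent with
# lam' = lam * P(not-x)/P(x), up to an overall factor of P(x):
lam_prime = lam * (1 - P_x) / P_x
assert abs(eu2(lam) - P_x * eu1(lam_prime)) < 1e-12
```

With these particular numbers eu2(3.0) equals 0.75, matching P(x) times eu1 at the rescaled λ'.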
What about non-constant x? Let c(x) and c(¬x) be two contracts that pay out under x and ¬x, respectively. If the utility u is defined as 1 if a payout is received (and 0 otherwise), it's clear that both agent (1) and agent (3) assess c(x) as having an expected utility of 1 while c(¬x) has an expected utility of λ. This assessment is unchanging, whatever the probability of x. Therefore agents (1) and (3), in effect, see the odds of x as being a constant ratio 1:λ.
Agent (2), in contrast, gets a one-off artificial 1:λ update to the odds of x and then proceeds to update normally. Suppose that X is a coin toss that the agent believes is fair, having extensively observed the coin. Then it will believe that the odds are 1:λ. Suppose instead that it observes the coin to have a λ:1 odds ratio; then it will believe the true odds are 1:1. It will be accurate, with a 1:λ ratio added on.
The effects of this percolate backwards in time from X. Suppose that X was to be determined by the toss of one of two unfair coins, one with odds ε:1 and one with odds 1:ε. The agent would assess the odds of the first coin being used rather than the second as around 1:λ. This update would extend to the process of choosing the coins, and to anything that depended on it. Agent (1) is similar, though its update rule always assumes the odds of x:¬x are fixed; thus any information about the process of coin selection is interpreted as a change in the probability of that process, not a change in the probability of the outcome.
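Agent (2)'s behaviour can be sketched as an ordinary Bayesian learner with a one-off 1:λ shift bolted onto its odds for x. A hypothetical illustration; the coin model, the Laplace-smoothed estimator and the numbers are all invented:

```python
# Agent (2): learn the coin's odds normally, then layer a fixed 1:lam
# factor on top. Hypothetical model; all numbers invented.
lam = 4.0

def estimated_odds_heads(flips, pseudo=1.0):
    # Laplace-smoothed estimate of the coin's heads:tails odds from data.
    heads = flips.count("H")
    tails = flips.count("T")
    return (heads + pseudo) / (tails + pseudo)

flips = ["H", "T"] * 500                     # extensive evidence of a fair coin
learned_odds = estimated_odds_heads(flips)   # exactly 1.0 here
agent2_odds = learned_odds / lam             # the artificial 1:lam shift

print(learned_odds, agent2_odds)
```

The agent's estimate tracks the evidence accurately (here 1:1), with the fixed 1:λ factor layered on top (here giving odds of 0.25, i.e. 1:4).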
Agent (3), in contrast, is completely different. It assesses the probability of H=h objectively, but then assumes that the odds of x and ¬x, given any h, are 1:λ. Thus if given updates about the probability of which coin is used, it will assess those updates objectively, but then assume that both coins are "really" giving 1:λ odds. It cuts off the update process at h, thus ensuring that it is "incorrect" only about x and its consequences, not its pre-h causes.
Utility and probability: assessing goal stability
Agents with unstable goals are likely to evolve towards being (equivalent to) expected utility maximisers. The converse is more complicated, but we'll assume here that an agent's goal is stable if it is an expected utility maximiser for some probability distribution.
Which one? I've tended to shy away from changing the probability, preferring to change the utility instead. If we normalise the probabilities in equation (2) (dividing by P(x)+λP(¬x)), it becomes a u-maximiser with a biased probability distribution. Alternatively, if we define u'(v,x)=u(v) and u'(v,¬x)=λu(v), then it is a u'-maximiser with an unmodified probability distribution. Since all agents are equivalent for fixed P(x), we can see that in that case, all agents can be seen as expected utility maximisers with the standard probability distribution.
Paul questioned whether the difference was relevant. I preferred the unmodified probability distribution - maybe the agent uses the distribution for induction, maybe having false probability beliefs will interfere with AI self-improvement, or maybe agents with standard probability distributions are easier to make corrigible - but for agent (2) the difference seems to be arguably a matter of taste.
Note that though agent (2) is stable, its definition is not translation invariant in u. If we add c to u, we add c(P(x|e,a)+λP(¬x|e,a)) to the expectation of u'. Thus, if the agent can affect the value of P(x) through its actions, different constants c likely give different behaviours.
Agent (1) is different. Except for the cases λ=0 and λ=∞, the agent cannot be an expected utility maximiser. To see this, notice that any update about a process that could change the probability of x gets reinterpreted as an update on the probability of that process. If we have the ε:1 and 1:ε coins, then any update about their respective probabilities of being used gets essentially ignored (as long as the evidence that the coins are biased is much stronger than the evidence as to which coin is used).
In the cases λ=0 and λ=∞, though, agent (1) is a u-maximiser that uses the probability distribution that assumes x or ¬x is certain, respectively. This is the main point of agent (1) - providing a simple maximiser for those cases.
What about agent (3)? Define u' by: u'(v,h,x)=u(v)/P(x|h), and u'(v,h,¬x)=λu(v)/P(¬x|h). Then consider the u'-maximiser:
- (4) Σv Σh u'(v,h,x)P(v,h,x|e,a)+u'(v,h,¬x)P(v,h,¬x|e,a)
Now P(v,h,x|e,a)=P(v|h,x,e,a)P(x|h,e,a)P(h|e,a). Because of the screening off assumptions, the middle term is the constant P(x|h). Multiplying this by u'(v,h,x)=u(v)/P(x|h) gives u(v)P(v|h,x,e,a)P(h|e,a). Similarly, the second term becomes λu(v)P(v|h,¬x,e,a)P(h|e,a). Thus a u'-maximiser, with the standard probability distribution, is the same as agent (3), thus proving the stability of that agent type.
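The algebra above can also be verified numerically: in a toy model where H screens off X, the u'-maximiser (4) and agent (3) assign identical scores. All distributions and numbers below are invented for illustration:

```python
from itertools import product

# Toy model: V in {0,1}, H in {0,1}, X boolean; H screens off X, so
# P(x|h,e,a) = P(x|h). All numbers illustrative.
P_h = {0: 0.6, 1: 0.4}
P_x_given_h = {0: 0.3, 1: 0.8}                       # P(x|h)
P_v1_given_hx = {(0, True): 0.9, (0, False): 0.2,
                 (1, True): 0.7, (1, False): 0.4}    # P(V=1 | h, x)
u = {0: 0.0, 1: 1.0}
lam = 2.0

def P_v(v, h, x):
    p1 = P_v1_given_hx[(h, x)]
    return p1 if v == 1 else 1 - p1

def Px(h, x):
    return P_x_given_h[h] if x else 1 - P_x_given_h[h]

# Agent (3): sum_v sum_h u(v) (P(v|h,x) + lam P(v|h,not-x)) P(h)
eu3 = sum(u[v] * (P_v(v, h, True) + lam * P_v(v, h, False)) * P_h[h]
          for v, h in product((0, 1), (0, 1)))

# Agent (4): u'(v,h,x) = u(v)/P(x|h), u'(v,h,not-x) = lam u(v)/P(not-x|h),
# expectation taken under the unmodified joint P(v,h,x).
eu4 = sum((u[v] / Px(h, x)) * (1.0 if x else lam)
          * P_v(v, h, x) * Px(h, x) * P_h[h]
          for v, h, x in product((0, 1), (0, 1), (True, False)))

assert abs(eu3 - eu4) < 1e-12
```

The division by P(x|h) in u' exactly cancels the P(x|h) factor in the joint, which is the whole of the stability argument.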
Beyond the future: going crazy or staying sane
What happens after the event X has come to pass? In that case, agent (4), the u'-maximiser, will continue as normal. Its behaviour will not be unusual as long as neither λ nor 1/λ is close to 0. The same goes for agent (2).
In contrast, agent (3) will no longer be stable after X, as H no longer screens off evidence after that point. And agent (1) was never stable in the first place; now it denies all the evidence it sees, in order to conclude that impossible events actually happened. But what of those two agents, or of the stable ones if λ or 1/λ were close to 0? In particular, what if λ falls below the probability that the agent is deluded in its observation of X?
In those cases, it's easy to argue that the agents would effectively go insane, believing wild and random things to justify their delusions.
But maybe not, in the end. Suppose that you, as a human, believe an untrue fact - maybe that Kennedy was killed on the 23rd of November rather than the 22nd. Maybe you construct elaborate conspiracy theories to account for the discrepancy. Maybe you posit an early mistake by some reporter that was then picked up and repeated. After a while, you discover that all the evidence you can find points to the 22nd. Thus, even though you believe with utter conviction that the assassination was on the 23rd, you learn to expect that the next piece of evidence will point to the 22nd. You look for the date-changing conspiracy, and never discover anything about it; and thus learn to expect they have covered their tracks so well they can't be detected.
In the end, the expectations of this "insane" agent could come to resemble those of normal agents, as long as there's some possibility of a general explanation of all the normal observations (eg a well-hidden conspiracy) given the incorrect assumption.
Of course, the safer option is just to correct the agent to some sensible goal soon after X.
State-Space of Background Assumptions
[Update]: I received 720+ responses to the survey. Thanks to everyone who helped! I have also concluded the statistical analysis (factor analysis, mediation analysis, clustering and prediction). I have not, however, done the writeup. This may take some time since I just started working. It will be done :) I just wanted to let people know this is the current stage.
Hello everyone!
My name is Andrés Gómez Emilsson, and I'm the former president of the Stanford Transhumanist Association. I just graduated from Stanford with a master's in computational psychology (my undergraduate degree was in Symbolic Systems, the major with the highest LessWronger density at Stanford and possibly of all universities).
I have a request for the LessWrong community: I would like as many of you as possible to fill out this questionnaire I created to help us understand what causes the diversity of values in transhumanism. The purpose of this questionnaire is twofold:
- Characterize the state-space of background assumptions about consciousness
- Evaluate the influence of beliefs about consciousness, as well as personality and activities, in the acquisition of memetic affiliations
The first part is not specific to transhumanism, and it will be useful whether or not the second is fruitful. What do I mean by the state-space of background assumptions? The best way to get a sense of what this would look like is to see the results of a previous study I conducted: State-space of drug effects. There I asked participants to "rate the effects of a drug they have taken" by selecting the degree to which certain phrases describe the effects of the drug. I then conducted factor analysis on the dataset and extracted 6 meaningful factors accounting for more than 50% of the variance. Finally, I mapped the centroid of the responses of each drug in the state-space defined, so that people could visually compare the relative position of all of the substances in a normalized 6-dimensional space.
I don't know what the state-space of background assumptions about consciousness looks like, but hopefully the analysis of the responses to this survey will reveal it.
The second part is specific to transhumanism, and I think it should concern us all. To the extent that we are participating in the historical debate about what the future of humanity should be, it is important for us to know what makes people prefer certain views over others. To give you a fictitious example of a possible effect I might discover: it may turn out that being very extraverted predisposes you to be uninterested in Artificial Intelligence and its implications. If this is the case, we could pinpoint possible sources of bias in certain communities and ideological movements, thereby increasing the chances of making more rational decisions.
The survey is scheduled to be closed in 2 days, on July 30th 2015. That said, I am willing to extend the deadline until August 2nd if I see that the number of LessWrongers answering the questionnaire is not slowing down by the 30th. [July 31st edit: I have extended the deadline until midnight (California time) of August 2nd 2015.]
Thank you all!
Andrés :)
Here are some links about my work in case you are interested and want to know more:
Psychophysics for Psychedelic Research
Psychedelic Perception of Visual Textures
Utility vs Probability: idea synthesis
A putative new idea for AI control; index here.
This post is a synthesis of some of the ideas from utility indifference and false miracles, in an easier-to-follow format that illustrates better what's going on.
Utility scaling
Suppose you have an AI with a utility u and a probability estimate P. There is a certain event X which the AI cannot affect. You wish to change the AI's estimate of the probability of X, by, say, doubling the odds ratio P(X):P(¬X). However, since it is dangerous to give an AI false beliefs (they may not be stable, for one), you instead want to make the AI behave as if it were a u-maximiser with doubled odds ratio.
Assume that the AI is currently deciding between two actions, α and ω. The expected utility of action α decomposes as:
u(α) = P(X)u(α|X) + P(¬X)u(α|¬X).
The utility of action ω is defined similarly, and the expected gain (or loss) of utility by choosing α over ω is:
u(α)-u(ω) = P(X)(u(α|X)-u(ω|X)) + P(¬X)(u(α|¬X)-u(ω|¬X)).
If we were to double the odds ratio, the expected utility gain becomes:
u(α)-u(ω) = (2P(X)(u(α|X)-u(ω|X)) + P(¬X)(u(α|¬X)-u(ω|¬X)))/Ω, (1)
for some normalisation constant Ω = 2P(X)+P(¬X), independent of α and ω.
We can reproduce exactly the same effect by instead replacing u with u', such that
- u'(·|X)=2u(·|X)
- u'(·|¬X)=u(·|¬X)
Then:
u'(α)-u'(ω) = P(X)(u'(α|X)-u'(ω|X)) + P(¬X)(u'(α|¬X)-u'(ω|¬X)),
= 2P(X)(u(α|X)-u(ω|X)) + P(¬X)(u(α|¬X)-u(ω|¬X)). (2)
This, up to an unimportant constant, is the same equation as (1). Thus we can accomplish, via utility manipulation, exactly the same effect on the AI's behaviour as by changing its probability estimates.
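The equivalence of equations (1) and (2) is easy to check numerically. A minimal sketch, with invented conditional utilities for the two actions α and ω:

```python
# Doubling the odds ratio P(X):P(not-X), or doubling u on the X branch,
# produces the same utility gap between actions (up to the constant Omega).
# All numbers are illustrative.
P_X = 0.3
u = {("alpha", True): 10.0, ("alpha", False): 2.0,   # u(a|X), u(a|not-X)
     ("omega", True): 4.0,  ("omega", False): 6.0}

def eu_doubled_odds(a):
    # Equation (1): doubled odds, renormalised by Omega = 2P(X) + P(not-X)
    omega = 2 * P_X + (1 - P_X)
    return (2 * P_X * u[(a, True)] + (1 - P_X) * u[(a, False)]) / omega

def eu_scaled_utility(a):
    # Equation (2): u'(.|X) = 2u(.|X), u'(.|not-X) = u(.|not-X),
    # with the probabilities left unmodified
    return P_X * 2 * u[(a, True)] + (1 - P_X) * u[(a, False)]

gap1 = eu_doubled_odds("alpha") - eu_doubled_odds("omega")
gap2 = eu_scaled_utility("alpha") - eu_scaled_utility("omega")
omega_const = 2 * P_X + (1 - P_X)
assert abs(gap1 * omega_const - gap2) < 1e-12   # (1) times Omega equals (2)
```

Since Ω is independent of the action, both agents rank α and ω identically.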
Notice that we could also have defined
- u'(·|X)=u(·|X)
- u'(·|¬X)=(1/2)u(·|¬X)
This is just the same u', scaled.
The utility indifference and false miracles approaches were just special cases of this, where the odds ratio was sent to infinity/zero by multiplying by zero. But the general result is that one can start with an AI with utility/probability estimate pair (u,P) and map it to an AI with pair (u',P) which behaves similarly to (u,P'). Changes in probability can be replicated as changes in utility.
Utility translating
In the previous section, we multiplied certain utilities by two. In doing so, we implicitly used the zero point of u. But utility is invariant under translation, so this zero point is not actually significant.
It turns out that we don't need to care about this - any zero will do, what matters simply is that the spread between options is doubled in the X world but not in the ¬X one.
But that relies on the AI being unable to affect the probability of X and ¬X itself. If the AI has an action that will increase (or decrease) P(X), then it becomes very important where we set the zero before multiplying. Setting the zero in a different place is isomorphic with adding a constant to the X world and not the ¬X world (or vice versa). Obviously this will greatly affect the AI's preferences between X and ¬X.
One way of preventing the AI from affecting X is to set this constant so that u'(X)=u'(¬X), in expectation. Then the AI has no preferences between the two situations, and will not seek to boost one over the other. However, note that u'(X) is an expected utility calculation. Therefore:
- Choosing the constant so that u'(X)=u'(¬X) requires accessing the AI's probability estimate P for various worlds; it cannot be done from outside, by multiplying the utility, as the previous approach could.
- Even if u'(X)=u'(¬X), this does not mean that u'(X|Y)=u'(¬X|Y) for every event Y that could happen before X does. Simple example: X is a coin flip, and Y is the bet of someone on that coin flip, someone the AI doesn't like.
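The claim that the zero point matters once the agent can move P(X) can be sketched numerically. In this invented two-action example (λ, utilities and probabilities are all made up), shifting u by a constant before scaling flips which action the agent prefers:

```python
# Adding a constant c to u contributes c*(P(X|a) + lam*P(not-X|a)) to the
# expectation of u', and that contribution differs between actions that
# move P(X). Illustrative numbers throughout.
lam = 2.0
u = {"push": 10.0, "wait": 4.0}      # u(a), the same in X and not-X worlds
P_X = {"push": 0.9, "wait": 0.1}     # the actions move P(X)

def eu_prime(a, c):
    # u'(.|X) = u + c, u'(.|not-X) = lam * (u + c)
    return P_X[a] * (u[a] + c) + lam * (1 - P_X[a]) * (u[a] + c)

best = {c: max(("push", "wait"), key=lambda a: eu_prime(a, c))
        for c in (0.0, 100.0)}
print(best)   # {0.0: 'push', 100.0: 'wait'}: the constant flips the choice
```

With λ>1, a large constant is worth more in low-P(X) worlds, so the agent starts favouring actions that suppress X, which is exactly the failure mode the zero-point choice must guard against.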
This explains all the complexity of the utility indifference approach, which is essentially trying to decompose possible universes (and adding constants to particular subsets of universes) to ensure that u'(X|Y)=u'(¬X|Y) for any Y that could happen before X does.
Have you changed your mind recently?
Our beliefs aren't just cargo that we carry around. They become part of our personal identity, so much so that we feel hurt if we see someone attacking our beliefs, even if the attacker isn't speaking to us individually. These "beliefs" are not necessarily grand things like moral frameworks and political doctrines, but can also be as inconsequential as an opinion about a song.
This post is for discussing times when you actually changed your mind about something, detaching from the belief that had wrapped itself around you.
Relevant reading: The Importance of Saying "Oops", Making Beliefs Pay Rent
Googling is the first step. Consider adding scholarly searches to your arsenal.
Related to: Scholarship: How to Do It Efficiently
There has been a slightly increased focus on the use of search engines lately. I agree that using Google is an important skill - in fact, I believe that for years I have come across as significantly more knowledgeable than I actually am just by quickly looking for information when I am asked something.
However, there are obviously some types of information which are more accessible by Google and some which are less accessible. For example, distinct characteristics, specific dates of events, etc. are easily googleable1 and you can expect to quickly find accurate information on the topic. On the other hand, if you want to find out more ambiguous things, such as the effect of having more friends on weight, or the negative and positive effects of a substance, then googling might leave you with contradicting results or inaccurate information, or at the very least it will likely take you longer to get to the truth.
I have observed that in the latter case (when the topic is less 'googleable') most people, even those knowledgeable about search engines and 'science', will just stop searching for information after not finding anything on Google, or even before2, unless they are actually willing to devote a lot of time to it. This is where my recommendation comes in - consider doing a scholarly search, like the one provided by Google Scholar.
And, no, I am not suggesting that people should read a bunch of papers on every topic that they discuss. By using some simple heuristics we can easily gain a pretty good picture of the relevant information on a large variety of topics in a few minutes (or less in some cases). The heuristics are as follows:
1. Read only or mainly the abstracts. This is what saves you time while still giving you a lot of information, and it is the key to the most cost-effective way to quickly find information from a scholarly search. Often you won't have immediate access to the paper anyway, but you can almost always read the abstract. And if you follow the other heuristics, you will still be looking at relatively 'accurate' information most of the time. If you are looking for more information and have access to the full paper, the discussion and conclusion sections are usually the second best thing to look at; and if you are unsure about the quality of the study, you should also look at the method section to identify its limitations.3
2. Look at the number of citations for an article. The higher the better. Fewer than 10 citations in most cases means that you can find a better paper.
3. Look at the date of the paper. Often more recent = better. However, you can expect fewer citations for more recent articles, and you need to adjust accordingly. For example, if an article came out in 2013 but has already been cited 5 times, this is probably a good sign. For new articles, the subheuristic I use is to evaluate the 'accuracy' of the article by judging the author's general credibility instead - argument from authority.
4. Meta-analyses/Systematic Reviews are your friend. This is where you can get the most information in the least amount of time!
5. If you cannot find anything relevant fiddle with your search terms in whatever ways you can think of (you usually get better at this over time by learning what search terms give better results).
That's the gist of it. By reading a few abstracts in a minute or two, you can effectively search for information about our scientific knowledge on a subject almost as quickly as searching for specific information on topics that I dubbed googleable. In my experience, scholarly searches on pretty much anything can be really beneficial. Do you believe that drinking beer is bad but drinking wine is good? Search on Google Scholar! Do you think that it is a fact that social interaction is correlated with happiness? Google Scholar it! Sure, some claim X might seem obvious to you, but it doesn't hurt to search on Google Scholar for a minute just to be able to cite a decent study on the topic to the X disbelievers.
This post might not be useful to some people, but it is my belief that scholarly searches are the next step of efficient information seeking after googling, and that most LessWrongers are not utilizing this enough. Hell, I only recently started doing this actively and I still do not do it enough. Furthermore, I fully agree with this comment by gwern:
My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.
A lot of people will be reluctant to start doing scholarly searches because they have barely done any or because they have never done it. I want to tell those people to still give it a try. Start by searching for something easy, maybe something that you already know from LessWrong or from somewhere else. Read a few abstracts; if you do not understand a given abstract, try finding other papers on the topic - some authors adopt a more technical style of writing, others focus mainly on statistics, etc., but you should still be able to find some good information if you read multiple abstracts and identify the main points. If you cannot find anything relevant, then move on and try another topic.
P.S. In my opinion, when you are comfortable enough to have scholarly searches as part of your arsenal, you will rarely have days when there is nothing to check. If you are doing 1 scholarly search per month, for example, you are most probably not fully utilizing this skill.
1. By googleable I mean that the search terms are google friendly - you can relatively easily and quickly find relevant and accurate information.
2. If the people in question have developed a sense for what type of information is more accessible by google then they might not even try to google the less accessible-type things.
3. If you want to get a better and more accurate view on the topic in question you should read the full paper. The heuristic of mainly focusing on abstracts is cost-effective but it invariably results in a loss of information.
Subsuming Purpose, Part 1
Summary:
The purpose of this entry is to establish the existence of local equilibria which introduce deviations into an ends-driven organization (an organization whose primary focus is a particular purpose), transforming it into a means-driven organization (an organization whose primary focus is the means to achieve its purpose, rather than the purpose itself).
Subsuming Purpose, Part 1
Imagine you run a charity, and you have two star employees; one shares your goals without any emphasis on a means, the other believes in the cause but believes firmly in fundraising as the best means to that end. Both contribute to your charity, but the fundraiser does more good overall. The fundraiser enables your organization. Who do you set as your successor?
Who will your successor choose as their successor?
The person who believes in the purpose will choose the best person for achieving that purpose. The person who believes in a specific means to achieve those ends will choose the best person for those means. The means will subsume the ends. A person who values a specific means, say, fundraising, is more likely to promote fellow fundraisers; he values their contributions more. Specialists, and in particular the lines of thinking which lead to specialization, create rigidity in the organization.
Suppose that you choose the fundraiser. The fundraiser, by dint of having chosen to specialize in fundraising, probably believes that fundraising is more important than the alternative means of supporting the organization: he will probably choose to promote other effective fundraisers over their alternatives.
And now people who don't agree that fundraising is the best means will see their charity becoming increasingly subverted: fundraising is rewarded over the charitable purpose of the organization. They will leave, or protest; if their protests aren't heeded, for example because fundraisers who believe in fundraising already run the organization, they may be marginalized. Such individuals may be selected out, either self-selectively, or by explicit opposition from a management wary of people likely to cause trouble for them in the future.
Generalized:
In the example above, I made one particular assumption: that somebody who possesses some choice-driven characteristic X (competency at fundraising in the example) is more likely to believe that X is important, and will favor X over alternative characteristics. It's not necessary that this is always the case; a generalist may also possess some characteristic X. It's only necessary that p(XY) > p(X!Y), where X is possession of characteristic X, and Y is belief that X is an important characteristic to have (belief that fundraising is the most valuable pursuit for the charitable organization in the example).
Any preference, once established, which follows a tendency such that p(XY) > p(X!Y) will cement itself into the organization once given a foothold; those who are selected based on X will also have, on average, a preference for X. They will in turn select individuals with X.
The danger of organizational specialization, as opposed to individual specialization, arises when the preference for X extends to a preference for that preference: when, given two people with characteristic X, those who also have a preference for X (that is, characteristic Y) are preferred over those who do not. This is the point at which selecting people for X and Y becomes a runaway process, one which may subsume the original purpose of the organization.
When those who do not have a preference for X begin to believe that X has already overtaken the original purpose of the organization, the meaningful possibilities are that they will either fight it or leave. If they simply leave, they harden the preference for X; there are fewer individuals in the organization who oppose Y. If they fight it and win, they've won for a day; an equilibrium has not yet been reached. If they fight it and lose, they establish a preference for preference; people who disagree with the orthodoxy of X begin to be seen as potential conflict creators in the organization, and just as problematically, revealing the preference for X may alter the decisions of those who might enter the organization otherwise; a non-Y individual may choose another organization which better suits their preferences.
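The core selection dynamic can be sketched as a tiny Markov chain: if p(Y|X) > p(Y|¬X) and Y-believing leaders draw successors from the X-pool while others draw from the whole pool, the share of Y-believing leaders drifts above its baseline. All the probabilities below are invented for illustration:

```python
# Toy Markov-chain sketch of the selection dynamic. A leader who believes Y
# promotes a candidate with characteristic X; other leaders pick from the
# whole pool. Because p(Y|X) > p(Y|not-X), belief Y gets over-represented
# at the top. All numbers illustrative.
p_Y_given_X = 0.7        # specialists tend to believe in their specialty
p_Y_given_notX = 0.3
p_X = 0.5                # prevalence of X in the candidate pool
p_Y = p_X * p_Y_given_X + (1 - p_X) * p_Y_given_notX   # baseline = 0.5

q = p_Y                  # probability the current leader believes Y
for generation in range(50):
    # Y-leader picks among X-candidates; non-Y leader picks from the pool.
    q = q * p_Y_given_X + (1 - q) * p_Y
print(round(q, 3))       # settles at 0.625 > 0.5 baseline
```

Note this simple chain converges to a fixed point above baseline rather than running away to 1; the full runaway requires the second-order effect described above, where a preference for Y itself becomes a selection criterion.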
Every Cause Wants to be a Cult. Every belief wants to be an orthodoxy. Orthodoxy is a stable equilibrium, the pit surrounding the gently sloped hill of idea diversity.
Mental Clarity; or How to Read Reality Accurately
Hey all - I typed this out to help me understand, well... how to understand things:
Mental clarity is the ability to read reality accurately.
I don't mean being able to look at the complete objective picture of an event, as you don't have any direct access to that. I'm talking about the ability to read the data presented by your subjective experience: thoughts, sights, sounds, etc. Once you get a clear picture of what that data is, you can then go on and use it to build or falsify your ideas about the world.
This post will focus on the "getting a clear picture" part.
I use the word "read" because it's no different from reading a book, or these words. When you read a book, you are actually curious as to what the words are saying. You wouldn't read anything into it that's not there, which would be counterproductive to your understanding.
You just look at the words plainly, and through this your mind automatically recognizes and presents the patterns: the meaning of the sentences, their relation to the topic, the visual imagery associated with them, all of that. If you want to know a truth about reality, just look at it and read what's there.
Want to know what the weather's like? Look outside - read what's going on.
Want to know if the Earth revolves around the Sun, or vice versa? Look at the movement of the planets, read what they're doing, see which theory fits better.
Want to check if your beliefs about the world are correct? Take one, read the reality that the belief tries to correspond to, and see how well they compare.
This is the root of all science and all epiphanies.
But if it's so simple and obvious, why am I talking about it?
It's not something that we as a species often do. For trivial matters, sure, for science too, but not for our strongly-held opinions. Not for the beliefs and positions that shape our self-image, make us feel good/comfortable, or get us approval. Not for our political opinions, religious ideas, moral judgements, and little white lies.
If you were utterly convinced that your wife was faithful - more so, if you liked to think of her in that way - and your friend came along and said she was cheating on you, you'd be reluctant to read reality and check if that's true. Doing so would challenge your comfort and throw you into an unknown world with some potentially massive changes. It would be much more comforting to rationalize why she still might be faithful than to take one easy look at the true information. It would also be more damaging.
Delusion is reading into reality things which aren't there. Telling yourself that everything's fine when it obviously isn't, for example. It's the equivalent of looking at a book about vampires and jumping to the conclusion that it's about wizards.
Sounds insane. You do it all the time. You'll catch yourself if you're willing to read the book of your own thoughts: flowing through your head, in plain view, is a whole mess of opinions and ideas of people, places, and positions you've never even encountered. Crikey!
That mess is incredibly dangerous to have. Being a host to unchecked or false beliefs about the world is like having a faulty map of a terrain: you're bound to get lost or fall off a cliff. Reading the terrain and re-drawing the map accordingly is the only way to accurately know where you're going. Having an accurate map is the only way to achieve your goals.
So you want to develop mental clarity? Be less confused, or more successful? Have a better understanding of the world, the structure of reality, or the accuracy of your ideas?
Just practice the accurate reading of what's going on. Surrender the content of your beliefs to the data gathered by your reading of reality. It's that simple.
It can also be scary, especially when it comes to challenging your "personal" beliefs. It's well worth the fear, however, as a life built on truth won't crumble like one built on fiction.
Truth doesn't crumble.
Stay true.
Further reading:
Stepvhen from Burning true on truth vs. fantasy.
Kevin from Truth Strike on why this skill is important to develop.
Fireplace Delusions [LINK]
Sam Harris, in his recent article called The Fireplace Delusion, tries to make you feel what it's like to have a cached belief irreparably destroyed. Just in case you forgot what your apostasy (if you had one, of course) was like in its early stages.
What are some of the Fireplace Delusions you've come across in your days?
EDIT: WOODSMOKE HEALTH EFFECTS