
Anxiety and Rationality

32 helldalgo 19 January 2016 06:30PM

Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties.  I have, so I thought I’d share my LessWrong-inspired strategies.  This is my first post, so feedback and formatting help are welcome.  

First things first: the techniques developed by this community are not a panacea for mental illness.  They are way more effective than chance and other tactics at reducing normal bias, and I think many mental illnesses are simply cognitive biases that are extreme enough to get noticed.  In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious.  When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted.  I become non-functional, and the error is clear.

Second: the best way to attack anxiety is to do the things that make your anxieties go away.  That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep.  If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington Monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep.  Likewise for techniques that have little to no scientific evidence, but are a good placebo.  A placebo effect is still an effect.

Finally, like all advice, this comes with Implicit Step Zero:  “Have enough executive function to give this a try.”  If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read.  The advice for functioning better is not always identical to the advice for functioning at all.  If there’s interest in an “improving your executive function” post, I’ll write one eventually.  It will be late, because my executive function is not impeccable.

Simple updating is my personal favorite for attacking specific anxieties.  A general sense of impending doom is a very tricky target and does not respond well to reality.  If you can narrow it down to a particular belief, however, you can amass evidence against it. 

Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness.  The distinction between belief and alief is an incredibly useful tool that I integrated as soon as I heard of it.  Learning to make beliefs pay rent is much easier than making harmful aliefs go away.  The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality.  Update accordingly.

The first thing I do is identify the situation and why it’s dysfunctional.  The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety.  Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired.  So I take the anxiety all the way through to its implication.  The algorithm is something like this:

  1. Notice sense of doom
  2. Notice my avoidance behaviors (not opening my email, walking away from my desk)
  3. Ask “What am I afraid of?”
  4. Answer (it's probably silly)
  5. Ask “What do I think will happen?”
  6. Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place)

In the “asking for help” scenario, the answer to “what do I think will happen” is implausible.  It’s extremely unlikely that I’ll get fired for it!  This helps take the gravitas out of the anxiety, but it does not make it go away.*  After (6), it’s usually easy to do an experiment.  If I ask my coworkers for help, will I get fired?  The only way to know is to try. 

…That’s actually not true, of course.  A sense of my environment, my coworkers, and my general competence at work should be enough.  But if it were, we wouldn’t be here, would we?

So I perform the experiment.  And I wait.  When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper.  I label it “didn’t get fired.”  Because again, even if it’s negative, I didn’t get fired. 

This takes a lot of tick marks.  Chopping down the Washington Monument with a spoon, remember?

The tick marks don’t have to be physical.  I prefer them physical, because that makes the “updating” process visual.  I’ve tried making a mental note and it’s not nearly as effective.  Play around with it, though.  If you’re anything like me, you have a lot of anxieties to experiment with.
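For the quantitatively inclined, the tick-mark sheet is a crude Bayesian update, and you can make it literal.  A minimal sketch in Python (the prior counts and replies are invented to mimic the anxious alief):

    # Treat each reply as one observation and track P(fired | asked for help)
    # with a Beta posterior.  The pessimistic prior is the alief talking.
    prior_fired, prior_safe = 5, 1   # hypothetical prior: "I'll probably get fired"
    tick_marks = 0                   # replies received without getting fired
    fired = 0                        # times actually fired

    for reply in ["helpful", "neutral", "negative", "helpful"]:
        tick_marks += 1              # even a negative reply counts: still not fired

    # Posterior mean of the Beta distribution after the observations:
    p_fired = (prior_fired + fired) / (prior_fired + prior_safe + fired + tick_marks)
    print(f"P(fired next time) ~ {p_fired:.2f}")   # 0.50, down from 0.83

With a prior that pessimistic, the estimate falls slowly, which is exactly why it takes so many tick marks.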

Usually, the anxiety starts to dissipate after obtaining several tick marks.  Ideally, one iteration of experiments should solve the problem.  But we aren’t ideal; we’re mentally ill.  Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur.  I occasionally panic when I have to return to work after taking a sick day.  I ask my husband to remind me that I won’t get fired.  I ask him to remind me that he’ll still love me if I do get fired.  If this sounds childish, it’s because it is.  Again: we’re mentally ill.  Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense.  Childish-but-helpful is much better than mature-and-harmful, if you have to choose.

I still have tiny ugh fields around my anxiety triggers.  They don’t really go away.  It’s more like learning not to hit someone you’re angry at.  You notice the impulse, accept it, and move on.  Hopefully, your harmful alief starves to death.

If you perform your experiment and doom does occur, it might not be you.  If you can’t ask your boss for help, it might be your boss.  If you disagree with your spouse and they scream at you for an hour, it might be your spouse.  This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky.  Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result.  This is designed for situations where your alief is obviously silly.  Where you know it’s silly, and need to throw evidence at your brain to internalize it.  It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead). 

 

 

*After using this technique for several months, the anxiety occasionally stops immediately after step 6.

Making My Peace with Belief

14 OrphanWilde 03 December 2015 08:36PM

I grew up in an atheistic household.

Almost needless to say, I was relatively hostile towards religion for most of my early life.  A few things changed that.

First, the apology of a pastor.  A friend of mine was proselytizing at me, and apparently discussed it with his pastor; the pastor apologized to my parents, and explained to my friend he shouldn't be trying to convert people.  My friend apologized to me after considering the matter.  We stayed friends for a little while afterwards, although I left that school, and we lost contact.

I think that was around the time that I realized that religion is, in addition to being a belief system, a way of life, and not necessarily a bad one.

The next was actually South Park's Mormonism episode, which pointed out that a belief system could be desirable on the merits of the way of life it represented, even if the beliefs themselves are stupid.  This tied into Douglas Adams's comment on Feng Shui, that "...if you disregard for a moment the explanation that's actually offered for it, it may be there is something interesting going on" - which is to say, the explanation for the belief is not necessarily the -reason- for the belief, and that stupid beliefs may actually have something useful to offer - which then requires us to ask whether the beliefs are, in fact, stupid.

Which is to say, beliefs may be epistemically irrational while being instrumentally rational.

The next peace I made with belief actually came from quantum physics, and reading about how there were several disparate and apparently contradictory mathematical systems, which all predicted the same thing.  It later transpired that they could all be generalized into the same mathematical system, but I hadn't read that far before the isomorphic nature of truth occurred to me; you can have multiple contradictory interpretations of the same evidence that all predict the same thing.

Up to this point, however, I still regarded beliefs as irrational, at least on an epistemological basis.

The next peace came from experiences living in a house that would have convinced most people that ghosts are real, which I have previously written about here.  I think there are probably good explanations for every individual experience even if I don't know them, but am still somewhat flummoxed by the fact that almost all the bizarre experiences of my life all revolve around the same physical location.  I don't know if I would accept money to live in that house again, which I guess means that I wouldn't put money on the bet that there wasn't something fundamentally odd about the house itself - a quality of the house which I think the term "haunted" accurately conveys, even if its implications are incorrect.

If an AI in a first person shooter dies every time it walks into a green room, and experiences great disutility for death, how many times must it walk into a green room before it decides not to do that anymore?  I'm reasonably confident on a rational level that there was nothing inherently unnatural about that house, nothing beyond explanation, but I still won't "walk into the green room."

That was the point at which I concluded that beliefs can be -rational-.  Disregard for a moment the explanation that's actually offered for them, and just accept the notion that there may be something interesting going on underneath the surface.

If we were to hold scientific beliefs to the same standard we hold religious beliefs - holding the explanation responsible rather than the predictions - scientific beliefs really don't come off looking that good.  The sun isn't the center of the universe; some have called this theory "less wrong" than an earth-centric model of the universe, but that's because the -predictions- are better; the explanation itself is still completely, 100% wrong.

Likewise, if we hold religious beliefs to the same standard we hold scientific beliefs - holding the predictions responsible rather than the explanations - religious beliefs might just come off better than we'd expect.

Min/max goal factoring and belief mapping exercise

-1 Clarity 23 June 2015 05:30AM

Edit 3: Removed description of previous edits and added the following:

This thread used to contain the description of a rationality exercise.

I have removed it and plan to rewrite it better.

I will repost it here, or delete this thread and repost in the discussion.

Thank you.

I Want To Believe: Rational Edition

4 27chaos 18 November 2014 08:00PM

Relevant: http://lesswrong.com/lw/k7h/a_dialogue_on_doublethink/

I would like this conversation to operate under the assumption that there are certain special times when it is instrumentally rational to convince oneself of a proposition whose truth is indeterminate, and when it is epistemically rational as well. The reason I would like this conversation to operate under this assumption is that I believe questioning this assumption makes it more difficult to use doublethink for productive purposes. There are many other places on this website where the ethics or legitimacy of doublethink can be debated, and I am already aware of its dangers, so please don't mention such things here.

I am hoping for some advice. "Wanting to believe" can be both epistemically and instrumentally rational, as in the case of certain self-fulfilling prophecies. If believing that I am capable of winning a competition will cause me to win, believing that I am capable of winning is rational both in the instrumental sense that "rationality is winning" and in the epistemic sense that "rationality is truth".

I used to be quite good at convincing myself to adopt beliefs of this type when they were beneficial. It was essentially automatic, I knew that I had the ability and so applying it was as trivial as remembering its existence. Nowadays, however, I'm almost unable to do this at all, despite what I remember. It's causing me significant difficulties in my personal life.

How can I redevelop my skill at this technique? Practicing will surely help, and I'm practicing right now so therefore I'm improving already. I'll soon have the skill back stronger than ever, I'm quite confident. But are there any tricks or styles of thinking that can make it more controllable? Any mantras or essays that will help my thought to become more fluidly self-directed? Or should I be focused on manipulating my emotional state rather than on initiating a direct cognitive override?

I feel as though the difficulties I've been having become most pronounced when I'm thinking about self-fulfilling prophecies that do not have guarantees of certainty attached. The lower my estimated probability that the self-fulfilling prophecy will work for me, the less able I am to use the self-fulfilling prophecy as a tool, even if the estimated gains from the bet are large. How might I deal with this problem, specifically?
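One way to make that trade-off concrete is to compare expected gains directly.  A toy sketch (every number here is invented for illustration):

    # Expected gain from adopting a belief that raises the chance of
    # success by p_boost, at some psychological cost of self-persuasion.
    def gain_from_adopting(p_base, p_boost, win_value, cost):
        return (p_base + p_boost) * win_value - cost - p_base * win_value

    print(gain_from_adopting(p_base=0.2, p_boost=0.1, win_value=100, cost=5))  # 5.0: worth it
    print(gain_from_adopting(p_base=0.2, p_boost=0.1, win_value=30, cost=5))   # -2.0: not worth it

On these toy numbers, even a prophecy with a modest chance of working is worth adopting when the stakes are high enough, which is exactly the case the question is about.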

Reasons to believe

9 irrational 02 December 2013 05:44AM

I've been thinking recently that I believe in the Theory of Evolution on about the same level as in the Theory of Plate Tectonics. I have grown up being taught that both are true, and I am capable of doing research in either field, or at least reading the literature to examine them for myself. I have not done so in either case, to any reasonable extent.

I am not swayed by the fact that some people consider the former (and not so much the latter) to be controversial, primarily because those people aren't scientists. I tend to be self-congratulatory about this fact, but then I realize that I am not actually interested in examining the evidence: I am essentially taking it on faith (which the creationists are quick to point out). I think I have good Bayesian reasons to take science on faith (rather than, say, mythology that is being offered in its stead), but do I therefore have good reasons to accept a particular well-established scientific theory on faith, or is it incumbent upon me to examine it, if I think its conclusions are important to my life?

In other words, is it epistemologically wrong to rely on an authority that has produced a number of correct statements (that I could and did verify) to be more or less correct in the future? If I think of this problem as a sort of belief network, with a parent node that has causal connections to hundreds of children, I think such a reliance is reasonable, once you establish that the authority is indeed accurate. On the other hand, appeal to authority is probably the most famous fallacy there is.
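To make the belief-network picture concrete, here is a toy numerical version (all counts invented):

    # Model the authority's accuracy as a Beta distribution updated by the
    # claims I did verify, then reuse that accuracy as the prior for a
    # claim I have not examined.
    verified_correct = 50    # hypothetical claims I checked and found correct
    verified_wrong = 1       # hypothetical claims I checked and found wrong

    # Posterior mean accuracy, starting from a uniform Beta(1, 1) prior:
    accuracy = (1 + verified_correct) / (2 + verified_correct + verified_wrong)
    print(f"Estimated accuracy of the authority: {accuracy:.2f}")   # ~0.96

    # For a well-established theory I haven't examined, this is my prior:
    print(f"Prior that an unverified claim is correct: {accuracy:.2f}")

On this picture, trusting a well-tested authority is ordinary updating on a track record, not the fallacious appeal to authority, which ignores the track record entirely.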

Any thoughts? If Eliezer or other people have written on this exact topic, a reference would be appreciated.

[LINK] Bets do not (necessarily) reveal beliefs

12 Cyan 27 May 2013 08:13PM

When does a bet fail to reveal your true beliefs? When it hedges a risk in your portfolio.

If this claim does not immediately strike you as obviously true, you may benefit from reading this post by econblogger Noah Smith. Excerpt:

 

...Alex Tabarrok famously declared that "a bet is a tax on bullshit".

But this idea, attractive as it is, is not quite true. The reason is something that I've decided to call the Fundamental Error of Risk. It's a mistake that most people make (myself often included!), and that an intro finance class spends months correcting. The mistake is looking at the risk and return of single assets instead of total portfolios. Basically, the risk of an asset - which includes a bet! - is based mainly on how that asset relates to other assets in your portfolio.
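To see the mechanics concretely, consider a toy example (all numbers invented): an agent thinks an event only 30% likely, but their income vanishes if it happens.  With even mild risk aversion, an even-odds bet on the event is worth taking anyway, because it hedges the portfolio:

    from math import sqrt

    p_event = 0.3                       # the bettor thinks the event unlikely
    income = {True: 0.0, False: 100.0}  # but their income vanishes if it happens
    stake = 25.0                        # even-odds bet ON the event

    def expected_utility(bet):
        wealth_if_event = income[True] + (stake if bet else 0.0)
        wealth_if_not = income[False] - (stake if bet else 0.0)
        # sqrt utility models mild risk aversion
        return p_event * sqrt(wealth_if_event) + (1 - p_event) * sqrt(wealth_if_not)

    print(expected_utility(bet=False))  # ~7.00
    print(expected_utility(bet=True))   # ~7.56: the "irrational-looking" bet wins

Seen in isolation, taking the bet seems to imply a probability above one half; seen alongside the portfolio, it is just insurance.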

 

Attempting to rescue logical positivism

3 RolfAndreassen 25 April 2013 06:20PM

Very brief recap: The logical positivists said "All truths are experimentally testable". Their critics responded: "If that's true, how did you experimentally test it? And if it's not true, who cares?" Which is a fair criticism. Logical positivism pretty much collapsed as a philosophical position. But it seems to me that a very slight rephrasing might have saved it: "All _beliefs_ are experimentally testable". For if the critic makes the same adjustment, asking "Is that a belief, and if so -" you can interrupt him and say, "No, that's not a belief, that's a definition of what it means to say 'I believe X'."

A definition is not true or false, it is useful or not useful. Why is this definition useful? Because it allows us to distinguish between two classes of declarative statements; the ones that are actual beliefs, and the ones that have the grammatical form of beliefs but are empty of meaningful belief-content.

It seems to me, then, that both the positivists and their critics fell into the trap of confusing 'belief' and 'truth', and that carefully making this distinction might have saved positivism from considerable undeserved mockery.

Falsifiable and non-Falsifiable Ideas

-1 shaih 19 February 2013 02:24AM


I have been talking to some people (a few specific people I thought would benefit from and appreciate it) in my dorm and teaching them rationality. I have been thinking about which skills should be taught first, and that made me think about what skill is most important to me as a rationalist.

I decided to start with the question “What does it mean to be able to test something with an experiment?” which could also mean “What does it mean to be falsifiable?”

To help my point I brought up the thought experiment with a dragon in Carl Sagan’s garage, which goes as follows:

Carl: There is a dragon in my garage
Me: I thought dragons only existed in legends and I want to see for myself
Carl: Sure follow me and have a look
Me: I don’t see a dragon in there
Carl: My dragon is invisible
Me: Let me throw some flour in so I can see where the dragon is by the disruption of the flour 
Carl: My dragon is incorporeal

And so on

The answer that I was trying to bring about was along these lines: if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false.  Conversely, if something has at least one effect that differs depending on whether it is true or false, then I could, at least in theory, test it with an experiment.

This led me to the statement:
If something cannot, even in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.

Anthony (the person I was talking to at the time) started his counterargument by pointing out that an object in a thought experiment cannot be tested for, but still has meaning.

So I revised my statement: any object that, if brought into the real world, could not be tested for has no meaning.  This is under the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e., the story with the dragon would have gone the same way, independent of the dragon's truth value, if it had taken place in the real world.

Then the discussion continued into whether it could be rational to hold a belief that could not, even in theory, be tested. It became interesting when Anthony gave this argument: if believing in a dragon in your garage gave you happiness, and the world would be the same either way except for that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.

I responded that truth trumps happiness: believing in the dragon would force you to hold a false belief, which is not worth the amount of happiness received by believing it.  Further, I argued that it would in fact be a false belief, because p(world) > p(world)·p(impermeable invisible dragon), which is a simple Occam's razor argument.
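In cleaner notation, with W the mundane world-model and D the dragon hypothesis, the inequality is just the conjunction rule:

    P(W \wedge D) = P(W) \cdot P(D \mid W) \le P(W)

with equality only when P(D | W) = 1, so adding the dragon can never make the overall model more probable.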

My intended direction for this argument with Anthony from this point was to apply these points to theology but we ran out of time and we have not had time again to talk so that may be a future post.

 

Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable. I realized then that it might be rational to believe non-falsifiable things for two reasons (I'm sure there are more, but these are the main ones I can think of; please comment with your own):

1)   The belief has a beauty to it that flows with falsifiable beliefs and makes known facts fit more perfectly. (This is very dangerous and should not be used lightly, because it focuses too closely on opinion.)

2)   You believe that the belief will someday allow you to make an original theory which will be falsifiable.

Both of these reasons, if not used very carefully, will let false beliefs in.  As such, I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (perhaps “conjectures”).  This category should be below beliefs in power, but still held as part of how the world works, and anything in this category should always strive to leave it: I should always strive to make any non-falsifiable conjecture stop being a conjecture, either by turning it into a belief or by disproving it.

 

Note: This is my first post, so as well as discussing the post itself, critiques of the writing are deeply welcome by PM.

 

[Link] On the Height of a Field

11 badger 02 January 2013 11:20AM

Mark Eichenlaub posted a great little case-study about the difficulty of updating beliefs, even over trivial matters like the slope of a baseball field. The basic story of Bayes-updating assumes the likelihood of evidence in different states is obvious, but feedback between observations and judgments about likelihood quickly complicates the situation:

The story of how belief is supposed to work is that for each bit of evidence, you consider its likelihood under all the various hypotheses, then multiplying these likelihoods, you find your final result, and it tells you exactly how confident you should be. If I can estimate how likely it is for Google Maps and my GPS to corroborate each other given that they are wrong, and how likely it is given that they are right, and then answer the same question for every other bit of evidence available to me, I don’t need to estimate my final beliefs – I calculate them. But even in this simple testbed of the matter of a sloped baseball field, I could feel my biases coming to bear on what evidence I considered, and how strong and relevant that evidence seemed to me.  The more I believed the baseball field was sloped, the more relevant (higher likelihood ratio) it seemed that there was that short steep hill on the side, and the less relevant that my intuition claimed the field was flat. The field even began looking more sloped to me as time went on, and I sometimes thought I could feel the slope as I ran, even though I never had before.

That’s what I was interested in here. I wanted to know more about the way my feelings and beliefs interacted with the evidence and with my methods of collecting it. It is common knowledge that people are likely to find what they’re looking for whatever the facts, but what does it feel like when you’re in the middle of doing this, and can recognizing that feeling lead you to stop?
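The multiplication described in the excerpt is easiest to see in odds form.  A minimal sketch in Python; the likelihood ratios are invented for illustration:

    # Sequential Bayesian updating in odds form: multiply the prior odds
    # by each piece of evidence's likelihood ratio.
    prior_odds = 1.0                  # 1:1 on "the field is sloped"
    likelihood_ratios = [
        3.0,   # GPS and Google Maps corroborate each other
        2.0,   # the short steep hill on the side
        0.5,   # my intuition says the field is flat
    ]

    posterior_odds = prior_odds
    for lr in likelihood_ratios:
        posterior_odds *= lr

    print(f"P(sloped) = {posterior_odds / (1 + posterior_odds):.2f}")   # 0.75

The failure mode Eichenlaub describes is that the ratios themselves drift with the current belief, so the inputs stop being independent of the output.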

Edit: Title changed from "An Empirical Evaluation into Runner's High," the original title of the article, to match the author's new title.

Struck with a belief in Alien presence

-9 [deleted] 11 November 2012 01:20AM

Recently I've been struck with a belief in Aliens being present on this Earth. It happened after I watched this documentary (and subsequently several others). My feeling of belief is not particularly interesting in itself - I could be a lunatic or otherwise psychologically dysfunctional. What I'm interested in knowing is to what extent other people, who consider themselves rationalists, feel belief in the existence of aliens on this earth after watching this documentary. Is anyone willing to try and watch it and then report back?

Another question arising in this matter is how to treat evidence of extraordinary things. Should one require 'extraordinary evidence for extraordinary claims'? I somehow feel that this notion is misguided - it discriminates against evidence prior to observation, and that is not the right time to start discriminating. At most we should ascribe a very low (but nonzero) prior probability and then do some Bayesian updating to get a posterior. Hmm, if no one has ever seen a black swan, and some Bayesian-thinking person then sees a black swan a) in the distance or b) up front, what will his posterior probability of the existence of black swans be?
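For what it's worth, the black-swan question comes down to the likelihood ratio of the two observations.  A sketch with invented numbers:

    # Same small prior; different error rates for a distant glimpse
    # versus a close look.
    prior = 0.001   # small but nonzero prior that black swans exist

    def posterior(prior, p_obs_if_true, p_obs_if_false):
        """Bayes' rule for a single observation."""
        joint_true = p_obs_if_true * prior
        return joint_true / (joint_true + p_obs_if_false * (1 - prior))

    # a) In the distance: easy to mistake another dark bird for a black swan.
    print(posterior(prior, p_obs_if_true=0.9, p_obs_if_false=0.05))     # ~0.02
    # b) Up front: misidentification is far less likely.
    print(posterior(prior, p_obs_if_true=0.9, p_obs_if_false=0.0001))   # ~0.90

Note that the prior must be nonzero: a prior of exactly zero stays zero no matter what is observed.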

Evolutionary psychology as "the truth-killer"

10 Benedict 23 July 2012 08:44PM

So, a little background- I've just come out as an atheist to my dad, a Christian pastor, who's convinced he can "fix" my thinking and is bombarding me with a number of flimsy arguments that I'm having trouble articulating a response to, and need help shutting down. The particular issue at the moment deals with non-theistic explanations for human psychology and things like love, morality, and beauty. After attempting to communicate explanations from evolutionary psychology, I was met with amused dismissal of the subject as "speculation". 

There's one book in particular he's having me read- The Reason for God by Timothy Keller. In the book, he brings up evolutionary psychology as an alternative to theistic explanations, and immediately dismisses it as apparently self-defeating.

"Evolutionists say that if God makes sense to us, it is not because he is really there, it's only because that belief helped us survive and so we are hardwired for it. However, if we can't trust our belief-forming faculties to tell us the truth about God, why should we trust them to tell us the truth about anything, including evolutionary science? If our cognitive faculties only tell us what we need to survive, not what is true, why trust them about anything at all?" -Timothy Keller

The obvious answer is that knowing the truth about things is generally advantageous to survival- but it hardly addresses the underlying assertion- that without [incredibly specific collection of god-beliefs and assorted dogmas], human brains can't arrive at truth because they weren't designed for it. And of course, I'm talking to a guy with an especially exacting definition of "truth" (100% certainty about the territory)- I could use an LW post that succinctly discusses the role and definition of truth, there. 

Another thing Dad likes to do is back me into a corner WRT morality and moral relativism- "Oh, but can you really believe that the act of rape doesn't have an inherent [wrongness]? Are you saying it was justified for [insert historical monster] to do [atrocity] because it would make him reproductively successful?" Armed only with evolutionary explanations for their behavior, I couldn't really respond- possibly my fault, since I haven't read the Morality sequence on account of I got stuck in the Quantum Physics ultrasequence, and knowing that reality is composed of complex amplitudes flowing between explicit configurations or aaasasdjgasjdga whatever the frig even (I CAN'T) has proven to be staggeringly unhelpful in this situation.

In addition to particular arguments WRT the question posed, I could also use recommendations for good, well-argued and accessible books on the subject of evolutionary psychology, with a focus on practical experimental results and application- the guy can't be given a book and not read it, so I'm hoping to at least get him to not dismiss the science as "speculation" or a joke. It's likely he's aware that the field of evolutionary psychology is really prone to hindsight bias and thus ignores it completely, so along with the book, a good article or study demonstrating the accuracy and predictive power of the evolutionary psychological model would be appreciated.

Thanks!

We prosecute CEOs for failing to do due diligence. But with people, we call it 'faith'

11 avichapman 05 July 2012 08:51AM

I wrote the following on my blog last night. I thought that I'd run it past an intelligent audience. Note that what I have referred to as an idea is what we here at lesswrong would call a 'belief'. I changed the name to remove any strange foggy baggage that might appear in the heads of potential readers who are not familiar with belief vs belief-in-belief and other concepts like that.

What are your thoughts?

I recently got into a discussion on Facebook that started with an assertion that free-thought/atheism/humanism/etc was no different than the certainties of fundamentalism. But that discussion moved into many topics, one of which is why it should not be controversial to assert that one idea can be more 'right' than another.


I asserted that the view that the universe was created 13.72 billion years ago was more 'right' than the view that the universe was sneezed out of a giant space cow. My interlocutor felt that the giant space cow could be 'right' for one person, even if it is not 'right' for others.

It is at this point that we ran into a problem, as it became apparent that her view of the meaning of 'right' and my own were different. As best I can tell, she felt that 'right' meant that it feels right or brings comfort. I, of course, use the word 'right' to mean that an idea contains explanatory and predictive power. The idea that the universe started in a big bang explains a lot about what we see in the cosmos and predicts what we will see as we keep looking - with a high degree of accuracy. That makes it 'right'. And it's more 'right' now than it was two decades ago because we have found places where it is 'wrong' (the increase in the rate of expansion of the universe) and revised the idea to explain it (dark energy), making it more 'right'.

So we had two versions of 'right'. It wasn't a given that she would accept my version, so I had to come up with a good reason why my version of 'right' was, well, 'right'.

What I came up with was to point out an ethical imperative to be as 'right' as possible - using my definition. Consider this: suppose there are two ideas, one of which is 'right' enough to predict certain unintended consequences of an action that the other idea fails to predict. If you consciously choose the less 'right' of the two ideas (perhaps because it is 'more right for you'), you have consciously chosen to risk harming others in ways that could have been prevented by choosing the more 'right' of the two options.

Perhaps it will be clearer with an example:  Sally is worried about vaccinating herself before travelling to another country. She knows that the doctor says that it is necessary and safer than not being vaccinated. But she's also heard some bad stories about the side-effects of vaccinations. She decides that not vaccinating is 'right for her'. After all, if she's wrong, what's the harm? She might get sick, but that's a fate she brought on herself. What she fails to realise is that the more 'right' idea (that vaccines are safer than not having them) also predicts that if she fails to vaccinate, she can bring those diseases back to Australia and infect others.

But this isn't just a problem on the left wing: Consider the case of Josephine, who concedes that there is little evidence for the existence of an afterlife. But she chooses to believe anyway because it is 'right for her'. Why not? If she's wrong, she'll never know it because she'll have ceased to have existed. But here comes those unintended consequences again. This time, they come in the form of predictions that the less 'right' of the two ideas makes - that death is not the end of all existence, but a transition to a greater existence. As it happens, Josephine is an Australian senator and is about to vote on an authorisation for the ADF to bomb some village  in Afghanistan. She briefly worries about the fate of any innocent bystanders but is comforted by the fact if the ADF's aim is off, any innocents will go to heaven. But she's so busy that she fails to remember that her assumption about heaven was an arbitrary one for her own comfort and shouldn't be used outside the confines of her own skull.

Now consider this case: The CEO of a company sees credible evidence that the government is about to change, leading to a major change in policies directly affecting the company's business environment. She should probably hedge her bets to prepare for the likely change. But what if she really liked the current government? What if the prospective change to the opposition caused her distress? Might she choose to believe that the government will almost certainly win the next election because that idea feels 'right for her'? I would suggest that the stock holders would feel that her due diligence required her to hedge the company's bets, whatever her feelings.

But change this CEO to a mother making a choice on matters of vaccines or faith healing, and now she hasn't made any kind of ethical lapse - she has just exercised her faith. We owe it to ourselves and to those who are affected by our actions (which is everyone, really) to try to be as 'right' as possible as often as possible. Never choose an idea because it is 'right for you'. And always be on the lookout for ideas that are even more 'right' than the ones you already cling to.

Mailing List for Digitized Belief Network Discussion

3 avichapman 06 June 2012 11:27PM

Hi all,

This is a follow-up to a previous post of mine - 'A digitized belief network?'.

I have now created a discussion group for anyone who wants to discuss the problems involved in creating a digital representation of a human's beliefs. Anyone who is interested in joining us can sign up here.

See you all around the list,
Avi

Help please!

13 Michelle_Z 06 June 2012 03:51PM

Yesterday my mom noticed (at a funeral) that I wasn't praying or participating in the mass. She confronted me about it, and I told her that no, I am not Catholic. Apparently it's sinking in and she's a bit hysterical... crying and screaming that she doesn't know me anymore.

What do I do? I don't know how to react/behave when she's doing this. It's like she wants me to feel like I'm doing something wrong, but it isn't working, so she's getting hysterical.

 

*edit*

I gave her a hug when she calmed down and told her I love her. That seemed to help, a little. Based on her previous behavior in situations where I've done something "wrong," she will (in the future) make barbs and slight passes at my beliefs. (Already she made one: insisting my love of science is causing my social anxiety disorder.) The advice given in the comments is really helpful. I plan on making the most of it.

A digitized belief network?

6 avichapman 25 May 2012 01:27AM

Hello to all,

Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer. I design software solutions automatically. It's the first place my mind goes when thinking about a problem.

Today's problem is the fact that our beliefs all rest on beliefs that rest on beliefs. Each one has a <100% probability of being correct. Thus, each belief built on it has an even smaller chance of being correct.

When we discover a belief is false (or less dramatically, revise its probability of being true), it propagates to all other beliefs that are wholly or partially based on it. This is an imperfect process and can take a long time (less in rationalists, but still limited by our speed of thought and inefficiency in recall).

I think that software can help with this. If a dedicated rationalist spent a large amount of time committing each belief of theirs to a database (including a rational assessment of its probability overall and given that all other beliefs that it rests on are true) as well as which other beliefs their beliefs rest on, you would eventually have a picture of your belief network. The software could then alert you to contradictions between your estimate of a belief's probability of being true and its estimate based on the truth estimate of the beliefs that it rests on. It could also find cyclical beliefs and other inconsistencies. Plus, when you update a belief based on new evidence, it can spit out a list of beliefs that should be reconsidered.

Obviously, this would only work if you are brutally honest about what you believe and fairly accurate about your assessments of truth probabilities. But I think this would be an awesome tool.
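A minimal sketch of the data structure I have in mind (the class names, numbers, and the crude independence assumption are all placeholders):

    # Each belief stores a directly assessed probability plus the beliefs
    # it rests on; the tool flags beliefs held more strongly than their
    # foundations can support, per the premise above.
    class Belief:
        def __init__(self, name, assessed_prob, parents=()):
            self.name = name
            self.assessed_prob = assessed_prob   # my direct, honest estimate
            self.parents = list(parents)         # beliefs this one rests on

        def support_ceiling(self):
            # Crude bound: product of the (assumed independent) foundations.
            ceiling = 1.0
            for parent in self.parents:
                ceiling *= parent.assessed_prob
            return ceiling

    def contradictions(beliefs, tolerance=0.05):
        return [b for b in beliefs
                if b.assessed_prob > b.support_ceiling() + tolerance]

    a = Belief("source X is reliable", 0.90)
    b = Belief("claim Y, believed only on X's word", 0.99, parents=[a])
    for bad in contradictions([a, b]):
        print(f"Reconsider '{bad.name}': assessed {bad.assessed_prob}, "
              f"supported at {bad.support_ceiling():.2f}")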

Does anyone know of an effort to build such a tool? If not, would anyone be interested in helping me design and build such a tool? I've only been reading LessWrong for a little while now, so there's probably a bunch of stuff that I haven't considered in the design of such a tool.

Yours rationally,
Avi

Learning the basics of probability & beliefs

3 tomme 31 March 2012 09:18AM

Let's say that I believe that the sky is green.

1) How can I know whether this belief is true?

2) How can I assign a probability to it to test its degree of truthfulness?

3) How can I update this belief?

Thank you.

What happens when your beliefs fully propagate

20 Alexei 14 February 2012 07:53AM

This is a very personal account of thoughts and events that have led me to a very interesting point in my life. Please read it as such. I present a lot of points, arguments, conclusions, etc..., but that's not what this is about.

I've started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). The night of February 10, 2012 all the rationality learning and practice finally caught up with me. Like water that had been building up behind a dam, it finally broke through and flooded my poor brain.

"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those that are smart enough to see it. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) it too ridiculous to want to repeat, but it all ended up with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without as much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.

Through the turmoil of jumbled and confused thoughts came a shock of my most valuable belief propagating through my mind, breaking down final barriers, reaching its logical conclusion. FAI is the most important thing we should be doing right now! I already knew that. In fact, I knew that for a long time now, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagate to its natural conclusion: I should be doing something to help this cause.

I can't say: "It's the most important thing, but..." Yet, I've said it so many times inside my head. It's like hearing other people say: "Yes, X is the rational thing to do, but..." What follows is a defense that allows them to keep the path to their goal that they are comfortable with, that they are already invested in.

Interestingly enough, I've already thought about this. Right after rationality minicamp, I've asked myself the question: Should I switch to working on FAI, or should I continue to make games? I've thought about it heavily for some time, but I felt like I lacked the necessary math skills to be of much use on FAI front. Making games was the convenient answer. It's something I've been doing for a long time, it's something I am good at. I decided to make games that explain various ideas that LW presents in text. This way I could help raise the sanity waterline. Seemed like a very nice, neat solution that allowed me to do what I wanted and feel a bit helpful to the FAI cause.

Looking back, I was dishonest with myself. In my mind, I had already written the answer I wanted. I convinced myself that I hadn't, but part of me certainly sabotaged the whole process. But that's okay, because I was still somewhat helpful, even if not in the optimal way. Right? Right?? The correct answer is "no". So, now I have to ask myself again: What is the best path for me? And to answer that, I have to understand what my goal is.

Rationality doesn't just help you to get what you want better/faster. Increased rationality starts to change what you want. Maybe you wanted the air to be clean, so you bought a hybrid. Sweet. But then you realized that what you actually want is for people to be healthy. So you became a nurse. That's nice. Then you realized that if you did research, you could be making an order of magnitude more people healthier. So you went into research. Cool. Then you realized that you could pay for multiple researchers if you had enough money. So you went out, became a billionaire, and created your own research institute. Great. There was always you, and there was your goal, but everything in between was (and should be) up for grabs.

And if you follow that kind of chain long enough, at some point you realize that FAI is actually the thing right before your goal. Why wouldn't it be? It solves everything in the best possible way!

People joke that LW is a cult. Everyone kind of laughs it off. It's funny because cultists are weird and crazy, but they are so sure they are right. LWers are kind of like that. Unlike other cults, though, we are really, truly right. Right? But, honestly, I like the term, and I think it has a ring of truth to it. Cultists have a goal that's beyond them. We do too. My life isn't about my preferences (I can change those), it's about my goals. I can change those too, of course, but if I'm rational (and nice) about it, I feel that it's hard not to end up wanting to help other people.

Okay, so I need a goal. Let's start from the beginning:

What is truth?

Reality is truth. It's what happens. It's the rules that dictate what happens. It's the invisible territory. It's the thing that makes you feel surprised.

(Okay, great, I won't have to go back to reading Greek philosophy.)

How do we discover truth?

So far, the best method has been the scientific method. It has also proved itself over and over again by providing actual tangible results.

(Fantastic, I won't have to reinvent the thousands of years of progress.)

Soon enough humans will commit a fatal mistake.

This isn't a question, it's an observation. Technology is advancing on all fronts to the point where it can be used on a planetary (and wider) scale. Humans make mistakes. Making a mistake with something that affects the whole world could result in an injury or death... for the planet (and potentially beyond).

That's bad.

To be honest, I don't have a strong visceral negative feeling associated with all humans becoming extinct. It doesn't feel that bad, but then again I know better than to trust my feelings on such a scale. However, if I had to simply push a button to make one person's life significantly better, I would do it. And I would keep pushing that button for each new person. For something like 222 years, by my rough calculations (seven billion people at one press per second is roughly seven billion seconds). Okay, then. Humanity injuring or killing itself would be bad, and I can probably spend a century or so trying to prevent that, while also doing something that's a lot more fun than mashing a button.

We need a smart safety net.

Not only smart enough to know that triggering an atomic bomb inside a city is bad, or that you get the grandma out of a burning building by teleporting her in one piece to a safe spot, but also smart enough to know that if I keep snoozing every day for an hour or two, I'd rather someone stepped in and stopped me, no matter how much I want to sleep JUST FIVE MORE MINUTES. It's something I might actively fight, but it's something that I'll be grateful for later.

FAI

There it is: the ultimate safety net. Let's get to it?

Having FAI will be very very good, that's clear enough. Getting FAI wrong will be very very bad. But there are different levels of bad, and, frankly, a universe tiled with paper-clips is actually not that high on the list. Having an AI that treats humans as special objects is very dangerous. An AI that doesn't care about humans will not do anything to humans specifically. It might borrow a molecule, or an arm or two from our bodies, but that's okay. An AI that treats humans as special, yet is not Friendly could be very bad. Imagine 3^^^3 different people being created and forced to live really horrible lives. It's hell on a whole another level. So, if FAI goes wrong, pure destruction of all humans is a pretty good scenario.

Should we even be working on FAI? What are the chances we'll get it right? (I remember Anna Salamon's comparison: "getting FAI right" is like "trying to make the first atomic bomb explode in a shape of an elephant" would have been a century ago.) What are the chances we'll get it horribly wrong and end up in hell? By working on FAI, how are we changing the probability distribution for various outcomes? Perhaps a better alternative is to seek a decisive advantage like brain uploading, where a few key people can take a century or so to think the problem through?

I keep thinking about FAI going horribly wrong, and I want to scream at the people who are involved with it: "Do you even know what you are doing?!" Everything is at stake! And suddenly I care. Really care. There is curiosity, yes, but it's so much more than that. At LW minicamp we compared curiosity to a cat chasing a mouse. It's a kind of fun, playful feeling. I think we got it wrong. The real curiosity feels like hunger. The cat isn't chasing the mouse to play with it; it's chasing it to eat it because it needs to survive. Me? I need to know the right answer.

I finally understand why SIAI isn't focusing very hard on the actual AI part right now, but is instead pouring most of their efforts into recruiting talent. The next 50-100 years is going to be a marathon for our lives. Many participants might not make it to the finish line. It's important that we establish a community that can continue to carry the research forward until we succeed.

I finally understand why, when I was talking with Carl Shulman about making games that help people be more rational, his value metric was how many academics it could impact/recruit. That didn't make sense to me. I just wanted to raise the sanity waterline for people in general. I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.

I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.

I've realized a lot of things lately. A lot of things have been shaken up. It has been a very stressful couple of days. I'll have to re-answer the question I asked myself not too long ago: What should I be doing? And this time, instead of hoping for an answer, I'm afraid of the answer. I'm truly and honestly afraid. Thankfully, I can fight pushing a lot better than pulling: fear is easier to fight than passion. I can plunge into the unknown, but it breaks my heart to put aside a very interesting and dear life path.

I've never felt more afraid, more ready to fall into a deep depression, more ready to scream and run away, retreat, abandon logic, go back to the safe comfortable beliefs and goals. I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it. Armed with my rationality toolkit, I could probably do wonders in that field.

Yet, I've also never felt more ready to make a step of this magnitude. Maximizing utility, all the fallacies, biases, defense mechanisms, etc, etc, etc. One by one they come to mind and help me move forward. Patterns of thoughts and reasoning that I can't even remember the name of. All these tools and skills are right here with me, and using them I feel like I can do anything. I feel that I can dodge bullets. But I also know full well that I am at the starting line of a long and difficult marathon. A marathon that has no path and no guides, but that has to be run nonetheless.

May the human race win.

two puzzles on rationality of defeat

4 fsopho 12 December 2011 02:17PM

I present here two puzzles about rationality that you LessWrongers may think are worth dealing with. The first one may look more amenable to a simple solution, while the second has drawn the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look so simple to solve. So, on to the puzzles!

 

Puzzle 1 

At t1 I justifiably believe theorem T is true, on the basis of a complex argument I have just validly reasoned through from the likewise justified premises P1, P2 and P3.
So, at t1 I reason from the premises:
 
(R1) P1, P2 ,P3
 
To the known conclusion:
 
(T) T is true
 
At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are just a part, tells me I’m wrong. She tells me the theorem is just false, and convinces me of that on the basis of valid reasoning with at least one false premise, the falsity of that premise being unknown to us.
So, at t2 I reason from the premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of valid reasoning from F, P1, P2 and P3,
 
(R2) F, P1, P2 and P3
 
To the justified conclusion:
 
(~T) T is not true
 
Some epistemologists would say that (~T) defeats my previous belief (T). Is it rational for me to proceed this way? Am I taking the correct direction of defeat? Wouldn’t it also be rational for (~T) to be defeated by (T)? Why does (~T) defeat (T), and not vice versa? Is it just because (~T)’s justification was obtained at a later time?
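One quantitative way to frame the direction-of-defeat question is Bayesian; the numbers below are invented, and this frames the question rather than resolving the puzzle:

    # My credence in T after my own derivation, then after Ms. Math's
    # testimony, via Bayes' rule.  All numbers are hypothetical.
    p_T = 0.95                    # credence in T from my own reasoning at t1
    p_say_false_if_T = 0.02       # chance a reliable Ms. Math calls T false anyway
    p_say_false_if_not_T = 0.90   # chance she calls T false when it is false

    posterior = (p_say_false_if_T * p_T) / (
        p_say_false_if_T * p_T + p_say_false_if_not_T * (1 - p_T))
    print(f"P(T | testimony) = {posterior:.2f}")   # ~0.30

On this rendering nothing “defeats” anything outright, and timing plays no role: the size of the shift depends only on how error-prone my derivation and her testimony are.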


Puzzle 2

At t1 I know theorem T is true, on the basis of a complex argument I have just validly reasoned through, with known premises P1, P2 and P3. So, at t1 I reason from the known premises:
 
(R1) P1, P2 ,P3
 
To the known conclusion:
 
(T) T is true
 
Besides, I also reason from known premises:
 
(ME) If there is any evidence against something that is true, then it is misleading evidence (evidence for something that is false)
 
(T) T is true
 
To the conclusion (anti-misleading evidence):
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
At t2 the same Ms. Math tells me the same thing. So at t2 I reason from the premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of valid reasoning from F, P1, P2 and P3,
 
But then I reason from:
 
(F*) F, RM and TM are evidence against (T), and
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
To the conclusion:
 
(MF) F, RM and TM are misleading evidence
 
And then I continue to know T and I lose no knowledge, because I know/justifiably believe that the counter-evidence I just met is misleading. Is it rational for me to act this way?
I know (T) and I know (AME) at t1 on the basis of valid reasoning. Then I am exposed to the misleading evidence (Reliable Math), (Testimony of Math) and (F). The evidentialist scheme (and maybe still other schemes) supports the thesis that (RM), (TM) and (F) DEFEAT my justification for (T) instead, so that whatever I inferred from (T) is no longer known. However, given my previous knowledge of (T) and (AME), I could know that (MF): F is misleading evidence. Can it still be said that (RM), (TM) and (F) DEFEAT my justification for (T), given that (MF) DEFEATS my justification for (RM), (TM) and (F)?

Cognitive Style Tends To Predict Religious Conviction (psychcentral.com)

10 Incorrect 23 September 2011 06:28PM

http://psychcentral.com/news/2011/09/21/cognitive-style-tends-to-predict-religious-conviction/29646.html

Participants who gave intuitive answers to all three problems [that required reflective thinking rather than intuitive] were one and a half times as likely to report they were convinced of God’s existence as those who answered all of the questions correctly.

Importantly, researchers discovered the association between thinking styles and religious beliefs were not tied to the participants’ thinking ability or IQ.

participants who wrote about a successful intuitive experience were more likely to report they were convinced of God’s existence than those who wrote about a successful reflective experience.

I think this is the source but I can't be sure:

http://www.apa.org/pubs/journals/releases/xge-ofp-shenhav.pdf

http://lesswrong.com/lw/7o4/atheism_autism_spectrum/4vbc

Reddit /r/psychology discussion

'Paley's iPod: The cognitive basis of the design argument within natural theology'

1 lukeprog 11 September 2011 04:35AM

De Cruz & de Smedt (2010) tries to explain, using cognitive science, why many people find design arguments so compelling. Abstract:

The argument from design stands as one of the most intuitively compelling arguments for the existence of a divine Creator. Yet, for many scientists and philosophers, Hume’s critique and Darwin’s theory of natural selection have definitely undermined the idea that we can draw any analogy from design in artifacts to design in nature. Here, we examine empirical studies from developmental and experimental psychology to investigate the cognitive basis of the design argument. From this it becomes clear that humans spontaneously discern purpose in nature. When constructed theologically and philosophically correctly, the design argument is not presented as conclusive evidence for God’s existence but rather as an abductive, probabilistic argument. We examine the cognitive basis of probabilistic judgments in relationship to natural theology. Placing emphasis on how people assess improbable events, we clarify the intuitive appeal of Paley’s watch analogy. We conclude that the reason why some scientists find the design argument compelling and others do not lies not in any intrinsic differences in assessing design in nature but rather in the prior probability they place on complexity being produced by chance events or by a Creator. This difference provides atheists and theists with a rational basis for disagreement.

[LINK, TED video] Kathryn Schulz on Being Wrong

2 bogus 04 May 2011 03:52PM

http://www.ted.com/talks/kathryn_schulz_on_being_wrong.html

Kathryn Schulz is a self-identified "Wrongologist" (in fact, @wrongologist is her user name on Twitter).  She has written a popular book ("Being Wrong: Adventures in the Margin of Error", web site) and also writes the Slate column 'The Wrong Stuff'.  Her TED talk covers the problem of disagreement, the nature of belief, overconfidence bias and how to actually change your mind.  She maintains that most folks actively avoid the unpleasant feeling of "being wrong", which is an important point I have not seen before (but see The Importance of Saying 'Oops' and Crisis of Faith).  Unfortunately, she does not discuss reasoning about uncertainty, so her arguments against 'the feeling of right' end up seeming rather shallow.

Discuss her TED talk here. (Her broader work is also obviously on topic.)

What does your web of beliefs look like, as of today?

15 lukeprog 20 February 2011 07:47PM

Every few months, I post a summary of my beliefs to my blog. This has several advantages:

  1. It helps to clarify where I'm "coming from" in general.
  2. It clears up reader confusion arising from the fact that my beliefs change.
  3. It's really fun to look back on past posts and assess how my beliefs have changed, and why.
  4. It makes my positions easier to criticize, because they are clearly stated and organized into one place.
  5. It's an opportunity for people to very quickly "get to know me."

To those who are willing: I invite you to post your own web of beliefs. I offer my own, below, as an example (previously posted here). Because my world is philosophy, I frame my web of beliefs in those terms, but others need not do the same:

 

 

My Web of Beliefs (Feb. 2011)

Philosophy

Philosophy is not a matter of opinion. As in science, some positions are much better supported by reasons than others are. I do philosophy as a form of inquiry, continuous with science.

But I don’t have patience for the pace of mainstream philosophy. Philosophical questions need answers, and quickly.

Scientists know how to move on when a problem is solved, but philosophers generally don’t. Scientists don’t still debate the fact of evolution or the germ theory of disease just because alternatives are (1) logically possible, (2) appeal to many people’s intuitions, (3) are “supported” by convoluted metaphysical arguments, or (4) fit our use of language better. But philosophers still argue about Cartesian dualism and theism and contra-causal free will as if these weren’t settled questions.

How many times must the universe beat us over the head with evidence before we will listen? Relinquish your dogmas; be as light as a feather in the winds of evidence.

Epistemology

My epistemology is one part cognitive science, one part probability theory.

We encounter reality and form beliefs about it by way of our brains. So the study of how our brains do that is central to epistemology. (Quine would be pleased.) In apparent ignorance of cognitive science and experimental psychology, most philosophers make heavy use of intuition. Many others have failed to heed the lessons of history about how badly traditional philosophical methods fare compared to scientific methods. I have little patience for this kind of philosophy, and see myself as practicing a kind of ruthlessly reductionistic naturalistic philosophy.

I do not care whether certain beliefs qualify as “knowledge” or as being “rational” according to varying definitions of those terms. Instead, I try to think quantitatively about beliefs. How strongly should I believe P? How should I adjust my probability for P in the face of new evidence X? There is a single, exactly correct answer to each such question, and it is provided by Bayes’ Theorem. We may never know the correct answer, but we can plug estimated numbers into the equation and update our beliefs accordingly. This may seem too subjective, but remember that you are always giving subjective, uncertain probabilities. Whenever you use words like “likely” and “probable”, you are doing math. So stop pretending you aren’t doing math, and do the math correctly, according to the proven theorem of how probable P given X is – even if we are always burdened by uncertainty.
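For reference, the theorem in question, for a proposition P and evidence X (written with Pr for the probability function, to avoid clashing with the proposition's name):

    \Pr(P \mid X) = \frac{\Pr(X \mid P)\,\Pr(P)}{\Pr(X \mid P)\,\Pr(P) + \Pr(X \mid \neg P)\,\Pr(\neg P)}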

Language

Though I was recently sympathetic to the Austin / Searle / Grice / Avramides family of approaches to language, I now see that no simple theory of meaning can capture every use (and hypothetical use) of human languages. Besides, categorizing every way in which humans use speech and writing to have an effect on themselves and others is a job for scientists, not armchair philosophers.

However, it is useful to develop an account of language that captures most of our discourse systematically – specifically for use in formal argument and artificial intelligence. To this end, I think something like the Devitt / Sterelny account may be the most useful.

A huge percentage of Anglophone philosophy is still done in service of conceptual analysis, which I see as a mostly misguided attempt to build a Super Dictionary full of definitions for common terms that are (1) self-consistent, (2) fit the facts if they are meant to, and (3) agree with our use of and intuitions about each term. But I don’t think we should protect our naive use of words too much – rather, we should use our words to carve reality at its joints, because that allows us to communicate more effectively. And effective communication is the point of language, no? If your argument doesn’t help us solve problems when you play Taboo with your key terms and replace them with their substantive meaning, then what is the point of the argument if not to build a Super Dictionary?

A Super Dictionary would be nice, but humanity has more urgent and important problems that require a great many philosophical problems to be solved. Conceptual analysis is something of a lost purpose.

Normativity

The only source of normativity I know how to justify is the hypothetical imperative: “If you desire that P, then you ought to do Y in order to realize P.” This reduces (roughly) to the prediction: “If you do Y, you are likely to objectively satisfy your desire that P.”

For me, then, the normativity of epistemology is: “If you want to have more true beliefs and fewer false beliefs, engage in belief-forming practices X, Y, and Z.”

The normativity of logic is: “If you want to be speaking the same language as everyone else, don’t say things like ‘The ball is all green and all blue at the same time in the same way.’”

Ethics, if there is anything worth calling by that name (not that it matters much; see the language section), must also be a system of hypothetical imperatives of some kind. Alonzo Fyfe and I are explaining our version of this here.

Focus

Recently, the focus of my research efforts has turned to the normative (not technical) problems of how to design the motivational system of a self-improving superintelligent machine. My work on this will eventually be gathered here. A bibliography on the subject is here.