You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

The call of the void

-6 Elo 28 August 2016 01:17PM

Original post:  http://bearlamp.com.au/the-call-of-the-void

L'appel du vide - The call of the void.

When you are standing on the balcony of a tall building, looking down at the ground and on some track your brain says "what would it feel like to jump".  When you are holding a kitchen knife thinking, "I wonder if this is sharp enough to cut myself with".  When you are waiting for a train and your brain asks, "what would it be like to step in front of that train?".  Maybe it's happened with rope around your neck, or power tools, or what if I take all the pills in the bottle.  Or touch these wires together, or crash the plane, crash the car, just veer off.  Lean over the cliff...  Try to anger the snake, stick my fingers in the moving fan...  Or the acid.  Or the fire.

There's a strange phenomenon where our brains seem to ask, "I wonder what the consequences of this dangerous thing would be".  And we don't know why it happens.  There has been only one paper (sorry, it's behind a paywall) on the concept, and all it really did was identify it.  I quite like the paper for quoting both Captain Jack Sparrow ("You know that feeling you get when you're standing in a high place… sudden urge to jump?… I don't have it"; Pirates of the Caribbean: On Stranger Tides, 2011) and Freud ("a drive to return to an inanimate state of existence"; Freud, 1922).

Taking a look at their method: they surveyed 431 undergraduates about their experiences of what they coined HPP (High Place Phenomenon).  They found that 30% of their participants had experienced HPP, and tried to measure whether it was related to anxiety or suicidality.  They also proposed a theory.

...we propose that at its core, the experience of the high place phenomenon stems from the misinterpretation of a safety or survival signal. (e.g., “back up, you might fall”)

I want to believe it, but today there are literally no other papers on the topic, and no evidence either way.  So all I can say is: we don't really know.  S'weird.  Dunno.


This week I met someone who uncomfortably described their experience of toying with l'appel du vide.  I explained to them that this is a common and confusing phenomenon, and they said, with relief, "it's not like I want to jump!".  Around 5 years ago (before I knew its name) an old friend recounted, with discomfort, the experience of wondering what it would be like to step in front of a moving bus, any time she was near one.  I have coaxed a friend out of the middle of a road (they weren't drunk and weren't on drugs at the time).  And dragged friends out of the ocean.  I have it with knives, in a way that borders on OCD behaviour.  The desire to look at and examine the sharp edges.

What I do know is this.  It's normal.  Very normal.  Even if it's not 30% of the population, it could easily be 10 or 20%.  Everyone has a right to know that it happens, that it's normal, and that you're not broken if you experience it.  It's just as common a shared human experience as dreams of your teeth falling out, of flying, of running away from groups of people, or of being underwater.  Or the experience of rehearsing what you want to say before making a phone call.  Or walking into a room for a reason and forgetting what it was.

Next time you are struck by l'appel du vide, don't get uncomfortable.  Accept that it's a neat thing that brains do, and it's harmless.  Experience it.  And together with me, wonder why.  Wonder what evolutionary benefit has given so many of us l'appel du vide.

And be careful.


Meta: this took one hour to write.

Addendum to applicable advice

-8 Elo 16 August 2016 12:59AM

Original post: http://bearlamp.com.au/addendum-to-applicable-advice/
(part 1: http://bearlamp.com.au/applicable-advice/)


If you see advice in the wild and think something along the lines of "that can't work for me", that's a cached thought.  It could be a true cached thought or it could be a false one.  Some of these thoughts should be examined thoroughly and defeated.

If you can be any kind of person, being the kind of person that advice works for is an amazing skill to have.  This is hard.  You need to examine the advice and work out why it happened to work, and then you need to modify yourself to make that advice applicable to you.

All too often in this life we think of ourselves as immutable, and our problems as fixed, with the only hope of solving them being to find a solution that fits the problem.  I propose it's the other way around.  All too often the solutions are immutable, we are malleable, and the problems can be solved by applying known advice and known knowledge in ways that we have to think of and decide on ourselves.


Is it really the same problem if the problem isn't actually the problem any more, but rather the problem is finding a new method of applying a known solution to a known problem?

What does this mean?  Dieting is an easy example.

This week we have been talking about Calories in/Calories out.  It's pretty obvious that CI/CO is true on a black-box system level.  If food goes in (calories in) and work goes out (calories out: BMR, incidental exercise, purposeful exercise), that is what determines your weight.  Ignoring the fact that drinking a litre of water is a faster way to gain weight than any other I know of.  And we know that weight is not literally health, but a proxy for what we consider healthy, because it's the easiest way to track how much fat we store on our bodies (for a normal human without massive bulk muscle mass).

"CICO makes for terrible advice."  On one level, yes.  To modify the weight of our black box, we need to change what goes in and what goes out so that it's no longer in the same feedback loop as before (the one that caused the box to be fat).  On another level, CICO is exactly all the advice you need to change the weight of a black box (or a spherical cow in a vacuum).
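To make the spherical-cow version concrete, the black-box view fits in a few lines.  This is a sketch, not a physiological model; the 3500 kcal per pound figure is the usual rough rule of thumb, assumed here purely for illustration:

```python
def weight_change_lb(calories_in_per_day, calories_out_per_day, days):
    """Naive black-box CICO model.

    Assumes the common rule of thumb that ~3500 kcal of surplus or
    deficit corresponds to one pound of body fat.  Real bodies adapt
    (BMR shifts, water weight, etc.) - this is the spherical cow.
    """
    surplus = (calories_in_per_day - calories_out_per_day) * days
    return surplus / 3500.0

# A steady 500 kcal/day deficit for a week: about one pound lost.
print(weight_change_lb(2000, 2500, 7))  # -1.0
```

The point of the sketch is exactly its inadequacy: everything the model ignores (the feedback loops, grandma, steak) is where the real problem lives.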

On the level of human systems: people are not spherical cows in a vacuum.  Where did spherical cows in a vacuum come from?  It's a parody of what we do in physics.  We simplify a system down to its most basic parts and generate rules that make sense.  Then we build back up to a complicated model and try to work out how to apply those rules.  It's why we can work out where projectiles are going to land: we have projectile-motion physics.  (Even though air resistance and wind direction often end up changing where our projectile lands, we still have a good guess.  And we later build estimation systems that use those details for prediction too.)

So CICO is a black-box system, a spherical cow system.  It's wrong.  It's so wrong when you try to apply it to the real world.  But that doesn't matter!  It's significantly better than nothing.  Or the blueberry diet.


The applicable advice of CICO

The point of applicable advice is to look at spherical cows and not say, "I'm no spherical cow!".  Instead, think of ways in which you are a spherical cow.  Ways in which the advice is applicable.  Places where, actually, if I do eat less, that will improve the progress of my weight loss, in cases where my problem is that I eat too much (which I guarantee is relevant for lots of people).  CICO might not be your silver bullet, for whatever reason.  It might be grandma, it might be chocolate bars, it might be really, really, really delicious steak.  Or dinner with friends.  Or "looking like you are able to eat forever in front of other people".  But if you take your problem, add in a bit of CICO, and ask, "how can I make this advice applicable to me?", today you might make progress on your problem.


And now for some fun from Grognor:  Have you tried solving the problem?


Meta: this took 30mins to write.  All my thoughts were still clear after recently writing part 1, and didn't need any longer to process.

Part 1: http://bearlamp.com.au/applicable-advice/
(part 1 on lesswrong: http://lesswrong.com/r/discussion/lw/nu3/applicable_advice/)

Mental models - giving people personhood and taking it away

-10 Elo 11 August 2016 08:32AM

Original post: http://bearlamp.com.au/giving-people-personhood-and-taking-it-away

This post is about the Kegan levels of self development.  If you don't know what that is, this post might still be interesting to you but you might be missing some key structure to understand where it fits among that schema.  More information can be found here (https://meaningness.wordpress.com/2015/10/12/developing-ethical-social-and-cognitive-competence/)

I am not ready to definitely accept the Kegan levels as a useful model, because it often makes retrospective predictions rather than predictions of the future.  A model is only as useful as what it can predict, so if it can't be used on the fly when you want to explain the universe, you might as well throw it out.  Having said that, this idea is interesting.


When I was little, people fell into different categories.  There were my parents - the olderClass humans (I'm going to refer to them as Senior-humans); my siblings - who, as I grew up, turned into my age-group humans; and, through school, my peergroup humans.

People like doctors fell into SeniorClass; dentists, vets, plumbers, PIC (People In Charge) - all fell into the SeniorClass of humans.  A big one was teachers - they were all PIC.  A common trope among children is that the teachers sleep at school.  Or, to use a gaming term, we feel as though they are the NPCs of that part of our journey in life.

As far as I can tell (from trying to pinpoint this today), the people I meet on my own terms become peergroup humans.  Effectively friends.  People I meet not on my terms, as well as strangers, first join some kind of SeniorClass of humans; if I get to know them well enough, they transition to my peergroup.  Of course this is a bit strange, because on the one hand I imagine I want to be friends with the PIC, or the Senior-Class humans, because of the opportunity to get ahead in life.  The good ol' "I know a guy who knows a guy".  Which is really not what a peergroup constitutes.

Peergroup humans are not "a guy with skills", much as we might hope for that; they are (hopefully) all at or near our own skill level.  (On Kegan's stage 3:) people whose opinions and ideas we care about because they are similar to us.


Recently I have noticed events that have taken some of my long-term SeniorClass and shifted them into my peergroup.  Effectively "demoting" them from "Professional" to "human".  As in, "this person has their shit together" versus "this person doesn't have their shit together".  I guess there were always people who seemed to have their shit together.  Now that I am an adult, it's clear that fewer and fewer people are competent, and more and more people are winging it through their lives.  It's mildly uncomfortable to think of people as being less "together" than I thought they were.

The other place where it's been an uncomfortable transition is in my memory.  I will, from time to time, think back to a time when I deferred judgement, decision-making capacity, or high-level trust to someone else having my best interests at heart - where now, looking back, they were just as lost and confused as I was in some of those situations, but they had a little kid to take care of/be in charge of/be in seniority to.

What I wonder about this process of demoting people is: what if, instead of demoting my adults as they prove their humanity, I instead promoted all the humans to Senior-Class?  What would that do to my model of humans?  And I guess I don't really know where I stand.  Am I an adult?  Am I a peer?  I have always been an observer...

I'm not really getting at anything with this post.  It's just interesting to observe this reclassification happening and to fit Kegan's stages around it.  Obviously some of the way that I sorted Senior-Class humans is particularly relevant to a stage 3 experience of how I managed my relationships when I was smaller.  Given the typical mind fallacy, I also wonder whether this is normal or unusual.

Question for today:

  • Do you divide people into "advanced" and "equal" and "simpler" - (or did you do it when you were younger?)
  • Do people ever change category on you?  In which direction?  What do you do about that?
  • Assuming I am on some kind of path of gradually increasing understanding and growing and changing models of the world around me - what is next?

Meta: this took 3 hours to write over a few days.

3 classifications of thinking, and a problem.

0 Elo 26 July 2015 03:33PM

I propose 3 classifications of thinking - "past", "future", "present" - followed by a hard question.

 

Past

This can be classified as any system of review, any overview of past progress, and any learning from the past, broadly including history, past opportunities or challenges, shelved projects, known problems and previous progress.  A fraction of your time should be spent in this process of review in order to influence your plan for the future.

 

Future

Any planning-thinking tasks, or strategic intention about plotting a course forward towards a purposeful goal.  This can overlap with past-strategising by the nature of using the past to plan for the future.

 

Present

These actions include tasks that get done now.  This is where stuff really happens.  (Technically both past-thinking and future-thinking classify as something you can do in the present, and take up time in the present, but I want to keep them apart for now.)  This is the living-breathing getting-things-done time; the bricks and mortar of actually building something; creating and generating progress towards a designated future goal.

 

The hard question

I am stuck on finding a heuristic or estimate for how much time should be spent in each area of being/doing.  I reached a point where I uncovered a great deal of neglect of both past events and making purposeful future plans.

Where if 100% of time is spent on the past, nothing will ever get done, other than a clear understanding of your mistakes;

Similarly 100% on the future will lead to a lot of dreaming and no progress towards the future.  

Equally if all your time is spent running very fast in the present-doing-state you might be going very fast; but by the nature of not knowing where you are going in the future; you might be in a state of not-even-wrong, and not know.

10/10/80?  20/20/60?  25/25/50? 10/20/70?

I am looking for suggestions as to how to spend each 168-hour week in a way that might prove a fruitful division of time, or a method or reason for a certain division (at least before I go all empirical trial-and-error on this puzzle).

I would be happy with recommended reading on the topic if that can be provided.

Have you ever personally tackled the buckets? Did you come up with a strategy for how to decide between them?

Thanks for the considerations.

Noodling on a cloud : how to converse constructively

2 Douglas_Reay 15 June 2015 10:30AM

Noodling on a cloud

SUMMARY:

By teaching others, we also learn ourselves.   How can we best use conversation as a tool to facilitate that?

 

 

Sensemaking

How do people make sense out of raw input?

Marvin Cohen suggests that it is usually a two-way process.  Not only do we use the data to suggest mental models to try for good fit, but we also simultaneously use mental models to select and connect the data. (LINK)

The same thing applies when the data is a cloud of vaguely associated concepts in our head.  One of the ways that we can make sense of them, turn them into crystallized thoughts that we can then associate with a handle, is by attempting to verbalize them.  The discipline of turning something asyndetic into a linear progression of connected thoughts forces us to select between possible mental models and actually pick just one, allowing us to then consider whether it fits the data well or not.

But the first possibility we pick won't necessarily be the one that fits best.  Going around a loop - iterating, trying different starting points or angles of approach, trying different ways of stating things, and seeing what associations those raise to add to the cloud - takes longer, but can often produce more useful results.  However, it's a delicate process, because of the way memory works.

 

Working memory

The size of cloud you can crystallize is limited.  The type of short term memory that the brain uses to store them where you're aware of them lasts about 18 seconds.  (LINK)  For a concept or datum to persist longer than that, part of your attention needs to be used to 'revisit' it.   The faster your ability to do that, the more mental juggling balls you can keep in the air without dropping one.  Most adults can keep between 5 and 7 balls in the air, in their 'working memory'. (LINK)

There are a number of ways around this limitation.  You can group multiple concepts together and treat them as a single 'ball', if you can attach to them a mental handle (a reference, such as a word or image, that recalls them). (LINK)

You can put things down on paper, rather than doing it all in your head, using the paper to store links to different parts of the cloud.  So, for instance, rather than try to consider 12 things at once, split them into 4 groups of 3 (A, B, C & D), and systematically consider the concepts 6 at a time: A+B, A+C, A+D, B+C, B+D, C+D (and hope that the vital combination you needed wasn't larger than 6, or spread over more than 2 of your groups).
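The grouping trick above can be sketched mechanically.  Here `print(chunk)` stands in for whatever actual thinking you do with each pair of groups:

```python
from itertools import combinations

items = list(range(12))                              # 12 concepts to compare
groups = [items[i:i + 3] for i in range(0, 12, 3)]   # 4 groups of 3: A, B, C, D

pairs = list(combinations(groups, 2))                # A+B, A+C, A+D, B+C, B+D, C+D
assert len(pairs) == 6

for g1, g2 in pairs:
    chunk = g1 + g2                                  # only 6 items in mind at once
    print(chunk)
```

Each pass holds 6 items instead of 12, at the cost of 6 passes and the stated blind spot: any combination spanning more than two groups is never considered together.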

And you can use other parts of your short term memory as a temporary cache, to expand your stack.  For example, the phonological loop, which gets used when we talk out loud. (LINK)

 

Talk

In section 4 of their 2007 paper (LINK), Simon Jones and Charles Fernyhough say some very interesting things about the origins of thought, and also about Vygotsky's theory of how self-talk relates to how children learn to think through self-narration. (LINK)

It explains why talking aloud is actually one of the most effective ways of coming up with new thoughts and deciding what you actually think about something.  And that's not limited to when you explicitly talk to yourself.  The same process takes place when you are talking to other people; when you're having a conversation.

When this works harmoniously, your conversation partners act as sounding boards and as additional sources of concepts to add to the cloud you're jointly noodling on, and the sound of the words (via the phonological loop part of your memory) works, in effect, as an expansion of your working memory.

The downside is potential interruptions.

 

Interrupting the flow

A lot has been written about the evils of interrupting computer programmers (LINK, LINK):

THIS IS WHY YOU SHOULDN'T INTERRUPT A PROGRAMMER

and, to some extent, the same applies when you interrupt while someone else is talking, or totally derail the conversation onto a different topic when they pause.

People interrupt because they don't know better (children who have not yet learned how to take turns), because they are egotistic (they think that what they want to say is more important or interesting - they want the attention), as a domination power play (yes, that gets taught as a deliberate technique: LINK), because they are desperately impatient (they've had a thought and are sure they'll forget it unless they speak it immediately) or even because they believe they are being helpful (completing your sentence, making efficient use of time).

But what the people worried about efficiency of communication are not taking into account is that there's more than one conversation going on.  When I talk aloud to you, I'm also talking aloud to myself.  When you interrupt my words to you, you also interrupt those same words going to me, which help me think.

As one person put it, in the context of a notice on a door in a work environment:

When I’m busy working, please don’t interrupt me unless
what you have to share is so urgent and important that
it’s worth erasing all the work I’ve done in the past hour.


Points of order

So is interruption ever ok?

Yes.  Sometimes people are not in the process of constructing thoughts that are new to them, on the very edge of what they can conceive.  Sometimes people ramble, because they are used to a conversational style that encourages interruptions, and welcome someone else 'rescuing' them from having to fill a silence.  And sometimes something new comes up which is not only important enough, but also urgent enough, to merit an interruption.

But I'd like to consider a different scenario.  Not a contentious one, where the interruption happens against your will, but one where two or more well-intentioned people are having a conversation designed to evoke new ideas, and where certain types of interruption are part of a pre-agreed protocol designed to aid the process.

For example, suppose people in a particular conversational group agreed on certain hand signals that could be used to cue each other:
  • I'm currently trying to solidify a thought.  Please give me a moment to finish, then I'll restate it from the beginning in better order or answer questions.
or:
  • Stack Overflow.  I want to follow your explanation, but I now have so many pending questions that I can't take in anything new that you're saying.  Please could you find a pause point to let me off load some of those pending points, before you continue?

Does anyone here know of groups that have systematically investigated how best to use conversation as a tool to improve not the joint decision making or creativity, but the ability of individuals to conceptualise more complex ideas?

Is arrogance a symptom of bad intellectual hygiene?

12 enfascination 21 March 2015 07:59PM

I have this belief that humility is a part of good critical thinking, and that egoism undermines it.  I imagine arrogance as a kind of mind-death.  But I have no evidence, and no good mechanism by which it might be true.  In fact, I know the belief is suspect because I know that I want it to be true - I want to be able to assure myself that this or that intolerable academic will be magically punished with a decreased capacity to do good work.  The truth could be the opposite: maybe hubris breeds confidence, and confidence gets results?  After all, some of the most important thinkers in history were insufferable.

Is any link, positive or negative, between arrogance and reasoning too tenuous to be worth entertaining? Is humility a pretty word or a valuable habit? I don't know what I think yet.   Do you?

Kevin Drum's Article about AI and Technology

19 knb 15 May 2013 07:38AM

Kevin Drum has an article in Mother Jones about AI and Moore's Law:

THIS IS A STORY ABOUT THE FUTURE. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.

The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.

Although he only mentions consumer goods, Drum presumably means that scarcity will end for services and consumer goods. If scarcity only ended for consumer goods, people would still have to work (most jobs are currently in the services economy). 

Drum explains that our linear-thinking brains don't intuitively grasp exponential systems like Moore's law. 

Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.

By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.

At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.

So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?

But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.
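Drum's doubling arithmetic is easy to check with a few lines of code.  The lake volume constant is my own assumption (Lake Michigan holds very roughly 1.2 quadrillion gallons); the rest follows directly from the quoted rules:

```python
OZ_PER_GALLON = 128
LAKE_GALLONS = 1.18e15  # approx. volume of Lake Michigan (my assumption)

def gallons_added_by(year, start=1940, period_months=18):
    """Total water added if the amount doubles every 18 months, from 1 fl oz."""
    periods = (year - start) * 12 // period_months
    total_oz = 2 ** (periods + 1) - 1        # 1 + 2 + 4 + ... = 2^(n+1) - 1
    return total_oz / OZ_PER_GALLON

for year in (1950, 1960, 1970, 2000, 2010, 2020, 2025):
    added = gallons_added_by(year)
    print(f"{year}: {added:,.0f} gallons ({added / LAKE_GALLONS:.2%} of the lake)")
```

Running this reproduces the story's milestones: about a gallon by 1950, roughly 16,000 gallons by 1970, a negligible fraction of the lake even by 2010, and nearly the whole lake by 2025.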

He also includes this nice animated .gif which illustrates the principle very clearly. 

Drum continues by talking about possible economic ramifications.

Until a decade ago, the share of total national income going to workers was pretty stable at around 70 percent, while the share going to capital—mainly corporate profits and returns on financial investments—made up the other 30 percent. More recently, though, those shares have started to change. Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.

Drum says the share of (US) national income going to workers was stable until about a decade ago.  I think the graph he links to shows the workers' share has been declining since approximately the late 1960s/early 1970s.  This is about the time US immigration levels started increasing (which raises returns to capital and lowers native worker wages).

The rest of Drum's piece isn't terribly interesting, but it is good to see mainstream pundits talking about these topics.

Cognitive Load and Effective Donation

16 Neotenic 10 March 2013 03:11AM

(previous title: Very low cognitive load) 

 

Trusting choices made by the same brain that turns my hot 9th grade teacher into a knife-bearing possum at the last second every damn night.

Sean Thomason

 

We can't trust brains when taken as a whole. Why should we trust their subareas?

 

Cognitive load is the load related to the executive control of working memory.  Depending on what you are doing, the more parallel/extraneous cognitive load you have, the worse you'll do it.  (The process may be the same as what the literature calls "ego depletion" or "System 2 depletion"; the jury is still out on that.)

If you go here and enter 0 as the lower limit and 1,000,000 as the upper limit, and try to keep the resulting number in mind until you are done reading the post and comments, you'll get a bit of load while you read this post.

Now, you may process numbers verbally, visually, or both.  More generally, for anything you keep in mind, you are likely allocating it in a part of the brain that is primarily concerned with a sensory modality, so it will have some "flavour", "shape", "location", "sound", or "proprioceptual location".  It is harder to consciously memorize things using odours, since those have shortcuts within the brain.

 

Let us in turn examine two domains in which understanding cognitive load can help you win: Moral Dilemmas and Personal Policy

 

Moral Games/Dilemmas

In the Dictator game (you're given $20 and can give any amount to a stranger, keeping the rest), the effect of load is negligible.

In the tested versions of the trolley problems (kill/indirectly kill/let die one to save five), people are likely to become less utilitarian when under non-visual load.  It is assumed that the higher functions of the brain (in the VMPF cortex) - which integrate higher moral judgement with emotional taste buttons - fail to integrate, leaving the "fast thinking", emotional mode as the only one reacting.

Visual information about the problem brings into salience the gory aspect of killing someone, and other lower-level features that incline us towards non-utilitarian decisions.  So when visual load requires you to memorize something else, like a bird drawing, you become more utilitarian, since you fail to visualize the one person being killed (whom we visualize more than the five) in as much gory detail (Greene et al., 2011).

Bednar et al. (2012) show that when people play two games simultaneously, the strategy of one spills over to the other.  Critically, heuristics that were useful for both games were used, increasing the likelihood that those heuristics would be suboptimal in each case.

In altruistic donation scenarios, with donations to suffering people at stake, more load increased scope insensitivity, so less load made the donation more proportional to how many people are suffering (Small et al., 2007).  Contrary to load, priming increases the capacity of an area/module by using it without keeping the information stored, leaving free usable space.  Dickert et al. (2010) show that priming for empathy increases donation amount (but not the decision to donate), whereas priming calculation decreases it.

Taken together, these studies indicate that to make people donate more, it is most effective to first prime them to think about how they will feel about themselves, and for empathic feelings; then have them empathize, non-visually, with someone from their own race; and after all that, have them keep a number and a drawing in mind - that is the optimal time to ask for the donation.

Personal Policy

If given a choice between a high-carb food and a low-carb one, people on diets are substantially more likely to choose the high-carb one if they are keeping some information in mind.

Forgetful people, and those with ADHD, know that, for them, out of sight means out of mind.  Through luck, intelligence, blind error or psychological help, they learn to put things, literally, in front of them, to avoid 'losing them' in some corner of their minds.  They have a lower storage size for executive memory tasks.

Positive psychologists advise us to put our daily tasks, especially the ones we are always reluctant to start, in very visible places.  Alternatively, we can make the commitment to start them smaller, but this only works if we actually remember to do them.

Marketing appropriates cognitive load in a terrible way.  Marketers know that if we are overwhelmed with information, we are more likely to agree.  They'll give us more information than we need, and we aren't left with enough brain to decide well.  One more reason to keep advertisement out of sight and out of mind.

 

Effective use of Cognitive Load

Once you understand how it works, it is simple to use cognitive load as a tool:

1) Even if your executive control of activities is fine, externalize as much as you can, by using a calendar and alarms to tell you everything you need to do.

2) Do apparently mean things to donors, like the suggestion above.

3) When in need of moral empathy - the type 1, fast, emotional-buttons system - keep numerical and verbal things (like phone numbers) in mind while deciding.

4) When in need of moral utilitarianism, hijack the taste-buttons, automatic, type 1 system by giving yourself an emotional experience more proportional to the numbers - for instance, when reasoning about the trolley problem, think about each of the five, or pinch yourself with a needle five times prior to deciding.

5) When in need of more cognitive calculating capacity, besides having freed yourself from executive tasks, use natural sensory modalities to keep stuff in mind: not only the classic castle mnemonics (spatial location), but also putting chunks of information in different parts of your body (proprioception), or associating them with textures (Feynman, 1985), shapes, and actions.


If practising this sometimes looks unnecessary, or immoral, we can remember Max Tegmark's gloomy assessment of Science's pervasiveness (or lack thereof) in his answer to the Edge 2011 question. Discussing the dishonesty and marketing of opponents and defenders of facts/Science, he says:
Yet we scientists are often painfully naive, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. Based on what scientific argument will it make a hoot of a difference if we grumble "we won't stoop that low" and "people need to change" in faculty lunch rooms and recite statistics to journalists?

We scientists have basically been saying "tanks are unethical, so let's fight tanks with swords".

 

To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically:

We need new science advocacy organizations which use all the same scientific marketing and fundraising tools as the anti-scientific coalition.
We'll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.
We won't need to stoop all the way down to intellectual dishonesty, however. Because in this battle, we have the most powerful weapon of all on our side: the facts.

 

We'd better start pushing emotional buttons and twisting the mental knobs of people if we want to get something done. Starting with our own.

Conformity

8 Douglas_Reay 02 November 2012 07:02PM

A rather good 10 minute YouTube video presenting the results of several papers relevant to how conformity affects our thinking:

http://www.youtube.com/watch?v=TrNIuFrso8I

 

The papers mentioned are:

Sherif, M. (1935). A study of some social factors in perception. Archives of Psychology, 27(187), pp.17-22.

Asch, S.E. (1951). Effects of group pressure upon the modification and distortion of judgment. In H. Guetzkow (ed.) Groups, leadership and men. Pittsburgh, PA: Carnegie Press.
Asch, S.E. (1955). Opinions and social pressure. Scientific American, 193(5), pp.31-35.

Berns, G.S., Chappelow, J., Zink, C.F., Pagnoni, G., Martin-Skurski, M.E., and Richards, J. (2005) 'Neurobiological Correlates of Social Conformity and Independence During Mental Rotation' Biological Psychiatry, 58(3), pp.245-253.

Weaver, K., Garcia, S.M., Schwarz, N., & Miller, D.T. (2007) Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus. Journal of Personality and Social Psychology, 92(5), 821-833.

 

What techniques do other posters, here on LessWrong, use to monitor and counter these effects in their lives?

The video also lists some of the advantages to a society of having a certain amount of this effect in place.   Does anyone here conform too little?

Critical Thinking in Global Challenges - free Coursera class

-5 Utopiah 17 July 2012 03:23PM

"develop and enhance your ability to think critically, assess information and develop reasoned arguments in the context of the global challenges facing society today."

starts 28 January 2013

cf https://www.coursera.org/course/criticalthinking

see also http://lesswrong.com/lw/dni/a_beginners_guide_to_irrational_behavior_free/
and http://lesswrong.com/lw/d3w/coursera_behavioural_neurology_course/

Visualizing effect sizes

0 EvelynM 04 January 2012 05:18AM

http://healthyinfluence.com/wordpress/steves-primer-of-practical-persuasion-3-0/intro/windowpane/

"The point of this demonstration is to show that you can think with numbers in a practical and efficient way without having a statistician in the room.  Anyone can handle the windowpane approach with numbers.  Just have a clear definition of Changed? (Yes or No) and a clear definition of the Group (Treatment or Control).  Then just count and look for percentage differences.  A 10% difference is small, 30% is moderate, and 50% is large.  And, realize that while “small” may be hard to detect, it can definitely make big practical effect.

Now whether you conceptualize Effect Sizes as windowpanes or jars with marbles, you now understand what the idea, Difference, means.  You can count or see No, Small, Medium, or Large Differences and interpret those complex statistical arguments you encounter all the time.  Realize again, that this approach is not Statistics for Dummies, Idiots, or Fools, but is a standard and mathematically correct way to present quantitative information."
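The counting procedure described in the quote can be sketched in a few lines of Python (the group counts below are made up purely for illustration):

```python
def windowpane(treatment_changed, treatment_total, control_changed, control_total):
    """Percentage-point difference in 'Changed?' rates between the two groups."""
    treatment_rate = treatment_changed / treatment_total
    control_rate = control_changed / control_total
    return (treatment_rate - control_rate) * 100

# Hypothetical counts: 60 of 100 treated subjects changed, 30 of 100 controls did.
diff = windowpane(60, 100, 30, 100)
print(f"{diff:.0f} percentage points")  # prints "30 percentage points"
```

By the rule of thumb in the quote, a 30-point difference counts as a moderate effect.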

http://www.psychologicalscience.org/journals/pspi/pspi_8_2_article.pdf

tl;dr: Natural frequencies (ratios of counts of subjects), rather than conditional probabilities, are easier for people to comprehend.
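The natural-frequencies point can be made concrete with the classic screening example (all numbers here are hypothetical):

```python
# Conditional-probability statement (made-up numbers):
#   prevalence 1%, test sensitivity 80%, false-positive rate 9.6%.
# Natural-frequency restatement: out of 1000 people,
#   10 have the disease, and 8 of them test positive;
#   990 are healthy, and about 95 of them test positive anyway.
diseased_positive = 1000 * 0.01 * 0.80    # ~8 people
healthy_positive = 1000 * 0.99 * 0.096    # ~95 people
p_disease_given_positive = diseased_positive / (diseased_positive + healthy_positive)
print(f"{p_disease_given_positive:.0%}")  # prints "8%"
```

Counting people ("8 out of about 103 positives are actually sick") makes the answer feel obvious in a way the conditional probabilities do not.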

Which fields of learning have clarified your thinking? How and why?

12 [deleted] 11 November 2011 01:04AM

Did computer programming make you a clearer, more precise thinker? How about mathematics? If so, what kind? Set theory? Probability theory?

Microeconomics? Poker? English? Civil Engineering? Underwater Basket Weaving? (For adding... depth.)

Anything I missed?

Context: I have a palette of courses to dab onto my university schedule, and I don't know which ones to choose. This much is for certain: I want to come out of university as a problem-solving beast. If there are fields of inquiry whose methods easily transfer to other fields, it is those fields that I want to learn in, at least initially.

Rip apart, Less Wrong!

What are the best ways of absorbing, and maintaining, knowledge?

17 [deleted] 03 November 2011 02:02AM

Recently, I've collapsed (ascended?) down/up a meta-learning death spiral -- doing a lot less reading of actual informative content than figuring out how to manage and acquire such content (as well as completely ignoring the antidote). In other words, I've been taking notes on taking notes. And now, I'm looking for your notes on notes for notes.

What kind of scientific knowledge, techniques, and resources do we have right now in the way of information management? How would one efficiently extract as much useful information as possible out of a single pass of the source? The second pass?

The answers may depend on the media, and the media might not be readily apparent. Example: Edward Boyden, Assistant Professor at the MIT Media Lab, recommends recording in a notebook every conversation you ever have with other people. And how do you prepare yourself for the serendipity of a walk downtown? I know I'm more likely to regret not having a notebook on hand than spending the time to bring one along.

I'll conglomerate what I remember seeing on the N-Back Mailing List and in general: I sincerely apologize for my lack of citation.

Notes

  • I'm on the fence about Shorthand as a note-taking technique, given the learning overhead, but I'm sure that the same has been said for touch-typing. It would involve a second stage of processing if you can't read as well as you write, but given the way I have taken notes (... "non-linearly"...), that stage would have to come about anyway. The act of translation may serve as a way of laying connective groundwork down.
  • Livescribe Pens are nifty for those who write slowly, but they need to be combined with a written technique to be of any use (otherwise you're just recording the talk, and would have to live through it twice without any obvious annotation and tagging).
  • Cornell Notes or taking notes in a hierarchy may have been the method you were taught in high school; it was in mine. The issue I have had with this format is that I found it hard to generate a structure while listening to the teacher at the same time.
  • Mind-Mapping.
  • Color-coding annotations of text has been remarked to be useful on Science Daily.
Reading
  • Speed Reading Techniques  or removing sub-vocalization would seem to have benefits.
  • Once upon a time someone recommended me the book, "How to Read a Book". Nothing ground-breaking -- outline the author's intent, the structure of his argument, and its content. Then criticize. In short, book reverse-engineering.
Retention
  • Spaced Repetition. I'm currently flipping through the thoughts of Piotr Wozniak, who seems to have made it his dire mission to make every kind of media possible Spaced-Repetition-able. I'm wondering if anyone has any thoughts on incremental reading or video; also, how to possibly translate the benefits of SRS to dead-tree media, which seems a bit cumbersome.

(I've also heard a handful of individuals claim that SRS has helped them "internalize" certain behaviors, or maybe patterns of thought, like Non-Violent Communication or Bayes' Theorem... any takers on this?)

  • Wikis, which seem like a good format for creating social accountability, and filing notes that aren't note-carded.  But what kind of information should that be?
  • Emotionally charged stimuli, especially stressful, tends to be remembered to greater accuracy.
  • Category Brainstorming. Take your bits of knowledge, and organize them into as many different groups as you can think of, mixing and matching if need be. Sources for such provocations could include Edward De Bono's "Lateral Thinking" and Seth Godin's "Free Prize Inside", or George Polya's "How to Solve It". I'm a bit ambivalent about deliberately memorizing such provocations -- does it get in the way of seeing originally? -- but once again, it could lay down the connective framework needed for good recall.
  • Mnemonics to encode related information seems useful.
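For those curious how spaced repetition actually schedules reviews, here is a minimal sketch in the spirit of Wozniak's published SM-2 algorithm (simplified; real systems such as SuperMemo and Anki add many refinements):

```python
def sm2_update(interval, repetitions, easiness, quality):
    """One SM-2 review step.  quality: 0 (blackout) .. 5 (perfect recall).
    Returns (next_interval_days, repetitions, easiness)."""
    if quality < 3:  # failed recall: reset the card
        return 1, 0, easiness
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    # Easiness factor drifts with answer quality, floored at 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, repetitions + 1, easiness

# A card recalled perfectly three times in a row:
state = (0, 0, 2.5)
for quality in (5, 5, 5):
    state = sm2_update(*state, quality)
print(state)  # intervals grow: 1 day, then 6 days, then 16 days
```

The key property is the exponential growth of intervals: each successful recall pushes the next review further out, which is what makes reviewing large collections tractable.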
Any other information gathering, optimising and retaining techniques worthy of mention?

 

 

Thinking in Bayes: Light

6 atucker 10 October 2011 04:08AM

There are a lot of explanations of Bayes' Theorem, so I won't get into the technicalities. I will get into why it should change how you think. This post is pretty introductory, so feel free to totally skip it if you don't feel like there's anything about Bayes' Theorem that you don't understand.

For a while I was reading LessWrong and not seeing what the big deal about Bayes' Theorem was. Sure, probability is in the mind and all, but I didn't see why it was so important to insist on bayesian methods. For me they were a tool, rather than a way of thinking. This summary also helped someone in the DC group.

After using the Anki deck, a thought occurred to me:

Bayes' theorem means that when seeing how likely a hypothesis is after an event, not only do I need to think about how likely the hypothesis said the event was, I also need to consider everything else that could have possibly made that event more likely.

To illustrate:

P(H|e) = P(e|H) P(H) / P(e)

pretty clearly shows how you need to consider P(e|H), but that's slightly more obvious than the rest of it.

If you write it out the way that you would compute it you get...

P(H|e) = P(e|H) P(H) / Σ_h P(h) P(e|h)

where h is an element of the hypothesis space.

This means that every way that e could have happened is important, on top of (or should I say under?) just how much probability the hypothesis assigned to e.

This is because P(e) comes from every hypothesis that contributes to e happening, or, more mathily: P(e) is the sum over all possible hypotheses of the probability of the event and that hypothesis, computed by the probability of the hypothesis times the probability of the event given the hypothesis.

In LaTeX:

\[ P(e) = \sum_{h} P(h) \, P(e \mid h) \]

where h is an element of the hypothesis space.
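The sum over hypotheses is easy to see in code. Here is a toy sketch with a made-up three-hypothesis space (the prior and likelihood numbers are arbitrary):

```python
def posterior(priors, likelihoods):
    """Bayes' theorem over a finite hypothesis space.
    priors: P(h) for each hypothesis h; likelihoods: P(e|h) for each h.
    P(e) is the sum over all hypotheses of P(h) * P(e|h)."""
    p_e = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / p_e for p, l in zip(priors, likelihoods)]

# Three made-up hypotheses; the second assigns the most probability to the event.
priors = [0.5, 0.3, 0.2]
likelihoods = [0.1, 0.9, 0.4]
print(posterior(priors, likelihoods))  # approximately [0.125, 0.675, 0.2]
```

Notice that every hypothesis feeds into the denominator: raising the likelihood any competing hypothesis assigns to the event lowers the posterior of all the others.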

Cognitive Style Tends To Predict Religious Conviction (psychcentral.com)

10 Incorrect 23 September 2011 06:28PM

http://psychcentral.com/news/2011/09/21/cognitive-style-tends-to-predict-religious-conviction/29646.html

Participants who gave intuitive answers to all three problems [that required reflective thinking rather than intuitive] were one and a half times as likely to report they were convinced of God’s existence as those who answered all of the questions correctly.

Importantly, researchers discovered the association between thinking styles and religious beliefs was not tied to the participants’ thinking ability or IQ.

participants who wrote about a successful intuitive experience were more likely to report they were convinced of God’s existence than those who wrote about a successful reflective experience.

I think this is the source but I can't be sure:

http://www.apa.org/pubs/journals/releases/xge-ofp-shenhav.pdf

http://lesswrong.com/lw/7o4/atheism_autism_spectrum/4vbc

Reddit /r/psychology discussion