
[Link] My Notes on the book, "Turn the ship around" by L David Marquet

3 Elo 30 March 2017 06:32AM

Exploration-Exploitation problems

9 Elo 28 December 2016 01:33AM

Original post: http://bearlamp.com.au/exploration-exploitation-problems/

I had been working on the assumption that exploration-exploitation was common knowledge.  Unfortunately I did the smart thing of learning about it from a mathematician at a dojo in Melbourne - which means that no, not everyone knows about it.  I was reminded of that again today when I searched for a good quick explanation of the puzzle and came up short.  With that in mind, this post is about exploration-exploitation.


The classic exploration-exploitation problem in mathematics is the multi-armed bandit, named after a slang term for a bank of slot machines.  The player knows that each machine has a variable payoff and that they have a limited number of attempts before they run out of money.  You want to balance trying out new machines with unknown payoffs against exploiting the knowledge you already have from the machines you tried earlier.

When you first start on new bandits, you really don't know which will pay out and at what rates.  So some exploration is necessary to know what your reward ratio in the territory will be.  As your knowledge grows, you get to know which bandits are likely to pay, and which are not, and this later informs your choices as to where to place your dollars.

Mathematicians love a well-specified problem like this because it allows us to build algorithmic models that will return rewards or guarantee rewards under certain circumstances.  (See also the secretary problem, which does something similar, and where I showed how it applies to real-life dating.)

Some of the mathematical solutions to this problem look like:

Epsilon-greedy strategy: The best lever is selected for a proportion 1−ε of the trials, and a lever is selected at random (with uniform probability) for a proportion ε.  A typical parameter value might be ε = 0.1, but this can vary widely depending on circumstances.

Epsilon-decreasing strategy: Similar to the epsilon-greedy strategy, except that the value of ε decreases as the experiment progresses, resulting in highly exploratory behaviour at the start and highly exploitative behaviour at the finish.
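
As a concrete illustration, here is a minimal simulation sketch of both strategies in Python.  The payout rates, trial count, and decay schedule below are all invented for the example; real solutions tune these to the problem at hand.

    import random

    PAYOUT_PROBS = [0.2, 0.5, 0.7]  # hidden per-machine payout rates (invented for the example)

    def epsilon_greedy(trials=1000, epsilon=0.1, decreasing=False):
        counts = [0] * len(PAYOUT_PROBS)  # pulls per machine
        wins = [0] * len(PAYOUT_PROBS)    # payouts per machine
        total = 0
        for t in range(1, trials + 1):
            # Epsilon-decreasing: start fully exploratory, finish almost fully exploitative.
            eps = 1.0 / t ** 0.5 if decreasing else epsilon
            if random.random() < eps or all(c == 0 for c in counts):
                machine = random.randrange(len(PAYOUT_PROBS))  # explore a random lever
            else:
                # exploit: the lever with the best observed payout rate so far
                machine = max(range(len(counts)),
                              key=lambda i: wins[i] / counts[i] if counts[i] else 0.0)
            payout = random.random() < PAYOUT_PROBS[machine]
            counts[machine] += 1
            wins[machine] += payout
            total += payout
        return total

    print("epsilon-greedy:    ", epsilon_greedy())
    print("epsilon-decreasing:", epsilon_greedy(decreasing=True))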

Of course there are more strategies, and the context and nature of the problem matter.  If all the machines might suddenly change one day in the future, you would want a strategy that prepares for scenarios like that.  As you shift away from the hypothetical and towards real life, your models need to grow in complexity to cater to the details of the real world.

If this problem is made more like real life (where we live and breathe), the variability of reality starts coming into play more and more.  In talking about this I want to emphasise not the problem as interesting, but the solution of <sometimes explore> and <sometimes exploit> in specific ratios or for specific reasons.  The mathematical solutions to the multi-armed bandit problem are designed to take advantage of the balance between not knowing enough and exploiting what you do know.

What supercharges this solution, and how it can be applied to real life, is the value of information (VoI).

Value of information says that, in relation to making a decision, whatever informs that decision is itself worth something.  For expensive decisions, risky decisions, dangerous decisions, highly lucrative decisions, or particularly uncertain decisions, being more sure is worth paying for.

VoI suggests that any decision that is worth money (or worth something) can have information that informs that decision.  The value of that information can be as much as the value of the reward for correctly making the decision.  Of course, if you spend all the potential gains from the decision on getting perfect information, you lose the chance to make a profit.  However, usually a cheap (relative to the decision) piece of information exists that will inform the decision and assist.
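
A toy worked example of that ceiling (every number here is invented for illustration): suppose a two-way decision pays $1000 if you choose correctly, and unaided you are 50/50.

    # Toy value-of-information calculation; all numbers are assumptions for the example.
    prize = 1000.0     # payoff for deciding correctly
    p_unaided = 0.5    # chance of choosing correctly with no extra information

    ev_unaided = p_unaided * prize   # $500 expected value, deciding blind
    ev_perfect = 1.0 * prize         # $1000 expected value, deciding with certainty

    # The most any information about this decision can be worth:
    voi_ceiling = ev_perfect - ev_unaided        # $500
    # A cheap signal that merely lifts your accuracy to 80% is still worth a lot:
    voi_cheap = 0.8 * prize - ev_unaided         # $300
    print(voi_ceiling, voi_cheap)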

How does this apply to exploration-exploitation?

The idea of VoI is well covered in the book How to Measure Anything.  While the book goes into detail and is really excellent for applying to big decisions, the ideas can also be applied to our simple everyday problems.  With this in mind I propose a heuristic:

You want to explore just as much as will increase your information about both the quality of the remaining exploration (and its possible results) and the expected returns on your existing knowledge.


The next thing to supercharge our exploration-exploitation and VoI knowledge is Diminishing returns.

Diminishing returns on VoI means that when you start out knowing nothing at all, a little bit of knowledge goes a long way.  As you keep adding more and more information, the return on each extra piece of knowledge diminishes.

Worked example:  Knowing the colour of the sky.

Suppose you are blind and no one has ever told you what colour the sky is.  You can't really be sure what colour the sky is, but generally, if you ask enough people, the consensus should be a good enough way to settle on an answer.

So one guy gave you your first inkling of what the answer is.  But can you really trust him?

Yea cool.  Ten people.  Probably getting sure of yourself now.

Really, what good are two thousand people after the first fifty, especially if they all agree?  There's got to be less value in the 2001st person telling you than there was in the 3rd.
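
A toy Bayesian version of this poll makes the diminishing returns visible (the 90% reliability figure is an assumption for the example):

    import math

    prior = 0.5        # start undecided on "the sky is blue"
    reliability = 0.9  # assumed chance a random person reports the colour correctly

    for n in [1, 3, 10, 50, 2000]:
        # Update in log-odds space so large n can't overflow.
        log_odds = math.log(prior / (1 - prior)) \
                 + n * math.log(reliability / (1 - reliability))
        posterior = 1 / (1 + math.exp(-log_odds))
        print(f"after {n:4d} agreeing answers: P(blue) = {posterior:.12f}")

The first answer moves you from 50% to 90%; everything after the first dozen or so is invisible even at twelve decimal places.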


Going back to VoI, how valuable was the knowledge that the sky is blue?  Probably not very valuable, and this isn't a great way to gather knowledge in the long run.

The other flaw here is that if I asked you the question - "what colour is the sky?" - you could probably offer a confident guess.  If you are a well-calibrated human, you already know a little bit about everything, and the good news is that calibration is trainable.

With that in mind, if you want to play a calibration game there are plenty available via Google.

The great thing about calibration is that it seems to apply across all your life, and all things that you estimate.  Which is to say that once you are calibrated, you are calibrated across domains.  This means that if you become good at it in one area, you become better at it in other areas.  We're not quite talking about hitting the bullseye every time, but we are talking about being confident that the bullseye is over there in that direction.  Which is essentially the ability to predict the future within a reasonable set of likelihoods.


Once you are calibrated, you can take that calibration and use it to apply diminishing returns through VoI to supercharge your exploration-exploitation.  But we're not done.  What if we add in Bayesian statistics?  What if we can shape our predicted future and gradually update our beliefs based on tiny snippets of data that we gather over time, purposefully, by thinking about VoI and the diminishing returns of information?
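
One standard algorithm that rolls all of this together on the original bandit problem is Thompson sampling (my illustration, not something from the post): keep a Beta posterior over each machine's payout rate, update it on every pull, and let the remaining uncertainty drive exploration automatically.  The payout rates below are invented for the example.

    import random

    PAYOUT_PROBS = [0.2, 0.5, 0.7]  # hidden payout rates, invented for the example

    # One (wins, losses) record per machine; Beta(1 + wins, 1 + losses) is the posterior.
    beliefs = [[0, 0] for _ in PAYOUT_PROBS]
    total = 0

    for _ in range(1000):
        # Sample a plausible payout rate from each posterior and play the best sample.
        samples = [random.betavariate(1 + w, 1 + l) for w, l in beliefs]
        machine = samples.index(max(samples))
        win = random.random() < PAYOUT_PROBS[machine]
        beliefs[machine][0 if win else 1] += 1  # Bayesian update on one observation
        total += win

    print("total reward:", total)
    print("(wins, losses) per machine:", beliefs)

Early on the posteriors are wide, so exploration happens for free; as evidence accumulates they narrow and play concentrates on the best machine - exploration, exploitation, VoI, and diminishing returns in a dozen lines.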

I don't want to cover Bayes because people far smarter than me have covered it very well.  If you are interested in learning Bayes I would suggest heading to Arbital for their excellent guides.

But we're not done at Bayes.  This all comes down to the idea of trade-offs.  Exploration vs exploitation is a trade-off of {time/energy} against expected reward.


A classic example of a trade-off is the story of sharpening the saw (from the book The 7 Habits of Highly Effective People):

A woodcutter strained to saw down a tree.  A young man who was watching asked “What are you doing?”

“Are you blind?” the woodcutter replied. “I’m cutting down this tree.”

The young man was unabashed. “You look exhausted! Take a break. Sharpen your saw.”

The woodcutter explained to the young man that he had been sawing for hours and did not have time to take a break.

The young man pushed back… “If you sharpen the saw, you would cut down the tree much faster.”

The woodcutter said “I don’t have time to sharpen the saw. Don’t you see I’m too busy?”

The thing about life is that all of it is trade-offs between things you want to do and other things you want to do.

Exploration and exploitation is a trade off between the value of what you know and the value of what you might know if you find out.


Try this:

  • Make a list of all the things you have done over the last 7 days.  (Use your diary and rough time chunking)
  • Sort them into exploration activities and exploitation activities.
Answer this:
  • Am I exploring enough? (on a scale of 1-10)
  • Am I exploiting enough? (on a scale of 1-10)
  • Have I turned down any exploring opportunities recently?
  • Have I turned down any exploitation opportunities recently?
  • How could I expand any exploring I am already doing?
  • How could I expand any exploiting I am already doing?
  • How could I do more exploring?  How could I do less exploring?
  • How could I do more exploiting?  How could I do less exploiting?

There are two really important things to take away from the Exploration-Exploitation dichotomy:

  1. You probably make the most measurable and ongoing gains in the exploitation phase.  I mean - let's face it, these are long-running goal-seeking behaviours, like sticking to an exercise routine.
  2. Exploration might seem more fun (finding exciting new hobbies), but are you sure that's what you want to be doing in regard to 1?

Meta: This is part 1 of a 4 part series.

This took in the order of 10-15 hours to finish because I was doing silly things like trying to fit 4 posts into 1 and stumbling over myself.

[Link] Animated explainer video promoting EA-themed effective giving ideas and meta-charities

-7 Gleb_Tsipursky 16 October 2016 10:44PM

No negative press agreement

-10 Elo 01 September 2016 11:10AM

Original post:  http://bearlamp.com.au/no-negative-press-agreement/

What is a no negative press agreement?

A no negative press agreement conditions a media outlet's right to publish information provided by a person on the agreement that the person not be portrayed negatively by the press.

Why would a person want that?

The press has powers above and beyond everyday people to publish information and spread knowledge and perspective about an issue, in ways that can be damaging to an individual.  An individual, while motivated by the appeal of publicity, is also concerned about the potential damage caused by negative press.

Every person is the hero of their own story.  From one's own perspective, one's actions were justified and motivated by one's own intentions and worldview; no reasonable person would tell their own story (other than purposefully) in a way that spins them as the negative conspirator of a plot, actively causing negative events in the world for no reason.

Historically, humans have been motivated to care more about bad news than good news, for reasons that expand on the idea that bad news might herald your death (and so be subject to natural selection) while good news was irrelevant for survival purposes.  Today we are no longer in that historic period, yet we still pay strong attention to bad news.  It's clear that bad news can personally affect individuals - not only those in the stories, but others experiencing the bad news, who can be left with a negative worldview or become upset or distraught.  In light of the fact that bad news is known to spread more than good news, and also risks negatively affecting us mentally, we are motivated to avoid bad news: not creating it, not endorsing it, and not aiding in its creation.

The binding agreement is designed to do several things:

  • protect the individual from harm
  • reduce the total volume of negative press in the world
  • decrease the damage caused by negative press in the world
  • bring about the future we would rather live in
  • protect the media outlet from harming individuals

Does this limit news-maker's freedom to publish?

That is not the intent.  At the outset it's easy to think that it could have that effect, and perhaps in a very shortsighted way it might.  Beyond the very early effects, it will have a net positive effect of creating news of positive value, protecting the media from escalating negativity, and bringing about the future we want to see in the world.  If it limits media outlets in any way, it should be to stop them from causing harm.  At that point, any non-compliance by a media entity will signal a desire to act as an agent of harm in the world.

Why would a media outlet be an agent of harm?  Doesn't that go against the principles of no negative press?

While media outlets (like humans) set out with the good intention of not having a net negative effect on the world, they can be motivated by other concerns - for example, the value of being more popular, or the direction from which they are paid for their efforts (such as advertising revenue).  The concept of competing commitments, and being motivated by conflicting goals, is best covered by Scott under the name Moloch.

The no negative press agreement is an attempt to create a commons which binds all relevant parties to act better than the potential tragedy would have them act.  This commons has a desire to grow, and is motivated to maintain itself.  If any media outlets are motivated to defect, they are to be penalised by both the rest of the press and the public.

How do I encourage a media outlet to comply with no negative press?

Ask them to publish a policy with regard to no negative press.  If you are an individual interested in interacting with the media, and are concerned about the risks associated with negative press, you can suggest an individual binding agreement in the interim, while the media body designs and publishes a relevant policy.

I think someone violated the no negative press policy, what should I do?

At the time of writing, no one is bound by the concept of no negative press.  Should there be desire and pressure in the world for entities to comply, they are more likely to comply.  To create that pressure, a few actions can be taken:

  • Write to media entities on public record and request they consider a no negative press policy, outline clearly and briefly your reasons why it matters to you.
  • Name and shame media entities that fail to comply with no negative press, or fail to consider a policy.
  • Vote with your feet - if you find a media entity that fails to comply, do not subscribe to their information and vocally encourage others to do the same.

Meta: this took 45mins to write.

The call of the void

-6 Elo 28 August 2016 01:17PM

Original post:  http://bearlamp.com.au/the-call-of-the-void

L'appel du vide - The call of the void.

When you are standing on the balcony of a tall building, looking down at the ground and on some track your brain says "what would it feel like to jump".  When you are holding a kitchen knife thinking, "I wonder if this is sharp enough to cut myself with".  When you are waiting for a train and your brain asks, "what would it be like to step in front of that train?".  Maybe it's happened with rope around your neck, or power tools, or what if I take all the pills in the bottle.  Or touch these wires together, or crash the plane, crash the car, just veer off.  Lean over the cliff...  Try to anger the snake, stick my fingers in the moving fan...  Or the acid.  Or the fire.

There's a strange phenomenon where our brains seem to ask, "I wonder what the consequences of this dangerous thing are" - and we don't know why it happens.  There has only been one paper on the concept (sorry, it's behind a paywall), and all it really did is identify it.  I quite like the paper for quoting both Captain Jack Sparrow ("You know that feeling you get when you're standing in a high place… sudden urge to jump?… I don't have it" - Pirates of the Caribbean: On Stranger Tides, 2011) and Freud ("a drive to return to an inanimate state of existence", 1922).

Taking a look at their method: they surveyed 431 undergraduates about their experiences of what they coined HPP (the High Place Phenomenon).  They found that around 30% of their respondents had experienced HPP, and tried to measure whether it was related to anxiety or suicidality.  They also proposed a theory.

...we propose that at its core, the experience of the high place phenomenon stems from the misinterpretation of a safety or survival signal. (e.g., “back up, you might fall”)

I want to believe it, but today there are literally no other papers on the topic.  And no evidence either way.  So all I can say is - we don't really know.  s'weird.  Dunno.


This week I met someone who uncomfortably described their experience of toying with l'appel du vide.  I explained to them how this is a common and confusing phenomenon, and, to their relief, they said, "it's not like I want to jump!".  Around 5 years ago (before I knew its name) an old friend recounted the experience of wondering what it would be like to step in front of a moving bus (with discomfort), any time she was near one.  I have coaxed a friend out of the middle of a road (they weren't drunk and weren't on drugs at the time).  And dragged friends out of the ocean.  I have it with knives, in a way that borders on OCD behaviour: the desire to look at and examine the sharp edges.

What I do know is this.  It's normal.  Very normal.  Even if it's not 30% of the population, it could easily be 10 or 20%.  Everyone has a right to know that it happens, and it's normal and you're not broken if you experience it.  Just as common a shared human experience as common dreams like your teeth falling out, or of flying, running away from groups of people, or being underwater.  Or the experience of rehearsing what you want to say before making a phone call.  Or walking into a room for a reason and forgetting what it was.

Next time you are struck by l'appel du vide, don't get uncomfortable.  Accept that it's a neat thing that brains do, and it's harmless.  Experience it.  And together with me - wonder why.  Wonder what evolutionary benefit has given so many of us l'appel du vide.

And be careful.


Meta: this took one hour to write.

The barriers to the task

-7 Elo 18 August 2016 07:22AM

Original post: http://bearlamp.com.au/the-barriers-to-the-task/


For about two months now I have been putting in effort to run in the mornings.  To make this happen, I had to take away all the barriers to me wanting to do that.  There were plenty of them, and I failed to leave my house plenty of times.  Some examples are:

Making sure I don't need correct clothes - I leave my house shirtless and barefoot, and grab my key on the way out.  

Pre-commitment to run - I take my shirt off when getting into bed the night before, so I don't even have to consider the action in the morning when I roll out of bed.

Being busy in the morning - I no longer plan any appointments before 11am.  Depending on the sunrise (I don't use alarms), I wake up in the morning, spend some time reading things, then roll out of bed to go to the toilet and leave my house.  In Sydney we just passed the depths of winter and it's beginning to get light earlier and earlier in the morning.  Which is easy now; but was harder when getting up at 7 meant getting up in the dark.  

There were days when I would wake up at 8am, stay in bed until 9am, then realise if I left for a run (which takes around an hour - 10am), then came back to have a shower (which takes 20mins - 10:20), then left to travel to my first meeting (which can take 30mins 10:50).  That means if anything goes wrong I can be late to an 11am appointment.  But also - if I have a 10am meeting I have to skip my run to get there on time.

Going to bed at a reasonable hour - I am still getting used to deciding not to work myself ragged.  I decided to accept that sleep is important, and trust to let my body sleep as long as it needs.  This sometimes also means that I can successfully get bonus time by keeping healthy sleep habits.  But also - if I go to sleep after midnight I might not get up until later, which means I compromise my "time" to go running by shoving it into other habits.

Deciding where to run - google maps, look for local parks, plan a route with the least roads and least traffic.  I did this once and then it was done.  It was also exciting to measure the route and be able to run further and further each day/week/month.


What's in your way?

If you are not doing something that you think is good and right (or healthy, or otherwise desirable), there are likely things in your way.  If you just found out about an action that is good, well and right, and there is nothing stopping you from doing it - great.  You are lucky this time.  Just.Do.It.

If you are one of the rest of us - who know that:

  • daily exercise is good for you
  • The right amount of sleep is good for you
  • Eating certain foods are better than others
  • certain social habits are better than others
  • certain hobbies are more fulfilling (to our needs or goals) than others

And you have known this a while but still find yourself not taking the actions you want - then it's time to start asking what is in your way.  You might find your barrier on someone else's list, but that's looking for a needle in a haystack.

You are much better off doing this (System 2 exercise):

  1. take 15 minutes with pencil and paper.
  2. At the top write, "I want to ______________".
  3. If you know that's true you might not need this step - if you are not sure - write out why it might be true or not true.
  4. Write down the barriers that are in the way of you doing the thing.  think;
    • "can I do this right now?" (might not always be an action you can take while sitting around thinking about it - i.e. eating different foods)
    • "why can't I just do this at every opportunity that arises?"
    • "how do I increase the frequency of opportunities?"
  5. Write out the things you are doing instead of that thing.
    These things are the barriers in your way as well.
  6. For each point - consider what you are going to do about them.

Questions:

  • What actions have you tried to take on?
  • What barriers have you encountered in doing so?
  • How did you solve that barrier?
  • What are you struggling with taking on in the future?

Meta: this borrows from the Immunity to Change process, which can best be read about in the book Right Weight, Right Mind.  It also borrows from CFAR-style techniques like resolve cycles (also known as focused grit), hamming questions, and murphy-jitsu.

Meta: this took one hour to write.

Cross posted to lesswrong: http://lesswrong.com/lw/nuq

The meta-strategy

-6 Elo 02 August 2016 11:08PM

Original post:  http://bearlamp.com.au/against-the-five-love-languages/


You are in a relationship, someone made some objection about communication, you don't seem to understand what's going on.  Many years later you find yourself looking back at the relationship and reflecting with friends.  That's when someone brings up The Five Love Languages.  Oh deep and great and meaningful secrets encoded into a book.

The 5 languages are: 

  1. Gifts
  2. Quality time
  3. Words of affirmation
  4. Acts of service (devotion)
  5. Physical touch (intimacy)

Oooooh, if only you had spent more energy trying to get quality time, and less effort on gifts, that relationship could have been saved.  Or the other way - the relationship was doomed because you wanted quality time and they wanted gifts as a show of love.

You start seeing the world in 5 languages: your coworker offering to get you a coffee is a gift; your boss praising your good work is words of affirmation.  You start thinking like a man with a hammer.  Strictly speaking, I enjoy man-with-a-hammer syndrome.  I like to use a model to death, and then pick a new model and do it all again.


What I want you to do now is imagine you didn't do that.  Imagine we cloned the universe.  In one universe we gave you the love-languages book and locked you in a room to read it.  In the second universe we offered to run you through a new relationship-training exercise.  "It's no guide book on how to communicate with your partner, but it's a pretty good process", we lock you in a room with a chair, a desk, some paper, pens (few distractions) and order you to derive some theory and idea about how to communicate with your partner.

Which one do you predict will yield the best result?


When I ask my system 2, it is fairly happy with the idea that using someone else's model is a shortcut to finding the answers.  After all they pre-derived the model.  No need to spend hours working on it myself when it's all in a book.

When I ask my system 1, it thinks that the self-derived system is about a billion times better than the one I found in a book.  It's going to be personally suited, it's going to be sharp and accurate, and bend to my needs.


Meta-strategy

Which is going to yield the best result for the problem? Self-derived solutions to all future problems? Book-derived solutions for all problems?

I propose that the specific strategy used to answer the problem, depending on the problem (obviously sometimes 1+1 will only be solved with addition, and solving it with subtraction is going to be difficult), is mostly irrelevant compared to having the meta-strategy.  

In the original example:

My relationship has bad communication, so we end the relationship.

The meta-strategy for this case:

My relationship has bad communication, how do we find more information about that and solve that problem.

In the general case:

I have a problem, I will fix the problem.

the meta strategy for the general case:

I have a problem, what is the best way to solve the problem? 

Or the meta-meta strategy:

I have a problem, how will I go about finding what is the best way to solve the problem? 


I propose that having the meta-strategy, and the meta-meta-strategy, is almost as powerful as the true strategy.  On the object level for the example problem, instead of searching for the specific book that is The Five Love Languages, you could search for any book about the problem area.  Any book is better than no book.  In fact I would make a hierarchy:

The best strategy > a good strategy > any strategy > no strategy
The best book > a good book > any book on the topic > no book on the topic

You encounter a problem in the wild - what should you do?

  1. Try to just solve the problem
  2. Try any strategy (with a small amount of thinking - a few seconds or minutes)
  3. Search for a better strategy

Depending on the problem, the time, the real factors - the best path forward may be to just "think of what to do then do that", or it may be to "stop and write out a 10 page plan before executing 10 pages worth of instructions".


Should you read the five love languages book?  That depends.  What is the problem?  and have you tried solving the problem on your own first?

Meta: this took an hour to write.

My table of contents: lesswrong.com/r/discussion/lw/mp2/my_future_posts_a_table_of_contents/ (which needs updating)

[Link] Peer-Reviewed Piece on Meaning and Purpose in a Non-Religious Setting

-2 Gleb_Tsipursky 31 March 2016 10:59PM

My peer-reviewed article in a psychology journal on the topic of meaning and purpose in a non-religious setting is now accessible without a paywall for a limited time, so get it while it's free if you're interested. I'd be interested in hearing your feedback on it. For those curious, the article is not directly related to my Intentional Insights project, but is a part of my aspiration to raise the sanity waterline regarding religion, the focus of Eliezer's original piece on the sanity waterline.

Cultivate the desire to X

3 Elo 07 March 2016 03:40AM

Recently I have found myself encouraging people to cultivate the desire to X.

Examples that you might want to cultivate interest in include:

  • Diet
  • Organise one's self
  • Plan for the future
  • be a goal-oriented thinker
  • build the tools
  • Anything else in the list of common human goals
  • Getting healthy sleep
  • Being less wrong
  • Trusting people more
  • Trusting people less
  • exercise
  • interest in a topic (cars, fashion, psychology etc.)

Why do we need to cultivate?

We don't.  But sometimes we can't just "do".  Lots of reasons are reasonable reasons to not be able to just "do" the thing:

  • Some things are scary
  • Some things need planning
  • Some things need research
  • Some things are hard
  • Some things are a leap of faith
  • Some things can be frustrating to accept
  • Some things seem stupid (well if exercising is so great why don't I automatically want to do it)
  • Other excuses exist.

On some level you have decided you want to do X; on some other level you have not yet committed to doing it.  Easy tasks can get done quickly.  More complicated tasks are not so easy to do right away.

Well, if it were easy enough to just successfully do the thing, you could go ahead and do the thing (TTYL, flying to the moon tomorrow - yea, nope).  Otherwise, either:

  1. your system 1 wants to do the thing and your system 2 is not sure how.
  2. your system 2 wants to do the thing and your system 1 is not sure it wants to do the thing.  
  • The healthy part of you wants to diet; the social part of you is worried about the impact on your social life.

(now borrowing from Common human goals)

  • Your desire to live forever wants you to take a medication every morning to increase your longevity; your desire for freedom does not want to be tied down to a bottle of pills every morning.
  • Your desire for a legacy wants you to stay late at work; your desire for quality family time wants you to leave the office early.

The solution:

The solution is to cultivate the interest, or the desire, to do the thing.  From the initial point of interest or desire you can move forward: do some research to either convince your system 2 of the benefits, or work out how to do the thing to convince your system 1 that it is possible/viable/easy enough.  Or maybe after some research the thing will seem impossible.  I offer cultivating the desire as a step along the way to working that out.

Short post for today; Cultivate the desire to do X.


Meta: time to write 1.5 hours.

My table of contents contains my other writing

feedback welcome

Outreach Thread

6 Gleb_Tsipursky 06 March 2016 10:18PM

Based on an earlier suggestion, here's an outreach thread where you can leave comments about any recent outreach that you have done to convey rationality-style ideas broadly. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.


 

[Link] Huffington Post article about dual process theory

9 Gleb_Tsipursky 06 January 2016 01:44AM

Published a piece in The Huffington Post popularizing dual-process theory in layman's language.

 

P.S. I know some don't like using terms like Autopilot and Intentional to describe System 1 and System 2, but I find from long experience that these terms resonate well with a broad audience. Also, I know dual process theory is criticized by some, but we have to start somewhere, and just explaining dual process theory is a way to start bridging the inference gap to higher meta-cognition.

The Winding Path

6 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition.  The "supernatural" component generally included in the definition is superfluous, because supernatural merely refers to that which isn't part of nature - which is to say, reality - an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you've developed a good AI.  It is engaging in superstition: it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.
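
A sketch of that green-room learner, to make "superstition as pattern recognition" concrete (the representation below is mine; the essay specifies none):

    from collections import defaultdict

    deaths = defaultdict(int)  # deaths observed per wall colour
    visits = defaultdict(int)  # visits per wall colour

    def choose_room(colours):
        """Prefer the colour with the best survival record; unvisited colours count as safe."""
        return min(colours, key=lambda c: deaths[c] / visits[c] if visits[c] else 0.0)

    # Five rounds of dying in green rooms builds the superstition...
    for _ in range(5):
        visits["green"] += 1
        deaths["green"] += 1

    print(choose_room(["green", "red"]))  # -> "red"
    # ...even though the true cause may be something that merely correlates with green walls.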

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping, in assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones, and more importantly, identifying which answers are indeed wrong, which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.

 


 

Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback into the aesthetic interpretation of Less Wrong, rather than the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

... And Everyone Loses Their Minds

10 Ritalin 16 January 2015 11:38PM

Chris Nolan's Joker is a very clever guy, almost Monroesque in his ability to identify hypocrisy and inconsistency. One of his most interesting scenes in the film has him point out how people estimate horrible things differently depending on whether they're part of what's "normal", what's "expected", rather than on how inherently horrifying they are, or how many people are involved.

Soon people extrapolated this observation to other such apparent inconsistencies in human judgment, where a behaviour that once was acceptable, with a simple tweak or change in context, becomes the subject of a much more serious reaction.

I think there's rationalist merit in giving these inconsistencies a serious look. I intuit that there's some sort of underlying pattern to them, something that makes psychological sense, in the roundabout way that most irrational things do. I think that much good could come out of figuring out what that root cause is, and how to predict this effect and manage it.

Phenomena that come to mind are, for instance, from an Effective Altruism point of view, the expenses incurred in counter-terrorism (including some wars that were very expensive in treasure and lives) and the number of lives those expenses save, compared with the number of lives that could be saved by spending the same amount on improving road safety, increasing public healthcare spending where it would do the most good, building better lightning rods (in the USA you're four times more likely to be struck by lightning than killed by terrorists), or legalizing drugs.

What do y'all think? Why do people have their priorities all jumbled-up? How can we predict these effects? How can we work around them?

Why "Changing the World" is a Horrible Phrase

26 ozziegooen 25 December 2014 06:04AM

Steve Jobs famously convinced John Sculley of Pepsi to join Apple Computer with the line, “Do you want to sell sugared water for the rest of your life? Or do you want to come with me and change the world?”.  This sounds convincing until one thinks closely about it.

Steve Jobs was a famous salesman.   He was known for his selling ability, not his honesty.  His terminology here was interesting.  ‘Change the world’ is a phrase that both sounds important and is difficult to argue with.  Arguing about whether Apple was really ‘changing the world’ would have been pointless, because the phrase is so ambiguous that there would be little to discuss.  On paper, of course Apple is changing the world, but then of course any organization or any individual is also ‘changing’ the world.  A real discussion of whether Apple ‘changes the world’ would lead to a discussion of what ‘changing the world’ actually means, which would lead to obscure philosophy, steering the conversation away from the actual point.

‘Changing the world’ is an effective marketing tool that’s useful for building the feeling of consensus. Steve Jobs used it heavily, as have endless numbers of businesses, conferences, nonprofits, and TV shows.  It’s used because it sounds good and is typically not questioned, so I’m here to question it.  I believe that the popularization of this phrase creates confused goals and perverse incentives in people who believe they are doing good things.

 

Problem 1: 'Changing the World' Leads to Television Value over Real Value

It leads nonprofit workers to passionately chase feeble things.  I’m amazed by the variety that I see in people who try to ‘change the world’. Some grow organic food, some research rocks, some play instruments. They do basically everything.  

Few people protest this variety.  There are millions of voices giving the appeal to ‘change the world’ in a way that validates many radically diverse pursuits.

TED, the modern symbol of the intellectual elite for many, is itself a grab bag of ways to ‘change the world’, without any sense of scale between pursuits.  People tell comedic stories, sing songs, discuss tales of personal adventures and so on.  In TED Talks, all presentations are shown side-by-side with the same lighting and display.  Yet in real life some projects produce orders of magnitude more output than others.

At 80,000 Hours, I read many applications for career consulting. I got the sense that there are many people out there trying to live their lives in order to eventually produce a TED talk.  To them, that is what ‘changing the world’ means.  These are often very smart and motivated people with very high opportunity costs.  

I would see an application that would express interest in either starting an orphanage in Uganda, creating a women's movement in Ohio, or making a conservatory in Costa Rica.  It was clear that they were trying to ‘change the world’ in a very vague and TED-oriented way.

I believe that ‘Changing the World’ is promoted by TED, but internally acts mostly as a Schelling point.  Agreeing on the importance of ‘changing the world’ is a good way of coming to a consensus without having to decide on moral philosophy. ‘Changing the world’ is simply the lowest common denominator for what that community can agree upon.  This is a useful social tool, but an unfortunate side effect is that it inspired many others to pursue this Schelling point itself.  Please don’t make the purpose of your life the lowest common denominator of a specific group of existing intellectuals.

It leads businesses to gain employees and media attention without having to commit to anything.  I’m living in Silicon Valley, and ‘Change the World’ is an incredibly common phrase for new and old startups. Silicon Valley (the TV show) made fun of it, as does much of the media.  They should, but I think much of the time they miss the point; the problem here is not one where the companies are dishonest, but one where their honesty itself just doesn’t mean much.  Declaring that a company is ‘changing the world’ isn’t really declaring anything.

Hiring conversations that begin and end with the motivation of ‘changing the world’ are like hiring conversations that begin and end with making ‘lots’ of money.  If one couldn’t compare salaries between different companies, they would likely select poorly for salary.  In terms of social benefit, most companies don’t attempt to quantify their costs and benefits on society except in very specific and positive ways for them.  “Google has enabled Haiti disaster recovery” for social proof sounds to me like saying “We paid this other person $12,000 in July 2010” for salary proof. It sounds nice, but facts selected by a salesperson are simply not complete.

 

Problem 2: ‘Changing the World’ Creates Black and White Thinking

The idea that one wants to ‘change the world’ implies that there is such a thing as ‘changing the world’ and such a thing as ‘not changing the world’.  It implies that there are ‘world changers’ and people who are not ‘world changers’. It implies that there is one group of ‘important people’ out there and then a lot of ‘useless’ others.

This directly supports the ‘Great Man’ theory, a 19th-century idea that history and future actions are led by a small number of ‘great men’.  There’s not a lot of academic research supporting this theory, but there’s a lot of attention on it, and it’s a lot of fun to pretend it is true.

But it’s not.  There is typically a lot of unglamorous work behind every successful project or organization. Behind every Steve Jobs are thousands of very intelligent and hard-working employees and millions of smart people who have created a larger ecosystem. If one only pays attention to Steve Jobs they will leave out most of the work. They will praise Steve Jobs far too highly and disregard the importance of unglamorous labor.

Typically much of the best work is also the most unglamorous: making WordPress websites, sorting facts into analysis, cold-calling donors.  Many of the best ideas for organizations may be very simple and may have been done before.  However, for someone looking to get to TED conferences or become a superstar, it is very easy to overlook comparatively menial labor.  This means not only that it will not get done, but that the people who do it feel worse about themselves.

So some people do important work and feel bad because it doesn’t meet the TED standard of ‘change the world’.  Others try ridiculously ambitious things outside their own capabilities, fail, and then give up.  Others don’t even try, because their perceived threshold is too high for them.  The very idea of a threshold and a ‘change or don’t change the world’ approach is simply false, and believing something that’s both false and fundamentally important is really bad.

In all likelihood, you will not make the next billion-dollar nonprofit. You will not make the next billion-dollar business. You will not become the next congressperson in your district. This does not mean that you have not done a good job, and it should not demoralize you when you fail to do these things.

Finally, I would like to ponder what happens once, or if, one decides they have changed the world. What now? Should one change it again?

It’s not obvious.  Many retire or settle down after feeling accomplished.  However, this is exactly when trying is the most important.  People with the best histories have the best potentials.  No matter how much a U.S. President may achieve, they still can achieve significantly more after the end of their terms.  There is no ‘enough’ line for human accomplishment.

Conclusion

In summary, the phrase ‘change the world’ provides a lack of clear direction and encourages black-and-white thinking that distorts behavior and motivation.  However, I do believe that the phrase can act as a stepping stone towards a more concrete goal.  ‘Change the World’ can act as an idea that requires a philosophical continuation.  It’s a start for a goal, but it should be recognized that it’s far from a good ending.

Next time someone tells you about ‘changing the world’, ask them to follow through with telling you the specifics of what they mean.  Make sure that they understand that they need to go further in order to mean anything.  

And more importantly, do this for yourself.  Choose a specific axiomatic philosophy or set of philosophies and aim towards those.  Your ultimate goal in life is too important to be based on an empty marketing term.

Controversy - Healthy or Harmful?

2 Gunnar_Zarncke 07 April 2014 10:03PM

Follow-up to: What have you recently tried, and failed at?

Related-to: Challenging the Difficult Sequence

ialdabaoth's post about blockdownvoting and its threads have prompted me to keep an eye on controversial topics and community norms on LessWrong. I noticed some things.

I was motivated: my own postings are also sometimes controversial.  I know beforehand which might be (this one, possibly).  Why do I post them nonetheless?  Do I want to wreak havoc?  Or do I want to foster productive discussion of unresolved but polarized questions?  Or do I want to call into question some point the community may have a blind spot on, or has possibly taken for granted too early?


Supposing you inherited an AI project...

-5 bokov 04 September 2013 08:07AM

Supposing you have been recruited to be the main developer on an AI project. The previous developer died in a car crash and left behind an unfinished AI. It consists of:

A. A thoroughly documented scripting language specification that appears to be capable of representing any real-life program as a network diagram so long as you can provide the following:

 A.1. A node within the network whose value you want to maximize or minimize.

 A.2. Conversion modules that transform data about the real-world phenomena your network represents into a form that the program can read.

B. Source code from which a program can be compiled that will read scripts in the above language. The program outputs a set of values for each node that will optimize the output (you can optionally specify which nodes can and cannot be directly altered, and the granularity with which they can be altered).

It gives remarkably accurate answers for well-formulated questions. Where there is a theoretical limit to the accuracy of an answer to a particular type of question, its answer usually comes close to that limit, plus or minus some tiny rounding error.
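
To make the thought experiment easier to hold in mind, here is one toy way the described pieces could look as code; every detail below (node names, formulas, granularities) is invented, since the post deliberately leaves the system abstract.

    import itertools

    # A toy network: two directly-alterable nodes feed two derived nodes.
    def sales(v):  return (1000 + v["ad_spend"]) / v["price"]
    def profit(v): return sales(v) * (v["price"] - 1) - v["ad_spend"]  # node to maximize

    def optimize():
        """Brute-force the alterable nodes at their stated granularities."""
        best = None
        for ad_spend, price in itertools.product(range(0, 1001, 100), range(1, 21)):
            v = {"ad_spend": ad_spend, "price": price}
            if best is None or profit(v) > profit(best):
                best = v
        return best, profit(best)

    print(optimize())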

 

Given that, what is the minimum set of additional features you believe would absolutely have to be implemented before this program can be enlisted to save the world and make everyone live happily forever? Try to be as specific as possible.

Research is polygamous! The importance of what you do needn't be proportional to your awesomeness

22 diegocaleiro 26 May 2013 10:29PM

In a recent discussion a friend was telling me how he felt he was not as smart as the people he thinks are doing the best research on the most important topics.  He named a few jaw-dropping people, who indeed are smarter than him, and mentioned their research agendas, say, A, B and C.

From that, a remarkable implication followed, in his cognitive algorithm: 

 

Therefore I should research thing D or thing E. 

 

Which made me pause for a moment. Here is a hypothetical schematic of this conception of the world. Arrows stand for "Ought to research"

Humans by Level of Awesome (HLA)             Research Agenda by Level of Importance (RALI)

Mrs 1 --------> X-risk #1
2 --------> X-risk #2
3 --------> Longevity
4 --------> Malaria Reduction
5 --------> Enhancement
1344 --------> Increasing Puppies Cuteness
Etc...

 

It made me think of the problem of creating match making algorithms for websites where people want to pair to do stuff, such as playing tennis, chess or having a romantic relationship.

This reasoning is profoundly mistaken, and I can look back into my mind, and remember dozens of times I have made the exact same mistake. So I thought it would be good to spell out 10 times in different ways for the unconscious bots in my mind that didn't get it yet: 

1) Research agenda topics are polygamous, they do not mind if there is someone else researching them, besides the very best people who could be doing such research. 

2) The function above should not be one-to-one (biunivocal), but many-to-one. 

3) There is no relation of overshadowing based on someone's awesomeness to everyone else who researches the same topic, unless they are researching the same narrow minimal sub-type of the same question coming from the same background. 

4) Overdetermination doesn't happen at the "general topic level". 

5) Awesome people do not obfuscate what less awesome people do in their area, they catapult it, by creating resources. 

6) Being in an area where the most awesome people are is not asking to "lose the game"; it is being in an environment that cultivates greatness.

7) The amount of awesomeness in a field does not supervene on the amount of awesomeness in its best explorer.

8) The Best person in each area would never be able to cause progress alone. 

9) To want to be the best in something has absolutely no precedence over doing something that matters. 

10) If you believe in monogamous research, you'd be in the awkward situation where finding out that no one gives a flying fuck about X-risk should make you ecstatic, and that can't be right.  That there are people doing something that matters so well that you currently estimate you can't beat them should be fantastic news!

Well, I hope every last cortical column I have got it now, and the overall surrounding being may be a little less wrong. 

Also, this text by Michael Vassar is magnificent, and makes a related set of points. 

 

 

 

A Rational Altruist Punch in The Stomach

8 Neotenic 01 April 2013 12:42AM

 

Robin Hanson wrote, five years ago:

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it. 

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.

So why do many people seem to care about policy that effects far future folk?   I suspect our paternalistic itch pushes us to control the future, rather than to enrich it.  We care that the future celebrates our foresight, not that they are happy. 
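
(A quick check of the arithmetic in that first claim, added here for convenience: the compounding really does clear a googol.)

    import math

    # 2% annual return compounded over 12,000 years, in log10 to avoid overflow:
    log10_growth = 12000 * math.log10(1.02)
    print(log10_growth)      # ~103.2, i.e. growth of about 10^103
    print(log10_growth - 3)  # ~100.2 even after the 1/1000 chance of delivery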

 

In the comments some people gave counterarguments.  For those in a rush, the best ones are Toby Ord's.  But I didn't buy any of the counterarguments to the extent that would be necessary to counter the 10^100.  I have some trouble conceiving of what would beat a consistent argument a googol fold.

Things that changed my behavior significantly over the last few years have not been many, but I think I'm facing one of them.  Understanding biological immortality was one: it meant 150,000 non-deaths per day.  Understanding the posthuman potential was another.  Then came the 10^52 potential lives lost in case of X-risk - or, if you are conservative and think only biological stuff can host moral lives, 10^31.  You can argue about which movie you'll watch, which teacher would be best to have, who you should marry.  But (if consequentialist) you can't argue your way out of 10^31 or 10^52.  You won't find a counteracting force that exactly matches, or that really reduces the value of future stuff by

3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999  fold 

Which is way less than 10^52 

You may find a fundamental and qualitative counterargument "actually I'd rather future people didn't exist", but you won't find a quantitative one. Thus I spend a lot of time on X-risk related things. 

Back to Robin's argument: so unless someone gives me a good argument against investing some money in the far future (and discovering some vague techniques of how to do it that will make it at least one in a millionth possibility) I'll set aside a block of money X, a block of time Y, and will invest in future people 12 thousand years from now. If you don't think you can beat 10^100, join me. 

And if you are not in a rush, read this also, for a bright reflection on similar issues. 

 

 

A place for casual, non-karmic discussion for lesswrongers?

19 [deleted] 04 November 2012 06:50PM

I have never been to a LessWrong meetup because they tend to take place too far away for my budget and travel range.  Because of that, I don't know if those perform this function to everyone's satisfaction in such a way that what I'm suggesting here doesn't seem worth the effort.  I hear that they're a lot of fun, and involve quite a bit of silliness, though; I find those cruelly lacking on LessWrong proper, whether it be in main posts or discussion posts and their relevant threads.

That's why I think it would be nice to have a forum, a place to have normal discussions, where you don't have to watch that you don't say anything stupid or out-of-line lest you unexpectedly lose karma. A place to exchange jokes, frivolities, and entertainment. A place to talk about stuff that isn't rationality or singularity-related. A place to relax and enjoy the company of like-minded folks. A place to take a more personal approach to communication, with sequential rather than branching conversations. A place to make and be friends.

Don't you think having that would be nice?

EDIT: Also, if this place does already exist and I'm not aware of it, I humbly request that you provide me a link, for which I would be most grateful.

Enjoy solving "impossible" problems? Group project!

-2 Epiphany 18 August 2012 12:20AM

In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states it will be "impossible to decelerate AI capabilities" but Luke counters with "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." and before I read that dialogue, I had come up with three additional ideas on Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety" but I doubt he's right that it's impossible.

How do you prove something is impossible?  You might prove that a specific METHOD of getting to the goal does not work, but that doesn't mean there isn't another method.  You might prove that all the methods you know about do not work; that doesn't prove there isn't some other option you don't see.  "I don't see an option, therefore it's impossible" is just an appeal to ignorance.  It's a common one, but it's incorrect reasoning regardless.  Think about it: can you think of a way to prove that a method that does work isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"?  We can say "I don't see it, I don't see it, I don't see it!" all day long.

I say: "Then Look!"

How often do we push past this feeling and keep thinking of ideas that might work?  For many, the answer is "never" or "only if it's needed".  The sense that something is impossible is subjective and fallible.  If we don't have a way of proving something is impossible, yet believe it to be impossible anyway, that is a belief.  What distinguishes it from bias?

I think it's a common fear that you may waste your entire life on doing something that is, in fact, impossible.  This is valid, but it completely misses the obvious: as soon as you think of a plan to do the impossible, you'll be able to guess whether it will work.  The hard part is THINKING of a plan to do the impossible.  I'm suggesting that if we put our heads together, we can think of a plan to turn an impossible thing into a possible one.  Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the amount of time, effort, and risk required to accomplish the task is worth it.

Here's how I am going to proceed: 

Step 1: Come up with a bunch of impossible project ideas. 

Step 2: Figure out which one appeals to the most people. 

Step 3: Invent the methodology by which we are going to accomplish said project. 

Step 4: Improve the method as needed until we're convinced it's likely to work.

Step 5: Get the project done.

 

Impossible Project Ideas

  • Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster.  Luke's Ideas (Starts with "Persuade key AGI researchers of the importance of safety").  My ideas.
  • Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles to treating it.
  • Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities. 
  • Understand the psychology of money

  • Rational Agreement Software: If rationalists should ideally always agree, why not make an organized information resource designed to get us all to agree?  It would track the arguments for and against ideas in such a way that each piece can be logically verified and challenged, present the entire collection of arguments in an organized manner where none are repeated and no useless information is included, and be editable by anybody, like a wiki, with the most rational outcome displayed prominently at the top (a minimal data-structure sketch follows this list).  This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts.  Also, Gwern mentions in a post about critical thinking that argument maps increase critical thinking skills.
  • Discover unrecognized bias:  This is especially hard since we'll be using our biased brains to try and detect it.  We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
  • Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
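
To make the argument-tracking idea above slightly more concrete, here is a minimal sketch of the kind of data structure such a system might start from. Everything here (the names, the naive scoring rule) is invented for illustration; it is nowhere near the "SO good" system the idea calls for:

    # Hypothetical sketch of an argument-map node for the "Rational Agreement
    # Software" idea above. Names and the scoring rule are invented for
    # illustration only.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Argument:
        claim: str
        supports: list[Argument] = field(default_factory=list)
        rebuttals: list[Argument] = field(default_factory=list)

        def score(self) -> int:
            # Naive scoring: supporting sub-arguments add, rebuttals subtract.
            # A real system would need something far more careful to separate
            # opinions from facts.
            return (sum(1 + a.score() for a in self.supports)
                    - sum(1 + a.score() for a in self.rebuttals))

    root = Argument("Key insights into AGI should be delayed")
    root.supports.append(Argument("Safety research needs time to catch up"))
    root.rebuttals.append(Argument("The effort is better spent on safety directly"))
    print(root.score())  # 0: one support and one rebuttal cancel out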

Add your own ideas below (one idea per comment, so we can vote them up and down), make sure to describe your vision, then I'll list them here.

 

Figure out which one appeals to the most people.

Assuming each idea is put into a separate comment, we can vote them up or down.  If they begin with the word "Idea", I'll be able to find them and put them on the list.  Obviously, if your idea gets enough attention, it will at some point make sense to create a new discussion for it.

 

Let's create a market for cryonics

43 michaelcurzi 10 April 2012 06:36AM

My uncle works in insurance. I recently mentioned that I'm planning to sign up for cryonics.

"That's amazing," he said. "Convincing a young person to buy life insurance? That has to be the greatest scam ever."

I took the comment lightly, not caring to argue about it. But it got me thinking - couldn't cryonics be a great opportunity for insurance companies to make a bunch of money?

Consider:

  1. Were there a much stronger demand for cryonics, cryonics organizations would flourish through competition, outside investment, and internal reinvestment. Costs would likely fall, and this would be good for cryonicists in general.
  2. If cryonics organizations flourish, this increases the probability of cryonics working. I can think of a bunch of ways in which this could happen; perhaps, for example, it would encourage the creation of safety nets whereby the failure of individual companies doesn't result in anyone getting thawed. It would increase R&D on both perfusion and revivification, encourage entrepreneurs to explore new related business models, etcetera.
  3. Increasing the demand for cryonics increases the demand for life insurance policies; thus insurance companies have a strong incentive to increase the demand for cryonics. Many large insurance companies would like nothing more than to usher in a generation of young people that want to buy life insurance.1
  4. The demand for cryonics could be increased by an insightful marketing campaign by an excellent marketing agency with an enormous budget... like those used by big insurance companies.2 A quick Googling says that ad spending by insurance companies exceeded $4.15 billion in 2009.

Almost a year ago, Strange7 suggested that cryonics organizations could run this kind of marketing campaign. I think he's wrong - there's no way CI or Alcor have the money. But the biggest insurance companies do have the money, and I'd be shocked if these companies or their agencies aren't already dumping all kinds of money into market research.

What would doing this require? 

  1. That there exists an open-minded person in the insurance industry who is in a position to direct this kind of funding. I don't have a sense of how likely this is.
  2. That we can locate/get an audience with the person from step 1. I think research and networking could get this done, especially if the higher-status among us are interested.
  3. That we can find someone who is capable and willing to explain this clearly and convincingly to the person from step 1. I'm not sure it would be that difficult. In the startup world, strangers convince strangers to speculatively spend millions of dollars every week. Hell, I'll do it.

I want to live in a world where cryonics ads air on TV just as often as ads for everything else people spend money on. I really can see an insurance company owning this project - if they can a) successfully revamp the image of cryonics and b) become known as the household name for it when the market gets big, they will make lots of money.

What do you think? Where has my reasoning failed? Does anyone here know anyone powerful in insurance? 

Lastly, taking a cue from ciphergoth: this is not the place to rehash all the old arguments about cryonics. I'm asking about a very specific idea about marketing and life insurance, not requesting commentary on cryonics itself. Thanks!


1. Perhaps modeling the potential size of the market would offer insight here. If it turns out that this idea is not insane, I'll find a way to make it happen. I could use your help.
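
As a first pass at that modeling, here is a Fermi-style sketch of the annual premium volume a campaign might unlock. Every input below is a round guess of mine, not data:

    # Hypothetical Fermi estimate of new life-insurance premiums from a
    # cryonics marketing push. All inputs are made-up round guesses.
    us_adults_25_to_44 = 80e6   # guessed size of the target cohort
    signup_rate = 0.001         # guess: 0.1% persuaded to sign up
    annual_premium = 600.0      # guess: ~$50/month term-life policy

    annual_premiums = us_adults_25_to_44 * signup_rate * annual_premium
    print(f"~${annual_premiums / 1e6:.0f}M in new annual premiums")  # ~$48M

The interesting exercise is sweeping the sign-up rate over a few orders of magnitude and seeing where the result starts to justify a slice of that $4.15 billion ad budget.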

2. Consider what happened with diamonds in the 1900s:

... N. W. Ayer suggested that through a well-orchestrated advertising and public-relations campaign it could have a significant impact on the "social attitudes of the public at large" and thereby channel American spending toward larger and more expensive diamonds instead of "competitive luxuries." Specifically, the Ayer study stressed the need to strengthen the association in the public's mind of diamonds with romance. Since "young men buy over 90% of all engagement rings" it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship.

Shortening the Unshortenable Way

-2 Duk3 26 July 2011 06:44AM

 

or

A Starting Point for Defense against Flexible Dark Artists and Circumstances

 

In On Seeking a Shortening of the Way, the assertion “Maybe we're not geniuses because we don't bother paying attention to ordinary things” caught my eye. Certainly! I said. Obviously, if we were able to pay the appropriate amount of attention to every occurrence, so as to gain enough data to update our models in an optimal way, we would rapidly increase our overall ability to model the world and increase our probability of insights at the level currently considered ‘genius.’

 

And then I remembered that I can’t really do that, on account of having crappy models of what is actually important, and thinking that I can’t improve those models quickly. Whoops! I, like so many others, fail to know how much attention to pay to ordinary things so as to become a genius. C’est la vie. Fortunately the lesson here was not the factuality of the statement (which is high), but the reminder that you could probably gain benefits from paying more attention and being more disciplined in your thought.

Which is even better, because it’s great advice and eminently doable. Thanks, Yvain! So I set about paying attention to how I currently pay attention and, as usual, paid attention to the cues I get about how other people pay attention, assuming that I make the mistakes they do at least some of the time.

And then I realized… wait a minute, whenever other people aren’t actually paying attention is precisely when I could most easily shanghai them into doing things they normally wouldn’t do (were I a dark artist; hypothetically). So learning how to pay more attention, and to pay attention in the correct way, is probably the best reflexive method of avoiding being Dutch booked by highly adaptable dark artists.

And here is my low-hanging fruit: techniques to build the foundational reflexes for shortening the way. The goal is to avoid being inattentive in the sorts of situations where I have noted personal susceptibility to being taken advantage of by changing circumstances or flexible con artists.

Summary: Act like Suspicious, Smart, Rich People Do. Assume everyone and everything is both an opportunity and an encounter with a parasite, and don’t act like it unless it’s socially convenient. How do you do this, you say. It sounds more difficult than that, you say. On the contrary, skeptical sir! I will now present an exercise which rapidly becomes reflexive, and which separates the exercise from the situation so that you can learn the requisite acting skills separately. Try this!

Ask yourself, for new people, situations, arguments, and facts: what is this worth to me? What risks do I run by paying attention to this? What opportunities lie in this, if my understanding of it is correct? What risks do I run if my understanding of it is incorrect? And you can go as deep as you think is valuable or are mentally capable of sustaining.

For the step-by-steppers out there (I salute you!), here’s explicitly How To start doing this in a low-cost way.

Step 1: In your journal for daily events (if you’re not keeping one of these, go buy a journal and start; without a daily log, how do you know you’re actually making progress?) use Pen and Paper (The Great Equalizer!) and write down your understanding of a couple of important topics and a few simple topics (the simple topics shouldn’t take as long… right?). This will be a lot of work! But it’s only for one day, and developing this mental habit in particular, and your ability to do rational yet seemingly onerous things for a brief period each day in general, will both be massively valuable.

Step 2: When That Gets Boring, elaborate with pros and cons, an analysis of arguments, or other techniques that professionals use when it’s important (imagine a lawyer not analyzing their opponent’s arguments, and then imagine yourself as their client). Do a Fermi calculation (here's some practice) if it involves a number of things you don’t understand well.
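
For instance, here is a classic Fermi calculation of the kind you might run in the journal, with every input a deliberately round guess:

    # Classic Fermi-style estimate: piano tuners in a large city. Every input
    # is a deliberately round guess; the discipline matters, not the answer.
    households = 1e6             # guess: households in the metro area
    pianos = households / 20     # guess: 1 in 20 households owns a piano
    tunings_per_year = pianos    # guess: each piano tuned about once a year
    tunings_per_tuner = 4 * 250  # guess: 4 tunings/day, 250 working days/year

    print(round(tunings_per_year / tunings_per_tuner), "tuners")  # ~50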

Step 3: Avoid abusing this method to convince yourself you don't need to run the numbers, by pretending someone else, someone biased, wrote the analysis. (Those darned Biased people, cropping up even in your own journal!) Think of how future versions of yourself will look at your thought processes (you'll be smarter then... wiser... with a knowledge of common logical fallacies and the heuristics and biases literature) (you might even read Godel, Escher, Bach or something and blow your mind. Anything is possible!). Look over your previous analyses before deciding (sleep on it and wait on it). Developing a decent set of evidence for Fermi calculations and calibration exercises will let you use the same thought processes to do this right when you don't have time to run the numbers.

Step 4: Profit.