
The barriers to the task

-7 Elo 18 August 2016 07:22AM

Original post: http://bearlamp.com.au/the-barriers-to-the-task/


For about two months now I have been putting in effort to run in the mornings.  To make this happen, I had to take away all the barriers to me wanting to do that.  There were plenty of them, and I failed to leave my house plenty of times.  Some examples are:

Making sure I don't need the right clothes - I leave my house shirtless and barefoot, and grab my key on the way out.

Pre-commitment to run - I take my shirt off when getting into bed the night before, so I don't even have to consider the action in the morning when I roll out of bed.

Being busy in the morning - I no longer plan any appointments before 11am.  Depending on the sunrise (I don't use alarms), I wake up, spend some time reading things, then roll out of bed, go to the toilet, and leave the house.  In Sydney we just passed the depths of winter and it's beginning to get light earlier and earlier in the morning.  That makes things easy now, but it was harder when getting up at 7 meant getting up in the dark.

There were days when I would wake up at 8am, stay in bed until 9am, and then realise that if I left for a run (which takes around an hour, so 10am), came back to shower (20 minutes, 10:20), then travelled to my first meeting (up to 30 minutes, 10:50), I would arrive with only ten minutes of slack.  If anything goes wrong, I'm late to an 11am appointment.  And if I have a 10am meeting, I have to skip my run entirely to get there on time.
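To make that timeline arithmetic concrete, here is a minimal sketch (the durations are the rough figures above; the exact date and times are just placeholders):

```python
from datetime import datetime, timedelta

# Rough durations from the schedule described above
run = timedelta(hours=1)
shower = timedelta(minutes=20)
travel = timedelta(minutes=30)

leave_bed = datetime(2016, 8, 18, 9, 0)  # finally rolling out of bed at 9am
arrival = leave_bed + run + shower + travel
print(arrival.time())  # 10:50:00 -- only ten minutes of slack before 11am
```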

Going to bed at a reasonable hour - I am still getting used to deciding not to work myself ragged.  I decided to accept that sleep is important, and to trust my body to sleep as long as it needs.  Keeping healthy sleep habits sometimes even earns me bonus time.  But if I go to sleep after midnight I might not get up until later, which squeezes my running time into the space reserved for other habits.

Deciding where to run - open Google Maps, look for local parks, and plan a route with the fewest roads and the least traffic.  I did this once and then it was done.  It was also exciting to measure the route and be able to run further and further each day, week, and month.


What's in your way?

If you are not doing something that you think is good and right (or healthy, or otherwise desirable), there are likely things in your way.  If you just found out about an action that is good, well and right, and there is nothing stopping you from doing it; great.  You are lucky this time - Just.Do.It.

If you are one of the rest of us, who know that:

  • daily exercise is good for you
  • the right amount of sleep is good for you
  • eating certain foods is better than eating others
  • certain social habits are better than others
  • certain hobbies are more fulfilling (to our needs or goals) than others

and you have known this for a while but still find yourself not taking the actions you want, then it's time to start asking what is in your way.  You might find your barrier on someone else's list, but that is looking for a needle in a haystack.

You are much better off doing this (System 2 exercise):

  1. Take 15 minutes with pencil and paper.
  2. At the top write, "I want to ______________".
  3. If you know that's true, you might not need this step; if you are not sure, write out why it might be true or not true.
  4. Write down the barriers that are in the way of you doing the thing.  Think:
    • "Can I do this right now?" (it might not always be an action you can take while sitting around thinking about it, e.g. eating different foods)
    • "Why can't I just do this at every opportunity that arises?"
    • "How do I increase the frequency of opportunities?"
  5. Write out the things you are doing instead of that thing.
    These things are barriers in your way as well.
  6. For each point, consider what you are going to do about it.

Questions:

  • What actions have you tried to take on?
  • What barriers have you encountered in doing so?
  • How did you solve that barrier?
  • What are you struggling with taking on in the future?

Meta: this borrows from the Immunity to Change process, which is best read about in the book Right Weight, Right Mind.  It also borrows from CFAR-style techniques like resolve cycles (also known as focused grit), Hamming questions, and Murphy-jitsu.

Meta: this took one hour to write.

Cross posted to lesswrong: http://lesswrong.com/lw/nuq

Addendum to applicable advice

-8 Elo 16 August 2016 12:59AM

Original post: http://bearlamp.com.au/addendum-to-applicable-advice/
(part 1: http://bearlamp.com.au/applicable-advice/)


If you see advice in the wild and think something along the lines of "that can't work for me", that's a cached thought.  It could be a true cached thought or it could be a false one.  These thoughts should be examined thoroughly, and the false ones defeated.

If you can be any kind of person, being the kind of person that advice works for is an amazing skill to have.  This is hard.  You need to examine the advice and work out how it manages to work, and then you need to modify yourself to make that advice applicable to you.

All too often in this life we think of ourselves as immutable, and our problems as fixed, with our only hope being to find a solution that happens to fit the problem.  I propose it's the other way around.  All too often the solutions are immutable and we are malleable, and the problems can be solved by applying known advice and known knowledge in ways that we work out and decide on ourselves.


Is it really the same problem any more, if the problem is no longer the original problem, but rather the problem of finding a new way to apply a known solution to a known problem?

What does this mean?  Dieting is an easy example.

This week we have been talking about Calories In/Calories Out (CICO).  It's pretty obvious that CICO is true on a black-box system level.  If food goes in (calories in) and work goes out (calories out: BMR, incidental exercise, purposeful exercise), that is what determines your weight.  (Ignoring the fact that drinking a litre of water is a faster way to gain weight than any other way I know of.)  And we know that weight is not literally health, but a stand-in for what we consider healthy, because it's the easiest way to track how much fat we store on our bodies (for a normal human who doesn't carry massive bulk muscle mass).

"CICO makes for terrible advice."  On one level, yes.  But to modify the weight of our black box, we need to change what goes in and what goes out, so that the system is no longer in the feedback loop it was in (the one that caused the box to get fat).  On that level, CICO is exactly all the advice you need to change the weight of a black box (or a spherical cow in a vacuum).
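As a toy illustration of the black-box view (a sketch under my own assumptions, not the post's: the ~7,700 kcal per kilogram of body fat figure is a common rule of thumb, and real physiology is far messier):

```python
# Toy black-box energy-balance model; an assumption-laden sketch, not medical advice.
KCAL_PER_KG_FAT = 7700  # rule-of-thumb conversion, an assumption

def weight_change_kg(calories_in: float, calories_out: float, days: int) -> float:
    """Estimated weight change from a sustained daily energy surplus or deficit."""
    daily_balance = calories_in - calories_out
    return daily_balance * days / KCAL_PER_KG_FAT

# A sustained 500 kcal/day deficit for 30 days: roughly -1.9 kg
print(weight_change_kg(2000, 2500, 30))
```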

On the level of human systems: people are not spherical cows in a vacuum.  Where did spherical cows in a vacuum come from?  It's a parody of what we do in physics.  We simplify a system down to its most basic parts and derive rules that make sense, then build back up to a complicated model and work out how to apply those rules.  It's why we can predict where projectiles will land: we have projectile-motion physics.  Even though air resistance and wind direction often shift where the projectile actually lands, we still have a good guess, and we can later build estimation systems that use those details for prediction too.
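In the same spirit, here is the drag-free textbook projectile model as a sketch (ignoring air resistance and wind, which is exactly the simplification being described):

```python
import math

def ideal_range(speed: float, angle_deg: float, g: float = 9.81) -> float:
    """Drag-free projectile range on flat ground: v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# A good first guess, even though drag and wind will shift the real landing point
print(ideal_range(30, 45))  # about 91.7 metres
```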

So CICO is a black-box system, a spherical cow system.  It's wrong.  It's so wrong when you try to apply it to the real world.  But that doesn't matter!  It's significantly better than nothing.  Or the blueberry diet.


The applicable advice of CICO

The point of applicable advice is to look at spherical cows and not say, "I'm no spherical cow!".  Instead, think of the ways in which you are a spherical cow; the ways in which the advice is applicable.  Places where, actually, if I do eat less, that will improve my weight loss in the cases where my problem is that I eat too much (which I guarantee is relevant to lots of people).  CICO might not be your silver bullet, for whatever reason.  It might be grandma, it might be chocolate bars, it might be really, really delicious steak.  Or dinner with friends.  Or "looking like you are able to eat forever in front of other people".  But if you take your problem, add in a bit of CICO, and ask, "how can I make this advice applicable to me?", today you might make progress on your problem.


And now for some fun from Grognor:  Have you tried solving the problem?


Meta: this took 30mins to write.  All my thoughts were still clear after recently writing part 1, and didn't need any longer to process.

Part 1: http://bearlamp.com.au/applicable-advice/
(part 1 on lesswrong: http://lesswrong.com/r/discussion/lw/nu3/applicable_advice/)

A Rational Altruist Punch in The Stomach

8 Neotenic 01 April 2013 12:42AM

 

Robin Hanson wrote, five years ago:

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it. 

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.

So why do many people seem to care about policy that effects far future folk?   I suspect our paternalistic itch pushes us to control the future, rather than to enrich it.  We care that the future celebrates our foresight, not that they are happy. 

 

In the comments some people gave counterarguments; for those in a rush, the best ones are Toby Ord's.  But I didn't buy any of the counterarguments to the extent that would be necessary to counter the 10^100.  I have some trouble conceiving of what could beat a consistent argument a googol-fold.
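As a quick check of the compounding arithmetic behind that 10^100 (a straightforward calculation, not a claim that any investment vehicle could actually survive 12,000 years):

```python
import math

# 2% annual return compounded for 12,000 years, in orders of magnitude
orders = 12000 * math.log10(1.02)
print(orders)  # about 103, i.e. roughly a googol (10^100) fold

# Hanson's 1/1000 chance of delivery only subtracts three orders of magnitude
print(orders - 3)  # about 100
```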

Things that have changed my behavior significantly over the last few years have not been many, but I think I'm facing one of them.  Understanding biological immortality was one; it meant 150,000 non-deaths per day.  Understanding the posthuman potential was another.  Then came the 10^52 potential lives lost in case of X-risk (or, if you are conservative and think only biological substrates can carry morally relevant lives, 10^31).  You can argue about which movie you'll watch, which teacher would be best to have, whom you should marry.  But (if consequentialist) you can't argue your way out of 10^31 or 10^52.  You won't find a counteracting force that exactly matches, or that really reduces the value of future stuff by, say,

3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999 fold,

which is still way less than 10^52.

You may find a fundamental and qualitative counterargument "actually I'd rather future people didn't exist", but you won't find a quantitative one. Thus I spend a lot of time on X-risk related things. 

Back to Robin's argument: unless someone gives me a good argument against investing some money in the far future (and I can discover some technique of doing it that gives at least a one-in-a-million chance of it arriving), I'll set aside a block of money X and a block of time Y, and will invest in the people of 12,000 years from now.  If you don't think you can beat 10^100, join me.

And if you are not in a rush, read this also, for a bright reflection on similar issues. 

 

 

Falsifiable and non-Falsifiable Ideas

-1 shaih 19 February 2013 02:24AM


I have been talking to some people in my dorm (a few specific people I thought would benefit from it and appreciate it) and teaching them rationality.  Thinking about which skills should be taught first led me to ask which skill is most important to me as a rationalist.

I decided to start with the question "What does it mean to be able to test something with an experiment?", which could also be phrased "What does it mean to be falsifiable?"

To help make my point I brought up Carl Sagan's thought experiment of the dragon in the garage, which goes as follows:

Carl: There is a dragon in my garage
Me: I thought dragons only existed in legends and I want to see for myself
Carl: Sure follow me and have a look
Me: I don’t see a dragon in there
Carl: My dragon is invisible
Me: Let me throw some flour in so I can see where the dragon is by the disruption of the flour 
Carl: My dragon is incorporeal

And so on

The answer I was trying to bring about was along these lines: if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false.  Conversely, if something has at least one effect that differs depending on whether it is true or false, then I can, at least in theory, test it with an experiment.
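In probabilistic terms (my formalisation, not the original wording), an observation E can test a hypothesis H exactly when H and its negation assign E different likelihoods; otherwise the odds form of Bayes' theorem leaves the posterior equal to the prior:

```latex
% E bears on H only if the likelihoods differ: P(E | H) != P(E | ~H).
% If they are equal, updating changes nothing:
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
  = \frac{P(H)}{P(\neg H)}
  \quad \text{when } P(E \mid H) = P(E \mid \neg H)
```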

This led me to the statement:
If something cannot, even in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.

Anthony (the person I was talking to at the time) began his counterargument by observing that an object in a thought experiment cannot be tested for, but still has meaning.

So I revised my statement: any object that, if brought into the real world, could not be tested for has no meaning.  This rests on the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e. the story with the dragon would have gone the same way regardless of the dragon's truth value, had it taken place in the real world.

Then the discussion moved on to whether it could be rational to hold a belief that could not, even in theory, be tested.  It became interesting when Anthony argued that if believing in a dragon in your garage gives you happiness, and the world would be the same either way apart from that happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.

I responded that truth trumps happiness: believing in the dragon would force you to hold a false belief, which is not worth the happiness gained by believing it.  Further, I argued that it would in fact be a false belief, because p(world) > p(world)p(impermeable invisible dragon), which is a simple Occam's razor argument.
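Spelling that inequality out (my notation): the world plus an extra entity can never be more probable than the world alone, and is strictly less probable whenever the entity is not certain:

```latex
P(\text{world} \wedge \text{dragon})
  = P(\text{world}) \cdot P(\text{dragon} \mid \text{world})
  \le P(\text{world})
% with strict inequality whenever P(dragon | world) < 1
```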

My intended direction from this point was to apply these points to theology, but we ran out of time and have not had a chance to talk again, so that may become a future post.

 

Today, however, Shminux pointed out to me that I hold beliefs that are themselves non-falsifiable.  I realized then that it might be rational to believe non-falsifiable things for two reasons (I'm sure there are more, but these are the main ones I can think of; please comment with your own):

1)   The belief has a beauty to it that flows with falsifiable beliefs and makes known facts fit together more perfectly.  (This is very dangerous and should not be used lightly, because it leans too closely on opinion.)

2)   You believe that the belief will someday allow you to make an original theory which will be falsifiable.

Both of these reasons, if not used very carefully, will let false beliefs through.  As such, I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (perhaps "conjectures").  This category should rank below beliefs in power, while still informing how I think the world works, and anything in this category should always be striving to leave it: I should always be working to make any non-falsifiable conjecture stop being a conjecture, either by turning it into a belief or by disproving it.

 

Note: this is my first post, so as well as discussing the post itself, critiques of the writing are deeply welcome by PM.

 
