[Link] On the statistical properties and tail risk of violent conflicts

1 morganism 23 February 2017 03:46AM

[Link] Marginal Revolution Thoughts on Black Lives Matter Movement

1 scarcegreengrass 18 January 2017 06:12PM

Addendum to applicable advice

-8 Elo 16 August 2016 12:59AM

Original post: http://bearlamp.com.au/addendum-to-applicable-advice/
(part 1: http://bearlamp.com.au/applicable-advice/)


If you see advice in the wild and think something along the lines of "that can't work for me", that's a cached thought.  It could be a true cached thought or it could be a false one.  Some of these thoughts should be examined thoroughly and defeated.

Being able to be any kind of person - specifically, the kind of person that advice works for - is an amazing skill to have.  This is hard.  You need to examine the advice and work out how it happened to work, and then you need to modify yourself to make that advice applicable to you.

All too often in this life we think of ourselves as immutable, and our problems as fixed, with our only hope being to find a solution that fits the problem.  I propose it's the other way around.  All too often the solutions are immutable, we are the malleable part, and the problems can be solved by applying known advice and known knowledge in ways that we work out and decide on for ourselves.


Is it really the same problem if the problem isn't actually the original problem any more, but rather finding a new way of applying a known solution to a known problem?

Example: dieting is an easy one.

This week we have been talking about Calories In/Calories Out.  It's pretty obvious that CI/CO is true on a black-box system level.  Food goes in (calories in) and work goes out (calories out - BMR, incidental exercise, purposeful exercise), and that balance is what determines your weight.  (Ignoring the fact that drinking a litre of water is a faster way to gain weight than any other way I know of.)  And we know that weight is not literally health but a proxy for what we consider healthy, because it's the easiest way to track how much fat we store on our bodies (for a normal human who doesn't have massive bulk muscle mass).

"CICO makes for terrible advice."  On one level, yes.  To modify the weight of our black box, we need to modify what goes in and what comes out so that the box is no longer in the same feedback loop it was in (the one that caused the box to be fat).  On that level, CICO is exactly all the advice you need to change the weight of a black box (or a spherical cow in a vacuum).
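To see the spherical-cow version in code, here's a minimal sketch of the energy-balance arithmetic.  The ~7700 kcal per kilogram of body fat figure is a commonly cited rough approximation (it's not from this post), and real bodies adapt their energy use, so treat this as exactly the black-box toy it is:

```python
# Spherical-cow model of CICO: treat the body as a black box whose stored
# energy changes by (calories in - calories out).
KCAL_PER_KG_FAT = 7700  # rough, commonly cited approximation

def weight_change_kg(calories_in: float, calories_out: float, days: int) -> float:
    """Predicted change in body mass over `days`, under the naive model."""
    daily_surplus = calories_in - calories_out
    return (daily_surplus * days) / KCAL_PER_KG_FAT

# A 500 kcal/day deficit sustained for 30 days:
print(weight_change_kg(2000, 2500, 30))  # ~ -1.95 kg
```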

On the level of human systems: people are not spherical cows in a vacuum.  Where did spherical cows in a vacuum come from?  It's a parody of what we do in physics.  We simplify a system down to its most basic parts and generate rules that make sense, then we build back up to a complicated model and work out how to apply those rules.  It's why we can work out where projectiles are going to land: we have projectile-motion physics.  (Even though air resistance and wind direction often end up changing where our projectile lands, we still have a good guess, and we can later build estimation systems that use those details for prediction too.)
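For the curious, here's the spherical-cow version of ballistics as a quick sketch - the standard drag-free range formula, with illustrative numbers:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed: float, angle_deg: float) -> float:
    """Range of a projectile launched from flat ground, ignoring air
    resistance and wind -- the spherical-cow version of ballistics."""
    angle = math.radians(angle_deg)
    return (speed ** 2) * math.sin(2 * angle) / G

print(landing_distance(30.0, 45.0))  # ~91.7 m; the drag-free best case
```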

So CICO is a black-box system, a spherical cow system.  It's wrong.  It's so wrong when you try to apply it to the real world.  But that doesn't matter!  It's significantly better than nothing.  Or the blueberry diet.


The applicable advice of CICO

The point of applicable advice is to look at spherical cows and not say, "I'm no spherical cow!"  Instead, think of the ways in which you are a spherical cow.  Ways in which the advice is applicable.  Places where, actually, if I do eat less, that will improve my weight-loss progress in the cases where my problem is that I eat too much (which I guarantee is relevant for lots of people).  CICO might not be your silver bullet, for whatever reason.  The reason might be grandma, it might be chocolate bars, it might be really, really, really delicious steak.  Or dinner with friends.  Or "looking like you are able to eat forever in front of other people".  But if you take your problem, add in a bit of CICO, and ask, "how can I make this advice applicable to me?", today you might make progress on your problem.


And now for some fun from Grognor:  Have you tried solving the problem?


Meta: this took 30 minutes to write.  All my thoughts were still clear after recently writing part 1, so it didn't need any longer to process.

Part 1: http://bearlamp.com.au/applicable-advice/
(part 1 on lesswrong: http://lesswrong.com/r/discussion/lw/nu3/applicable_advice/)

Black box knowledge

2 Elo 03 March 2016 10:40PM

When we want to censor an image, we put a black box over the area we want to censor.  In a similar sense, we can purposely censor our own knowledge.  This comes in particularly handy when thinking about things that might be complicated but that we don't need to know.


A deliberate black box around how toasters work would look like this:  

bread -> black box -> toast

Not every process needs to be known; for now, a black box can stand as a placeholder for the future.
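If it helps, here's what a deliberate black box looks like in code - a minimal Python sketch (the names are illustrative, not from the post) where we commit to the interface and leave the mechanism as a placeholder:

```python
def toaster(bread: str) -> str:
    """Black box: bread -> toast.  The interface is fixed; the mechanism
    is a placeholder we may never need to fill in."""
    raise NotImplementedError("open this box only if the ~2 hours is worth it")

# Callers can already be written against the interface today:
def breakfast(bread: str) -> str:
    return toaster(bread)
```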


With the power provided to us by a black box, we can identify what we don't know.  We can say: Hey!  I don't know how a toaster works, but it would take about two hours to work it out.  If I ever did want to work it out, I could just spend those two hours.  Until then, I've saved myself two hours.  With other, more time-burdensome fields it works even better.  Say, tax.

Need to file tax -> black box accountant -> don't need to file my tax because I got the accountant to do it for me.

I know I could file my own tax, but that might mean 100-200 hours of learning everything an accountant knows about tax.  (It also might be 10 hours, depending on your country and its tax system.)  For now I can assume that hiring an accountant saved me a number of hours of doing it myself.  So - Winning!
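The trade underneath is a simple break-even sum.  A quick sketch with made-up numbers (the fee and wage here are illustrative, not from the post):

```python
def hours_saved(diy_learning_hours: float, fee: float, hourly_wage: float) -> float:
    """Net hours won by outsourcing: the hours you'd spend learning to do
    it yourself, minus the hours of work it takes to earn the fee."""
    return diy_learning_hours - fee / hourly_wage

# 150 hours to learn tax law vs. a $400 accountant fee at a $40/hour wage:
print(hours_saved(150, 400, 40))  # 140 hours in your favour -- winning!
```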


Take car repairs.  On the one hand, you could do it yourself and unpack the black box; on the other, you could trade your existing currency $$ (which you already traded your time to earn) for someone else's skills and time to repair the car.  The system looks like this:

Broken car -> black box mechanic -> working car

By deliberately not knowing how it works, we can tap out of even trying to figure it out for now.  The other advantage is that we can look at not just what we know in terms of black boxes but, more importantly, what we don't know.  We can build better maps by knowing what we don't know.


Computers:

Logic gates -> Black box computeryness -> www.lesswrong.com

Or maybe it's like this: (for more advanced users)

Computers: 

Logic gates -> flip flops -> Black box CPU -> black box GPU -> www.lesswrong.com
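Chaining black boxes like this is just function composition: each stage is opaque, and only the interfaces between stages matter.  A minimal Python sketch with illustrative stages:

```python
from functools import reduce
from typing import Callable

def pipeline(*stages: Callable) -> Callable:
    """Compose black boxes left to right; each stage's output feeds the next."""
    return lambda x: reduce(lambda value, stage: stage(value), stages, x)

# Each stage is a black box: we trust the interface, not the internals.
browse = pipeline(
    lambda gates: "flip flops",      # logic gates -> flip flops
    lambda ff: "CPU output",         # black box CPU
    lambda cpu: "rendered pixels",   # black box GPU
    lambda px: "www.lesswrong.com",  # the page you actually see
)
print(browse("logic gates"))  # www.lesswrong.com
```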


The black-box system happens to also have a meme about it:

Step 1. Get out of bed

Step 2. Build AGI

Step 3. ?????

Step 4. Profit

Only now we have a name for deliberately skipping finding out how step 3 works.


Another useful system:

Dieting

Food in (weight goes up) -> black box human body -> energy out (weight goes down)


Make your own black box systems in the comments.


Meta: short post, 1.5 hours to write, edit and publish.  Felt it was an idea that provides useful ways to talk about things.  I needed it to explain something to someone; now all can enjoy!

My Table of contents has my other writings in it.

All suggestions and improvements welcome!