How to Measure Anything

50 lukeprog 07 August 2013 04:05AM

Douglas Hubbard’s How to Measure Anything is one of my favorite how-to books. I hope this summary inspires you to buy the book; it’s worth it.

The book opens:

Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

The sciences have many established measurement methods, so Hubbard’s book focuses on the measurement of “business intangibles” that are important for decision-making but tricky to measure: things like management effectiveness, the “flexibility” to create new products, the risk of bankruptcy, and public image.


Basic Ideas

A measurement is an observation that quantitatively reduces uncertainty. Measurements might not yield precise, certain judgments, but they do reduce your uncertainty.

To be measured, the object of measurement must be described clearly, in terms of observables. A good way to clarify a vague object of measurement like “IT security” is to ask “What is IT security, and why do you care?” Such probing can reveal that “IT security” means things like a reduction in unauthorized intrusions and malware attacks, which the IT department cares about because these things result in lost productivity, fraud losses, and legal liabilities.

Uncertainty is the lack of certainty: the true outcome/state/value is not known.

Risk is a state of uncertainty in which some of the possibilities involve a loss.

Much pessimism about measurement comes from a lack of experience making measurements. Hubbard, who is far more experienced with measurement than his readers, says:

  1. Your problem is not as unique as you think.
  2. You have more data than you think.
  3. You need less data than you think.
  4. An adequate amount of new data is more accessible than you think.


Applied Information Economics

Hubbard calls his method “Applied Information Economics” (AIE). It consists of 5 steps:

  1. Define a decision problem and the relevant variables. (Start with the decision you need to make, then figure out which variables would make your decision easier if you had better estimates of their values.)
  2. Determine what you know. (Quantify your uncertainty about those variables in terms of ranges and probabilities.)
  3. Pick a variable, and compute the value of additional information for that variable. (Repeat until you find a variable with reasonably high information value. If no remaining variables have enough information value to justify the cost of measuring them, skip to step 5.)
  4. Apply the relevant measurement instrument(s) to the high-information-value variable. (Then go back to step 3.)
  5. Make a decision and act on it. (When you’ve done as much uncertainty reduction as is economically justified, it’s time to act!)

These steps are elaborated below.
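Steps 2 and 3 above can be sketched numerically. The following is a minimal Monte Carlo illustration of computing the value of additional information for one variable, using a hypothetical decision and made-up numbers (the project cost, the 90% confidence interval, and the normal model are all assumptions for illustration, not from the book):

```python
import random

random.seed(0)

# Hypothetical decision: invest $100k in a project whose annual savings
# are uncertain. Step 2: quantify uncertainty as a 90% CI of [50k, 300k],
# modeled here as a normal distribution (a 90% CI spans ~3.29 std devs).
COST = 100_000
lo, hi = 50_000, 300_000
mean = (lo + hi) / 2
sd = (hi - lo) / 3.29

samples = [random.gauss(mean, sd) for _ in range(100_000)]

# Expected payoff of each action under current uncertainty.
ev_invest = sum(s - COST for s in samples) / len(samples)
ev_pass = 0.0
best_ev_now = max(ev_invest, ev_pass)

# With perfect information we would pick the better action for each
# possible outcome, avoiding the loss whenever savings fall short.
ev_with_info = sum(max(s - COST, 0.0) for s in samples) / len(samples)

# Step 3: expected value of perfect information for this variable.
evpi = ev_with_info - best_ev_now
print(f"EV(invest) = {ev_invest:,.0f}")
print(f"EVPI       = {evpi:,.0f}")
```

If the EVPI exceeds the cost of a measurement that would substantially reduce the uncertainty, step 4 says to measure; otherwise, step 5 says to act on current information.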


The Robots, AI, and Unemployment Anti-FAQ

47 Eliezer_Yudkowsky 25 July 2013 06:46PM

Q.  Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A.  Conventional economic theory says this shouldn't happen.  Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns.  If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.  On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
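The arithmetic of the parable can be checked directly. This is just a sketch of the equilibrium calculation above, not anything from the original post:

```python
def equilibrium(total_labor, dog_cost, bun_cost):
    """Units of matched hot-dogs-in-buns producible from a labor budget."""
    # n hot dogs and n buns require n * (dog_cost + bun_cost) units of labor.
    return total_labor // (dog_cost + bun_cost)

# Before automation: 2 units of labor per hot dog, 1 per bun, 30 units total.
print(equilibrium(30, 2, 1))  # 10 hot dogs in 10 buns
# After automation halves the labor cost of a hot dog:
print(equilibrium(30, 1, 1))  # 15 hot dogs in 15 buns
```

The same 30 units of labor now buy more consumption, which is the standard-theory prediction: higher productivity, not fewer jobs.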

Q.  Sounds like a lovely theory.  As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact.  Experiment trumps theory, and in reality, unemployment is rising.

A.  Sure.  Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, as we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries).  We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away.  The naive first thought, that automation removes a job and so the economy has one fewer job, has not been the way the world has worked since the Industrial Revolution.  The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries.  Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should.  The idea that there's a limited amount of work which is destroyed by automation is known in economics as the "lump of labour fallacy".

Q.  But now people aren't being reemployed.  The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A.  Yes.  And that's a new problem.  We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence.  The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

[Image: Baxter robot]


How To Have Things Correctly

57 Alicorn 17 October 2012 06:10AM

I think people who are not made happier by having things either have the wrong things, or have them incorrectly.  Here is how I get the most out of my stuff.

Money doesn't buy happiness.  If you want to try throwing money at the problem anyway, you should buy experiences like vacations or services, rather than purchasing objects.  If you have to buy objects, they should be absolute and not positional goods; positional goods just put you on a treadmill and you're never going to catch up.

Supposedly.

I think getting value out of spending money, owning objects, and having positional goods are all three of them skills that people often don't have naturally but can develop.  I'm going to focus mostly on the middle skill: how to have things correctly.[1]


Causal Diagrams and Causal Models

61 Eliezer_Yudkowsky 12 October 2012 09:49PM

Suppose a general-population survey shows that people who exercise less, weigh more. You don't have any known direction of time in the data - you don't know which came first, the increased weight or the diminished exercise. And you didn't randomly assign half the population to exercise less; you just surveyed an existing population.

The statisticians who discovered causality were trying to find a way to distinguish, within survey data, the direction of cause and effect - whether, as common sense would have it, more obese people exercise less because they find physical activity less rewarding; or whether, as in the virtue theory of metabolism, lack of exercise actually causes weight gain due to divine punishment for the sin of sloth.

[Diagram: Obesity → Less exercise, vs. Less exercise → Obesity]

The usual way to resolve this sort of question is by randomized intervention. If you randomly assign half your experimental subjects to exercise more, and afterward the increased-exercise group doesn't lose any weight compared to the control group [1], you could rule out causality from exercise to weight, and conclude that the correlation between weight and exercise is probably due to physical activity being less fun when you're overweight [3]. The question is whether you can get causal data without interventions.
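The intervention logic above can be made concrete with a toy simulation (hypothetical numbers and a deliberately one-directional causal world, not from the post): in a world where weight reduces exercise but exercise does not affect weight, the observational correlation shows up clearly, while randomly assigning exercise produces groups with essentially identical weights.

```python
import random

random.seed(1)

def person():
    # Assumed toy world: weight is exogenous; heavier people exercise less.
    weight = random.gauss(150, 30)
    exercise = max(0.0, 20 - 0.1 * weight + random.gauss(0, 2))
    return weight, exercise

population = [person() for _ in range(10_000)]

# Observational survey: split on exercise level and compare mean weights.
low_ex = [w for w, e in population if e < 5]
high_ex = [w for w, e in population if e >= 5]
obs_gap = sum(low_ex) / len(low_ex) - sum(high_ex) / len(high_ex)

# Randomized intervention: assign exercise by coin flip.  Since exercise
# doesn't cause weight in this toy world, the groups' mean weights match.
assigned = [(w, random.random() < 0.5) for w, _ in population]
treated = [w for w, t in assigned if t]
control = [w for w, t in assigned if not t]
int_gap = sum(treated) / len(treated) - sum(control) / len(control)

print(f"observational weight gap: {obs_gap:.1f} lb")
print(f"intervention weight gap:  {int_gap:.1f} lb")
```

The question the post goes on to address is whether the observational data alone could have told us which causal structure we were in.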

For a long time, the conventional wisdom in philosophy was that this was impossible unless you knew the direction of time and knew which event had happened first. Among some philosophers of science, there was a belief that the "direction of causality" was a meaningless question, and that in the universe itself there were only correlations - that "cause and effect" was something unobservable and undefinable, that only unsophisticated non-statisticians believed in due to their lack of formal training:

"The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm." -- Bertrand Russell (he later changed his mind)

"Beyond such discarded fundamentals as 'matter' and 'force' lies still another fetish among the inscrutable arcana of modern science, namely, the category of cause and effect." -- Karl Pearson

The famous statistician Fisher, who was also a smoker, testified before Congress that the correlation between smoking and lung cancer couldn't prove that the former caused the latter.  We have remnants of this type of reasoning in old-school "Correlation does not imply causation", without the now-standard appendix, "But it sure is a hint".

This skepticism was overturned by a surprisingly simple mathematical observation.


Ritual Report: NYC Less Wrong Solstice Celebration

83 Raemon 20 December 2011 08:37PM

Last Friday, the NYC Less Wrong community held their first Winter Solstice Celebration. Approximately twenty of us gathered for dinner and a night of ritual. We sang songs, told stories, and recited litanies. The night celebrated ancient astronomers, and the work that humanity has done for the past 5000 years. It paid tribute to the harshness of the universe, respecting it as a worthy opponent. We explored Lovecraftian mythology, which intersects with our beliefs in interesting ways.

And finally, we looked to the future, vowing to give a gift to tomorrow.

This is the first of 2-3 posts on this subject. In this one, I'm telling a story about what we did and why I wanted to. In the followup(s), I’ll explain the design principles that went into planning such an event, and what we learned from our first execution of it. I’ll also be posting a PDF of a ritual book, similar to the one we read from but with a few changes based on initial, obvious observations.

Why exactly did we do this? Doesn’t this smack of organized religion? Who the hell is Lovecraft and why do we care?

Depending on your background, this may require the bridging of some inferential distance, as well as emotional distance. Bear with me.


Learned Blankness

130 AnnaSalamon 18 April 2011 06:55PM

Related to: Semantic stopsigns, Truly part of you.

One day, the dishwasher broke. I asked Steve Rayhawk to look at it because he’s “good with mechanical things”.

“The drain is clogged,” he said.

“How do you know?” I asked.

He pointed at a pool of backed up water. “Because the water is backed up.”

We cleared the clog and the dishwasher started working.

I felt silly, because I, too, could have reasoned that out.  The water wasn’t draining -- therefore, perhaps the drain was clogged.  Basic rationality in action.[1]

But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”.  And I’d remembered I “didn’t know how mechanical things worked” (a cached thought).  And then -- prompted by my cached belief that there was a magical “way mechanical things work” that some knew and I didn’t -- I stopped trying to think at all.  

“Mechanical things” was for me a mental stopsign -- a blank domain that stayed blank, because I never asked the obvious next questions (questions like “does the dishwasher look unusual in any way?  Why is there water at the bottom?”).

When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets.  They clung rigidly to the handed-down procedures.  It didn’t occur to them to try to understand, or to improvise.  The students treated math the way I treated broken dishwashers.

Martin Seligman coined the term "learned helplessness" to describe a condition in which someone has learned to behave as though they were helpless. I think we need a term for learned helplessness about thinking (in a particular domain).  I’ll call this “learned blankness”[2].  Folks who fall prey to learned blankness may still take actions -- sometimes my students practiced the procedures again and again, hired a tutor, etc.  But they do so as though carrying out rituals to an unknown god -- parts of them may be trying, but their “understand X” center has given up.


Ability to react

73 Swimmer963 18 February 2011 07:19PM

*Note: this post is based on my subjective observations of myself and a small, likely biased sample of people I know. It may not generalize to everyone.

A few days ago, during my nursing lab, my classmates and I were discussing the provincial exam that we’ll have to sit two years from now, when we’re done our degree, in order to work as registered nurses. The Quebec exam, according to our section prof, includes an entire day of simulations, basically acted-out situations where we’ll have to react as we would in real life. The Ontario exam is also a day long, but entirely written.

I made a comment that although the Quebec exam was no doubt a better test of our knowledge, the Ontario exam sounded a lot easier and I was glad I planned to work in Ontario.

“Are you kidding?” said one of the boys in my class. “Simulations are so much easier!”

I was taken aback, reminded myself that my friends and acquaintances are probably weirder than my models of them would predict (thank you AnnaSalamon for that quote), and started dissecting where exactly the weirdness lay. It boiled down to this:

Some people, not necessarily the same people who can ace tests without studying or learn math easily or even do well in sports, are still naturally good at responding to real-life, real-time events. They can manage their stress, make decisions on the spot, communicate flexibly, and even have fun while doing it.

This is something I noticed years ago, when I first started taking my Bronze level lifesaving certifications. I am emphatically not good at this. I found doing “sits” (simulated situations) stressful, difficult, and unpleasant, and I dreaded my turn to practice being the rescuer. I had no problem with the skills we learned, as long as they were isolated, but applying them was harder than the hardest tests I’d had at school.

I went on to pass all my certifications, without any of my instructors specifically saying I had a problem. Occasionally I was accused of having “tunnel vision”; they meant that during a sit, treating my victim and simultaneously communicating with my teammates was more multitasking than my brain could handle.

Practice makes perfect, so I joined the competitive lifeguard team (yes, this exists, see https://picasaweb.google.com/lifeguardpete for photos of competitions). We compete in teams of four. In competition, we go into unknown situations and are scored on how we respond. Situations are timed, usually four minutes, and divided into different events; First Aid, Water Rescue, and Priority Assessment, with appropriate score sheets. It was basically my worst nightmare come true. And thanks to sample bias, instead of being slightly above average, I was blatantly worse than everyone else. It wasn’t just a matter of experience; even newcomers to the team scored higher than me. I stubbornly kept going to practice, and went to competitions, and improved somewhat. When I had my first nursing placement, something I had been stressing about all semester, it went effortlessly. There are advantages to setting your bar way, way higher than it needs to be.


Branches of rationality

75 AnnaSalamon 12 January 2011 03:24AM

Related to: Go forth and create the art!, A sense that more is possible.

If you talk to any skilled practitioner of an art, they have a sense of the depths beyond their present skill level.  This sense is important.  To create an art, or to learn one, one must have a sense of the goal.

By contrast, when I chat with many at Less Wrong meet-ups, I often hear a sense that mastering the sequences will take one most of the way to "rationality", and that the main thing to do, after reading the sequences, is to go and share the info with others.  I would therefore like to sketch the larger thing that I hope our rationality can become.  I have found this picture useful for improving my own rationality; I hope you may find it useful too.

To avoid semantic disputes, I tried to generate my picture of "rationality" by asking not "What should 'rationality' be?" but "What is the total set of simple, domain-general hacks that can help humans understand the most important things, and achieve our goals?"  or "What simple tricks can help turn humans -- haphazard evolutionary amalgams that we are -- into coherent agents?"

The branches

The larger "rationality" I have in mind would include some arts that are well-taught on Less Wrong, others that don't exist yet at all, and others that have been developed by outside communities from which we could profitably steal.

Specifically, a more complete art of rationality might teach the following arts:

1.  Having beliefs: the art of having one's near-mode anticipations and far-mode symbols work together, with the intent of predicting the outside world.  (The Sequences, especially Mysterious answers to mysterious questions, currently help tremendously with these skills.)

2.  Making your beliefs less buggy -- about distant or abstract subjects.  This art aims to let humans talk about abstract domains in which the data doesn’t hit you upside the head -- such as religion, politics, the course of the future, or the efficacy of cryonics -- without the conversation turning immediately into nonsense.  (The Sequences, and other discussions of common biases and of the mathematics of evidence, are helpful here as well.)

3. Making your beliefs less buggy -- about yourself.  Absent training, our models of ourselves are about as nonsense-prone as our models of distant or abstract subjects.  We often have confident, false models of what emotions we are experiencing, why we are taking a given action, how our skills and traits compare to those around us, how long a given project will take, what will and won’t make us happy, and what our goals are.  This holds even for many who've studied the Sequences and who are reasonably decent on abstract topics; other skills are needed.[1]


Confidence levels inside and outside an argument

129 Yvain 16 December 2010 03:06AM

Related to: Infinite Certainty

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in "But that still leaves a one in a billion chance, right?". The majority of the probability is in "That argument is flawed". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.


More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
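The adjustment described above can be written out as a simple mixture: weight the model's internal probability by your trust that the model is sound, and fall back on some prior when it isn't. This is a sketch of the reasoning, with the trust level and the fallback prior being illustrative assumptions rather than numbers from the post:

```python
def external_confidence(internal_p, p_model_sound, p_if_flawed=0.5):
    """External confidence in an event, given a model's internal
    probability, the chance the model is sound, and an (assumed)
    ignorance prior to use if the model is flawed."""
    return p_model_sound * internal_p + (1 - p_model_sound) * p_if_flawed

internal = 999_999_999 / 1_000_000_000

# Even 99.9% trust in the model caps external confidence far below the
# model's internal figure: the "argument is flawed" branch dominates.
print(external_confidence(internal, 0.999))
```

However extreme the internal probability, external confidence can never exceed the probability that the model is sound plus the flawed-model fallback, which is why a one-in-a-billion claim should not survive contact with a merely-human modeling process.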


Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

105 patrissimo 14 September 2010 04:17PM

Introduction

Less Wrong is explicitly intended to help people become more rational.  Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world), and instrumental rationality (the art of achieving your goals effectively).  Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice).  Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people's real-world performance.

It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.

(This post was motivated by Anna Salamon's Humans are not automatically strategic and the response, more detailed background in footnote [1].)

Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka "efficient productivity"), and b) Some (perhaps many) readers read it towards that goal.  It is this that I think is self-deception.  I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere).  I merely dispute that reading fun things on the internet can help people become more instrumentally rational.  Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that "deliberate practice towards self-improvement" is more valuable and more important than "reading entertaining ideas on the internet" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.

Why Less Wrong is not an effective route to increasing rationality.

Definition:

Work: time spent acting in an instrumentally rational manner, i.e., forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.

By definition, Work is what (instrumental) rationalists wish to do more of.  A corollary is that Work is also what is required in order to increase one's capacity to Work.  This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. [2]

That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the "hard" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy.  (And to a lesser degree, determining your goals and the most effective tasks to achieve them).  This assumption may not hold true for everyone, but with the amount of discussion of "Akrasia" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.

Figuring out goals is fairly personal, often subjective, and can be difficult.  I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality [3] are useful to this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff.  Goals should be re-examined regularly, but occasionally (roughly yearly, and at major life forks).  An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.

And figuring out tasks, once you turn your attention to it, is pretty easy.  Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks.  The big deal here for many (most?) of us, is the conscious direction of our attention.

What is the enemy of consciously directed attention?  It is shiny distraction.  And what is Less Wrong?  It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people.  As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging".  Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity.  It's the opposite of this classic piece of advice.

