
Purposeful Anti-Rush

4 Elo 08 March 2016 07:34AM

Why do we rush?

Things happen; life gets in the way, and suddenly we find ourselves trying to get somewhere in less time than the trip actually takes.  So, intending to get there sooner, to somehow compensate ourselves for being late, we rush.  We run; we get clumsy; we drop things; we forget things; we make mistakes; we scribble instead of writing; we scramble and we slip up.

Today I am telling you to stop that.  Don't do that.  It's literally the opposite of what you want to do.

Rushing has a tendency to do the opposite of what I want it to do.  I rush with the key in the lock; I rush on slippery surfaces and I fall over, I rush with coins and I drop them.  NO!  BAD!  Stop that.  This is one of my bugs.

What you (or I) really want when we are rushing is to get there sooner, to get things done faster.  

Instrumental experiment: Next time you are rushing, pay attention and experiment; try to figure out what you end up doing that takes longer than it would if you weren't rushing.

The time after that, when you are rushing, try slowing down instead, and this time observe whether you get there faster.

Run as many experiments as you like.

Experimenter’s note: Maybe you are really good at rushing and really bad at slowing down.  Maybe you don't need to try this.  Maybe the combination of slowing down and being nervous about being late is entirely unhelpful for you.  Report back.

When you are rushing, purposefully slow down. (or at least try it)

Meta: Time to write 20mins

My Table of contents contains other things I have written.

Feedback welcome.

Covariance in your sample vs covariance in the general population

27 RomeoStevens 16 May 2012 12:17AM

A popular-media take on a subtle problem in sampling.  I found the graph quite illustrative.


Correcting errors and karma

-5 rebellionkid 29 April 2012 05:03PM

An easy way to win cheap karma on LW:

  1. Publicly make a mistake.
  2. Wait for people to call you on it.
  3. Publicly retract your errors and promise to improve.

Post 1 gets you negative karma; post 3 gets you positive karma. Anecdotally, the net result is generally very positive.

This doesn't seem quite sane. Yes, it is good for us to reward people for changing their minds based on evidence. But it's still better not to have made the error the first time round. At the very least, you should get less net karma for changing your mind towards the correct answer than you would for stating the correct thing the first time.
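To make the arithmetic concrete, here is a toy sketch of the incentive. All of the karma numbers are invented for illustration; actual vote totals vary wildly from post to post.

```python
# Invented karma numbers, purely for illustration.
mistake_post = -5         # step 1: the public mistake is downvoted
retraction_post = 20      # step 3: the public retraction is upvoted
correct_first_time = 10   # counterfactual: stating the correct thing at the start

net_via_error = mistake_post + retraction_post
# The perverse case: erring and then retracting nets more karma
# than simply being right the first time.
perverse_incentive = net_via_error > correct_first_time
```

Under these (made-up) numbers, the error-then-retraction route nets 15 karma versus 10 for being right at the start, which is exactly the incentive problem being described.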
Is there an advantage to this signalling-approval-for-updates that outweighs the value of karma as an indicator of the general correctness of posts?

If so, can some other signal of general correctness be devised?

If not, what karma etiquette should we impose to ensure this effect doesn't happen?

When programs have to work-- lessons from NASA

27 NancyLebovitz 31 July 2011 03:22PM

They Write the Right Stuff is about software which "never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program, each 420,000 lines long, had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors."

The programmers work from 8 to 5, with occasional late nights. They wear dressy clothes, not flashy or grungy. I assume there's a dress code, but I have no idea whether conventional clothes are actually an important part of the process. I'm sure that working reasonable numbers of hours is crucial, though I also wonder whether those hours need to be standard office hours.

"And the culture is equally intolerant of creativity, the individual coding flourishes and styles that are the signature of the all-night software world. "People ask, doesn't this process stifle creativity? You have to do exactly what the manual says, and you've got someone looking over your shoulder," says Keller. "The answer is, yes, the process does stifle creativity." " I have no idea what's in the manual, or if there can be a manual for something as new as self-optimizing AI. I assume there could be a manual for some aspects.

What follows are the main points, quoted from the article:

1. The important thing is the process: the product is only as good as the plan for the product. About one-third of the process of writing software happens before anyone writes a line of code.

2. The best teamwork is a healthy rivalry. The central group breaks down into two key teams: the coders - the people who sit and write code -- and the verifiers -- the people who try to find flaws in the code. The two outfits report to separate bosses and function under opposing marching orders. The development group is supposed to deliver completely error-free code, so perfect that the testers find no flaws at all. The testing group is supposed to pummel away at the code with flight scenarios and simulations that reveal as many flaws as possible. The result is what Tom Peterson calls "a friendly adversarial relationship."

I note that it's rivalry between people who are doing different things, not people competing to get control of a project.

3. The database is the software base.

One is the history of the code itself -- with every line annotated, showing every time it was changed, why it was changed, when it was changed, what the purpose of the change was, what specifications documents detail the change. Everything that happens to the program is recorded in its master history. The genealogy of every line of code -- the reason it is the way it is -- is instantly available to everyone.

The other database -- the error database -- stands as a kind of monument to the way the on-board shuttle group goes about its work. Here is recorded every single error ever made while writing or working on the software, going back almost 20 years. For every one of those errors, the database records when the error was discovered; what set of commands revealed the error; who discovered it; what activity was going on when it was discovered -- testing, training, or flight. It tracks how the error was introduced into the program; how the error managed to slip past the filters set up at every stage to catch errors -- why wasn't it caught during design? during development inspections? during verification? Finally, the database records how the error was corrected, and whether similar errors might have slipped through the same holes.
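The article gives the categories of information the error database tracks, but not a schema. As a minimal sketch of what one record might look like (all names here are my own invention, not the shuttle group's):

```python
from dataclasses import dataclass
from enum import Enum

class Activity(Enum):
    """What was going on when the error surfaced (per the article)."""
    TESTING = "testing"
    TRAINING = "training"
    FLIGHT = "flight"

@dataclass
class ErrorRecord:
    """One entry in a hypothetical version of the error database."""
    discovered_on: str        # when the error was discovered
    revealing_commands: str   # what set of commands revealed it
    discovered_by: str        # who discovered it
    activity: Activity        # testing, training, or flight
    how_introduced: str       # how the error got into the program
    stages_missed: list       # which filters (design, inspection,
                              # verification) failed to catch it
    correction: str           # how the error was corrected
    similar_holes: str        # whether similar errors might have
                              # slipped through the same holes
```

The point of tracking all of this is visible in the next section: the record is structured so that every field feeds a question about the process, not about the person.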

The group has so much data accumulated about how it does its work that it has written software programs that model the code-writing process. Like computer models predicting the weather, the coding models predict how many errors the group should make in writing each new version of the software. True to form, if the coders and testers find too few errors, everyone works the process until reality and the predictions match.
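A toy version of that predictive idea: estimate the expected number of errors in a new release from a historical defect rate, then treat a too-clean result as a signal to keep looking. The rates and counts below are invented for illustration, not the shuttle group's actual figures.

```python
def predicted_errors(lines_changed, historical_errors_per_kloc):
    """Expected error count for a release, from a historical rate
    expressed as errors per thousand lines of changed code."""
    return lines_changed / 1000 * historical_errors_per_kloc

expected = predicted_errors(6000, 0.5)   # 6 KLOC at 0.5 errors/KLOC
found = 1

# Finding far fewer errors than the model predicts is treated as
# suspicious: everyone works the process until reality and the
# predictions match.
suspiciously_clean = found < expected / 2
```

The counterintuitive part, faithful to the article, is the direction of the check: the alarm fires when too *few* errors are found, because that suggests the testing was inadequate rather than the code perfect.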

4. Don't just fix the mistakes -- fix whatever permitted the mistake in the first place.

The process is so pervasive, it gets the blame for any error -- if there is a flaw in the software, there must be something wrong with the way it's being written, something that can be corrected. Any error not found at the planning stage has slipped through at least some checks. Why? Is there something wrong with the inspection process? Does a question need to be added to a checklist?

Importantly, the group avoids blaming people for errors. The process assumes blame -- and it's the process that is analyzed to discover why and how an error got through. At the same time, accountability is a team concept: no one person is ever solely responsible for writing or inspecting code. "You don't get punished for making errors," says Marjorie Seiter, a senior member of the technical staff. "If I make a mistake, and others reviewed my work, then I'm not alone. I'm not being blamed for this."