Dilbert creator and bestselling author Scott Adams recently wrote a LessWrong-compatible advice book that even contains a long list of cognitive biases. Adams told me in a phone interview that he is a lifelong consumer of academic studies, which perhaps accounts for why his book jibes so well with LessWrong teachings. Along with HPMOR, How to Fail at Almost Everything and Still Win Big should be among your first choices when recommending books to novice rationalists. Below are some of the main lessons from the book, followed by a summary of my conversation with Adams about issues of particular concern to LessWrong readers.

 

My favorite passage describes when Adams gave a talk to a fifth-grade class and asked everyone to finish the sentence “If you play a slot machine long enough, eventually you will…”  The students all shouted “WIN!” because, Adams suspects, they had the value of persistence drilled into them and confused it with success.  

“WIN!” would have been the right answer if you didn’t have to pay to play but the machine still periodically gave out jackpots.  Adams thinks you can develop a system to turn your life into a winning slot machine that doesn’t require money but does require “time, focus, and energy” to repeatedly pull the lever.  

Adams argues that maximizing your energy level through proper diet, exercise, and sleep should take priority over everything else. Even if your only goal is to help others, be selfish with respect to your energy level because it will determine your capacity for doing good. Adams has convinced me that appropriate diet, exercise, and sleep should be the starting point for effective altruists. Adams believes we have limited willpower and argues that if you make being active every single day a habit, you won't have to consume any precious willpower to motivate yourself to exercise.

Since most pulls of the life slot machine will win you nothing, Adams argues that lack of fear of embarrassment is a key ingredient for success.  Adams would undoubtedly approve of CFAR’s comfort zone expansion exercises.

Adams lists skills that increase your chances of success. These include knowledge of public speaking, psychology, business writing, accounting, design, conversation, overcoming shyness, a second language, golf, proper grammar, persuasion, technology (hobby level), and proper voice technique. He gives a bit of actionable advice on each, basically ideas for becoming more awesome. I wish my teenage self had been told of Adams' theory that a shy person can frequently achieve better social outcomes by pretending that he is an actor playing the part of an extrovert.

Adams believes we should rely on systems rather than goals, and indeed he thinks that "goals are for losers." If, when playing the slot machine of life, your goal is winning a jackpot, then you will feel like a loser each time you don't win. But if, instead, you are systems oriented, then you can be in a constant state of success, something that will probably make you happier.

Adams claims that happiness "isn't as dependent on your circumstances as you might think," and "anyone who has experienced happiness probably has the capacity to spend more time at the top of his or her personal range and less time near the bottom." His suggestions for becoming happier include improving your exercise, diet, and sleep; having a flexible work schedule; being able to imagine a better future; and being headed towards a brighter future.

The part of the book most likely to trouble LessWrong readers is when Adams recommends engaging in self-delusion.  He writes: 

“Athletes are known to stop shaving for the duration of a tournament or to wear socks they deem lucky.  These superstitions probably help in some small way to bolster their confidence, which in turn can influence success.  It’s irrelevant that lucky socks aren’t a real thing…Most times our misconceptions about reality are benign and sometimes, even helpful.  Other times, not so much.”

For me, being rational means having accurate beliefs and useful emotions.  But what if these two goals conflict?  For example, a college friend of mine who used to be a book editor wrote “Most [authors] would do better by getting a minimum wage job and spending their entire paycheck on lottery tickets.”  I know this is true for me, yet I motivated myself to write my last book in part by repeatedly dreaming of how it would be a best seller, and such willful delusions did, at the very least, make writing the book more enjoyable.

To successfully distort reality you probably need to keep two separate mental books: the false one designed to motivate yourself, and the accurate one to keep you out of trouble.  If you forget that you don’t really have a “reality distortion field”, that you can’t change the territory by falsifying your map, you might make a Steve Jobs level error by, say, voluntarily forgoing lifesaving medical care because you think you can wish your problem away. 

The strangest part of the book concerns affirmations, which Adams defines as the "practice of repeating to yourself what you want to achieve while imagining the outcome you want." Adams describes his fantastic success with achieving his affirmations, which included his becoming a famous cartoonist, having a seemingly hopeless medical problem fixed, and scoring at exactly the 94th percentile on the GMAT. Adams writes that the success of affirmations for him and others seems to go beyond what could be achieved by positive thinking. Thankfully he rules out magic as a possible explanation and suggests that the success of affirmations might be due to selective memories, false memories, optimists tending to notice opportunities, the selection effect of people who make affirmations, or mysterious science we don't understand. His support of affirmations seems to contradict his dislike of goals.

 

Our Phone Conversation

I was able to get a 15-minute phone interview with Adams.

He has heard of LessWrong, but doesn’t have any specific knowledge of us.  He thinks the Singularity is a very probable outcome for mankind.  He believes it will likely turn out all right due to what he calls “Adams’ Law of Slow-Moving Disasters” which says that “any disaster we see coming with plenty of advance notice gets fixed.”  To the extent that Adams is correct, we will owe much to Eliezer and associates for providing us with loud early warnings of how the Singularity might go wrong.

I failed in my brief attempt to get Adams interested in cryonics. He likes the idea of brain uploading and thinks it will be unnecessary to save the biological part of us. I was unable to convince him that cryonics is a good intermediate step until someone develops the technology for uploading. He mentioned that so long as the Internet survives, a huge amount of him basically will as well.

Recalling Adams' claim that having a high tolerance for embarrassment is a key attribute for success, I asked him about my theory of how American dating culture, in which it's usually the man who risks rejection by asking the woman to go on the first date, gives men an entrepreneurial advantage because it eventually makes men more tolerant of rejection. Adams didn't directly respond to my theory, but brought up evolutionary psychology by saying that men would encounter much rejection because of their preference for variety and role as hunters. Adams stressed that this was just speculation and not backed up by evidence.

Adams has heard of MetaMed.  He is very optimistic about the ability of medicine to become more rational and to handle big data.  When I pointed out that doctors often don’t know basic statistics, he said that this probably doesn’t impede their ability to treat patients.

After I explained the concept of akrasia to Adams, he mentioned the book "The Power of Habit" and told me that we can use science to develop better habits. (Kaj_Sotala recently wrote a highly upvoted LessWrong post on "The Power of Habit.")

Adams suggested that if you have trouble accomplishing a certain task, just focus on the part that you can do: for example, if spending an hour at the gym seems too difficult, then just think about putting on your sneakers.

Although he didn't use these exact words, Adams basically defended himself against the charge of "other-optimizing." He explained that it would be very difficult to describe to an alien what a horse is. But once you succeeded in describing the horse, it would be much easier to describe a zebra because you could do so in part by making references to the horse. Adams said he knows his advice isn't ideal for everyone, but it provides a useful template you can use to develop a plan better optimized for yourself.

At the end of the interview Adams said he was surprised I had not brought up assisted suicide, given his recent blog post on the topic.  In the post Adams wrote:

“My father, age 86, is on the final approach to the long dirt nap (to use his own phrase). His mind is 98% gone, and all he has left is hours or possibly months of hideous unpleasantness in a hospital bed. I'll spare you the details, but it's as close to a living Hell as you can get...”

“If you're a politician who has ever voted against doctor-assisted suicide, or you would vote against it in the future, I hate your fucking guts and I would like you to die a long, horrible death. I would be happy to kill you personally and watch you bleed out. I won't do that, because I fear the consequences. But I'd enjoy it, because you motherfuckers are responsible for torturing my father. Now it's personal.”

Based on this blog post I suspect that Yvain would agree with Adams about the magnitude and intensity of the evil of outlawing assisted suicide for the terminally ill.

If you are unwilling to buy Adams’ book, I strongly recommend you at least read his blog, which normally has a much softer and less angry tone than the passage I just cited.

Comments

I really wanted to like this review, because I've been meaning to write such a review for LW on the same book: it's really a great book.

I found this review pretty unreadable, though, and I think it actually leaves out most of the things I'd expect LWers to find useful in the book, such as:

  • The concept of "moist robots"
  • Why happiness mostly depends on stupid little things like eating right and getting enough sleep
  • How to trick yourself into eating more healthy foods

...and a long laundry list of other hacks and unique perspectives. Even though you briefly mention some of the happiness connections, there's nothing in the review that shows what's different or interesting or LW-ish about Adams' approach to the topic.

On the readability of your review, it would be strongly helped by breaking out your points topically, and not beginning every other sentence with "Adams". ;-)

Thanks for the constructive criticism (upvoted). I started lots of sentences with "Adams" because I always wanted it to be clear to the reader if I was referring to an idea of Adams, my idea, or something from another LW writer. A better writer, however, probably could have accomplished this without being so repetitive.

There is certainly a lot of valuable stuff in the book that I didn't cover and LessWrong would benefit from you writing a review as well.

I always wanted it to be clear to the reader if I was referring to an idea of Adams, my idea, or something from another LW writer.

It's a book review: the default is going to be Adams. If you're talking about your idea or something from another writer, then you should call it out explicitly, and you shouldn't do it much since you're reviewing Adams.

A better writer, however, probably could have accomplished this without being so repetitive.

Taboo "better writer", and look at it from an information theory point of view: you want an efficient data encoding, so the shortest code will not be the one where you call something out in most sentences. Writing in the large depends on structures whose purpose is to let you refer to something once as a default context, so you don't have to say it over and over.

For example, you could use subheadings of "Adams says:" followed by a series of ideas, followed by other subheads for other people. Or, if you are trying to compare and contrast lots of ideas in this way, use a tabular structure, or a "dueling quotes" style where you do something like:

Adams: "some provocative thought"

Yudkowsky: "similar provocative thought"

Adams: "random musing"

Yvain: "parallel musing"

In essence, it's not so much about better "writing" as better organization or presentation. Which is technically part of writing, but easier to learn if you taboo "better writing" and approach it from a data structures perspective.

(Huh. Funny, I never actually thought of it this way before, but that's really where I think about this kind of stuff from: I think of it the way I think of data structures in programming, which are all about optimizing your representation format for a particular performance goal. The same is true in writing, in that the structure of a piece needs to reflect the change you want to make in your reader's thinking. So, if you want them to think, "Adams' advice parallels LW thinking in a lot of areas", the natural structure of the review should reflect those parallels as directly as possible, rather than talking around them.)

Any disaster we see coming with plenty of advance notice gets fixed.

There's quite a bit of wiggle room in that.

Examples of disasters that didn't get fixed in time:

  • The Mongols (Europe escaped through sheer dumb luck)
  • Hitler
  • Deforestation in pre-industrial times
  • Tobacco
  • Overfishing
  • Antibiotic resistance (the jury's still out on this one)
  • The African slave trade

Anyone want to add some more?

Well, hold on a second; what does "didn't get fixed in time" even mean for most of these examples?

Was Hitler not "fixed in time" because he killed as many people as he did, or did he "get fixed" before he could kill the much larger number of people he would have preferred to kill in Eastern Europe? Was the (European; presumably we're ignoring the Arab slave trade) Slave Trade in Africa stopped "in time" for guys like the Mende tribesmen freed in the Amistad case, or not "in time" from the perspective of those already enslaved? Does it count as a "fix" if everyone smoking tobacco now knows the health risks or will it not count as fixed unless it is completely eliminated, and again when is the cutoff for being in time?

A lot of this looks like complaining that these things happened at all rather than whether the responses to them were reasonably prompt and effective.

Was the (European; presumably we're ignoring the Arab slave trade) Slave Trade in Africa stopped "in time" for guys like the Mende tribesmen freed in the Amistad case, or not "in time" from the perspective of those already enslaved?

Enslaving people is something of a special case, since the negative consequences of enslaving someone would've been pretty obvious from the get-go. So "stopped 'in time'" would presumably mean refraining from transatlantic slave trading entirely, or at least not enslaving more and more people from 1515ish until the 1790s.

Does it count as a "fix" if everyone smoking tobacco now knows the health risks or will it not count as fixed unless it is completely eliminated, and again when is the cutoff for being in time?

That question seems to me to miss the point. It's obvious that the problem of people dying because of smoking isn't "fixed" for any reasonable definition of "fixed", given that the global number of smokers — and the rate at which smoking is killing people — continues to rise. Since the deadliness of tobacco smoking has been established for at least six decades, I'd say this is a legitimate example of a disaster we saw coming that isn't "fixed".

A lot of this looks like complaining that these things happened at all rather than whether the responses to them were reasonably prompt and effective.

Likewise, a lot of that looks like nitpicking. Even if there's disagreement about when a problem should be said to be "fixed", a prerequisite for a problem being "fixed" is that it's not getting worse.

Likewise, a lot of that looks like nitpicking. Even if there's disagreement about when a problem should be said to be "fixed", a prerequisite for a problem being "fixed" is that it's not getting worse.

The thing is, that's sort of the problem: for a lot of these disasters, it's not clear what parameters we're even counting or whose response we're looking at. I'm not trying to nitpick (I cut a lot out of my first comment's examples for that reason); I honestly don't know how we're supposed to slice most of these. And that seems rather important if we're going to judge whether issues are fixed in a timely manner.

Like, for example, "the Mongols"; the Mamluks did a really excellent job of putting together a defense once it was clear that Cairo would be next in line after Baghdad, the Song sat there and watched for decades as Genghis put his horde together before bothering to defend themselves, and the Mongols themselves did nothing to prevent their own wonky system of succession from predictably breaking their empire apart in between. That's three different Mongol disasters with three different responses by three different groups, each with different outcomes, and I have no idea which one we're even talking about (or if we're talking about a fourth one entirely).

The ones you pointed out from my previous comment, (European) slavery in Africa and smoking, have similar issues; what exactly is the disaster, how long is too long for a solution, and who is responsible for stopping it?

The Quakers decided slavery was immoral in 1783, founded the 'Society for Effecting the Abolition of the Slave Trade' in 1787, and twenty years later had killed the slave trade in the British Empire (with the rest of Europe's slave trade crumbling soon after). It's tough to see how they could have been more prompt once they had invented the modern concept of abolitionism, and it's pretty odd to call out earlier Christians for not responding to something only an abolitionist would even call a disaster in the first place. Sure we're all abolitionists now, but that's largely an accident of history; the idea is fairly non-obvious on its own, especially from a consequentialist point of view.

With smoking, the death rates are increasing but primarily in the developing world where cigarette smoking is still pretty new. In the US, our regulatory incentives and education have done a good job reducing the death toll, and nowadays people generally know the risks when they pick up a pack (as do their insurance companies), all in just a few decades; domestically, it looks like the main disaster now is that the people who do choose to risk their health are increasingly able to externalize the cost of that decision through the government. My guess is that those developing countries with functioning governments will probably follow our example and we'll see falling rates globally pretty soon as well, but even so it's not far-fetched to say the disaster here is dealt with and theirs are separate (albeit similar) crises.

If we're going to say people haven't responded to a disaster quickly enough, actually defining said disaster, the timescale, and who the responders are is fairly crucial. Slicing out big chunks of time and space where things we don't like are happening is easy, but for the purposes of understanding how people tend to respond to crises, it makes more sense to try to cut as closely to the issue as possible.

Like, for example, "the Mongols"; [...]

I'll give you the Mongols (and Hitler) since I find them harder to call.

The ones you pointed out from my previous comment, (European) slavery in Africa and smoking, have similar issues; what exactly is the disaster, how long is too long for a solution, and who is responsible for stopping it?

For deciding whether smoking and transatlantic slavery are counterexamples to "Adams' Law of Slow-Moving Disasters" (I'll just call it ALoSMD), the third question is irrelevant and the first question doesn't actually need a full, comprehensive answer. I can give just enough of a description of the problem to allow us to eyeball the problem's magnitude over time. If it became visibly worse during some period, that suffices to show it wasn't fixed during that period, and a complete description of the problem is not necessary. For smoking, we can just see when the rate at which people died from smoking was/is increasing; for transatlantic enslavement, we can ask how long the enslavement rate trended up.

(Why do I say the third question's irrelevant here? Because ALoSMD doesn't say anything about who fixes the problem or how it's fixed; it just says the problem gets fixed. Whether it's fixed by the people "responsible" or someone else is an issue the Law pushes aside.)

The Quakers decided slavery was immoral in 1783, founded the 'Society for Effecting the Abolition of the Slave Trade' in 1787, and twenty years later had killed the slave trade in the British Empire (with the rest of Europe's slave trade crumbling soon after). It's tough to see how they could have been more prompt once they had invented the modern concept of abolitionism, and it's pretty odd to call out earlier Christians for not responding to something only an abolitionist would even call a disaster in the first place.

It's not clear to me why one would zero in on the Quakers specifically (since ALoSMD doesn't care about the who or the how), nor why one should only start the clock running from 1783. The slavers were hardly oblivious to what they were doing, and they (if no one else) could've acknowledged & avoided negative consequences of their actions from the beginning.

I grant it's morally anachronistic to criticize historical people for failing to meet current moral standards, but I don't believe that's relevant. If your best judgement, or my best judgement, says transatlantic slavery was a disaster, then as far as you or I are concerned, it simply was a disaster; that people from 400 years ago would disagree would merely make them wrong by our lights, and doesn't mean transatlantic slavery wasn't a disaster after all.

With smoking, the death rates are increasing but primarily in the developing world where cigarette smoking is still pretty new.

That's true.

In the US, our regulatory incentives and education have done a good job reducing the death toll, and nowadays people generally know the risks when they pick up a pack (as do their insurance companies), all in just a few decades;

I'd dispute the idea that US smokers generally know the risks (most of them presumably know about the risk of lung cancer, but probably not about the risks of e.g. erectile dysfunction, or giving other people cardiovascular disease through secondhand smoke) but admittedly that's a side point.

domestically, it looks like the main disaster now is that the people who do choose to risk their health are increasingly able to externalize the cost of that decision through the government.

Another side point, but a Fermi estimate suggests otherwise. tobaccofreekids.org estimates that each year, in the US, smoking leads to $71 billion of taxpayer-funded government spending, and causes about 400,000 smokers' deaths. I handwavingly convert the latter number into dollars by multiplying it by the average years of life a smoker loses (13) and a $50,000 guesstimate for the value of a year of life, giving $260 billion. Setting the two dollar values side by side, the loss of life in itself appears to be the bigger problem.
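For anyone who wants to sanity-check that comparison, here is the same back-of-the-envelope arithmetic spelled out (every number is one of the guesstimates above, not independent data):

```python
# Fermi estimate from the figures cited above (tobaccofreekids.org
# estimates plus my guesstimates) -- rough inputs, not hard data.
deaths_per_year = 400_000        # annual US deaths attributed to smoking
years_lost_per_death = 13        # average years of life a smoker loses
value_per_life_year = 50_000     # guesstimated dollar value of a life-year

loss_of_life = deaths_per_year * years_lost_per_death * value_per_life_year
gov_spending = 71e9              # annual taxpayer-funded spending on smoking

print(f"Loss of life: ${loss_of_life / 1e9:.0f}B")   # Loss of life: $260B
print(f"Gov. spending: ${gov_spending / 1e9:.0f}B")  # Gov. spending: $71B
```

Even if the $50,000-per-life-year guesstimate were off by a factor of two in either direction, the loss-of-life figure would still be comparable to or larger than the government-spending figure.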

My guess is that those developing countries with functioning governments will probably follow our example and we'll see falling rates globally pretty soon as well, but even so it's not far-fetched to say the disaster here is dealt with and theirs are separate (albeit similar) crises.

I could treat them as separate, but since CronoDAS picked out tobacco in general as a disaster, I'm inclined to look at global deaths. (And that seems like a natural thing to do. From the perspective of a dispassionate observer, why weigh the deaths of those in developing countries on a different scale? Also, if I did treat the developing world tobacco disaster as a separate problem, and if the best I could say about that separate problem were that it'd start improving "pretty soon", that'd indicate the problem is getting worse, hence not fixed.)

I still haven't addressed the second question out of your three ("how long is too long for a solution"), but since I can give it reasonable answers, I don't think it invalidates smoking and transatlantic slavery as counterexamples.

With respect to slavery, I can only repeat that its negative consequences were not subtle or accidental side effects; the slavers knew what they were doing from the start. As such, any delay in its prohibition would be "too long". Deliberately setting my house on fire would be a foreseeable disaster, and even if I subsequently called the fire brigade early enough to save most of the house, I could hardly say the disaster was averted in time!

As for smoking, I'd set the year 2000 as an upper bound for when smoking death rates should've peaked. Why 2000? Because by that point they'd started falling in developed countries, and I see no compelling reason why that couldn't have been the case worldwide. A sufficient reason why that wasn't the case worldwide is, as far as I can tell, inadequate tobacco control.

If we're going to say people haven't responded to a disaster quickly enough, actually defining said disaster, the timescale, and who the responders are is fairly crucial. [...] for the purposes of understanding how people tend to respond to crises, it makes more sense to try to cut as closely to the issue as possible.

It is of course a good idea to do these things if we are trying to maximize our understanding of a problem and how people respond to it. But that's not what CronoDAS & I have tried to do here. We've set ourselves the less onerous task of identifying counterexamples to ALoSMD. To carry out this narrower task we need not do all you say we need to.

I may be coming off as bullet-headed here, but we really shouldn't dismiss likely counterexamples to ALoSMD prematurely. If ALoSMD is wrong — and I reckon it is — it's a good idea to underline that fact before the ALoSMD meme lodges in people's heads and makes them complacent about civilizational & existential risks.

It may partly be because these were not recognized as disasters when they were approaching, either because those responsible lacked the big picture, lacked some fundamental knowledge, or thought it was a good thing.
Anyway, I think that the law of slow-moving disasters is too weak to stand against the Planning Fallacy.

If by 'a bit of wiggle room' you mean 'a number of counterexamples', yup, you're right.

Bought the book based on these recommendations. Now I'd be curious to hear what people think about the things that Adams says about goals vs. systems:

Throughout my career I’ve had my antennae up, looking for examples of people who use systems as opposed to goals. In most cases, as far as I can tell, the people who use systems do better. The systems-driven people have found a way to look at the familiar in new and more useful ways.

To put it bluntly, goals are for losers. That’s literally true most of the time. For example, if your goal is to lose ten pounds, you will spend every moment until you reach the goal— if you reach it at all— feeling as if you were short of your goal. In other words, goal-oriented people exist in a state of nearly continuous failure that they hope will be temporary. That feeling wears on you. In time, it becomes heavy and uncomfortable. It might even drive you out of the game.

If you achieve your goal, you celebrate and feel terrific, but only until you realize you just lost the thing that gave you purpose and direction. Your options are to feel empty and useless, perhaps enjoying the spoils of your success until they bore you, or set new goals and reenter the cycle of permanent presuccess failure. [...]

Goal-oriented people exist in a state of continuous presuccess failure at best, and permanent failure at worst if things never work out. Systems people succeed every time they apply their systems, in the sense that they did what they intended to do. The goals people are fighting the feeling of discouragement at each turn. The systems people are feeling good every time they apply their system. That’s a big difference in terms of maintaining your personal energy in the right direction.

The system-versus-goals model can be applied to most human endeavors. In the world of dieting, losing twenty pounds is a goal, but eating right is a system. In the exercise realm, running a marathon in under four hours is a goal, but exercising daily is a system. In business, making a million dollars is a goal, but being a serial entrepreneur is a system.

For our purposes, let’s say a goal is a specific objective that you either achieve or don’t sometime in the future. A system is something you do on a regular basis that increases your odds of happiness in the long run. If you do something every day, it’s a system. If you’re waiting to achieve it someday in the future, it’s a goal.

Language is messy, and I know some of you are thinking that exercising every day sounds like a goal. The common definition of goals would certainly allow that interpretation. For our purposes, let’s agree that goals are a reach-it-and-be-done situation, whereas a system is something you do on a regular basis with a reasonable expectation that doing so will get you to a better place in your life. Systems have no deadlines, and on any given day you probably can’t tell if they’re moving you in the right direction.

My proposition is that if you study people who succeed, you will see that most of them follow systems, not goals. When goal-oriented people succeed in big ways, it makes news, and it makes an interesting story. That gives you a distorted view of how often goal-driven people succeed. When you apply your own truth filter to the idea that systems are better than goals, consider only the people you know personally. If you know some extra successful people, ask some probing questions about how they got where they did. I think you’ll find a system at the bottom of it all, and usually some extraordinary luck.

This seems to somewhat contradict the advice in the massively-upvoted Humans are not automatically strategic, in which Anna Salamon suggests that we should:

(a) Ask ourselves what we’re trying to achieve;
(b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
(c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
(d) Gather that information (e.g., by asking as how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
(e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
(f) Focus most of the energy that isn’t going into systematic exploration, on the methods that work best;
(g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
(h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;

On the other hand, some of the advice in "not automatically strategic" could also be seen as suggestions for how to evaluate your systems and set them up in a way that actually serves your aims... so they're not necessarily as contradictory as they might seem at first.

Given that people untrained in the art of rationality don't do well with goals because they are not automatically strategic, the possible solutions are to forget about goals and instead use systems, or to take a more rational approach towards goals.

Take a rational approach towards goals by making plans and systems that make you actually follow the plans. Then use the systems.

I would start with the assumption that 99% of the time (which is probably an understatement) I am not strategic. Therefore, in the remaining 1% of the time, instead of running towards my goals directly, I should quickly think about something that will make me somewhat more likely to contribute to the goals during the remaining 99%. If it is something that could change the 99:1 ratio, that's even better!

About:

The part of the book most likely to trouble LessWrong readers is when Adams recommends engaging in self-delusion.

and

Adams defines [affirmations] as the “practice of repeating to yourself what you want to achieve while imagining the outcome you want.”

Having by random chance just read Stuck in the Middle with Bruce, the 'obvious' explanation for why this might work is that it balances one subconscious effect (the inner self-deprecating, excuse-searching Bruce) with another one: the affirmative positive delusion.

This may work as a heuristic and for people who cannot deal with their inner Bruce in other ways, but it doesn't sound like good general advice for rationalists. It surely doesn't for me. But then, my Bruce is no frightened, traumatized voice but an aware, risk-averse plan-B agent. If I lose, I will know why I make excuses.

I read only the summary. I don't like interviews (they are often too slow).

Any disaster we see coming with plenty of advance notice gets fixed.

IIRC, there was pretty good advance warning about 9/11, but the information was spread out among various organizations. So it depends on what one means by "advance warning". I guess I'm unsatisfied with this quote since it seems to be a fully general counterargument against planning for any potential disaster.

I work for the government, so I've read and been briefed on a lot of horror stories about impending acquisition/engineering disasters that were vaguely "known", but were marched forward at full steam because the overall culture was unaware of biases like sunk cost; think of the Obamacare website. Let's just say that its rollout, with all of its bugs, was not a surprise to me.

As someone who is familiar with Adams' writing, when he talks about his law it's pretty clear that he is using a definition of "known" that doesn't include either of the scenarios you mentioned. Unfortunately, this review doesn't include his standard examples and other clarifications.

As someone who is a fan of Adams, but has a long reading list, I found this very helpful. It convinced me to move the book up several slots in said list, but not to run out and buy it immediately.

It convinced me to move the book up several slots in said list, but not to run out and buy it immediately.

That's because the review isn't sufficient praise of the actual book. If a person were only going to read one self-help book in their lifetime, with the expectation that it should help them on nearly every topic that might be termed "self-help", this is the one book they should read.

There are other self-help books that do better on individual topics discussed in this book, and it is not perfect by any means, but there is nothing like it out there for comprehensive coverage of the subjects of health, happiness, and success in life -- at least, nothing written from Adams' strictly reductionist and extremely pragmatic point of view. (i.e. Adams bites the bullet of seeing people as "moist robots" who can be manipulated effectively in simple ways, and advocates using this to manipulate yourself into doing what needs to be done.)

Its main drawback is that it has a bit too much of Adams' spasmodic dysphonia story woven into it, or really, grafted onto it. Those chapters don't add much value to the book, in my opinion, except to add a largely unnecessary dramatic arc.

But as Adams himself explains in the book, the book itself was designed to sell, and both the humorous and biographical content that would be unnecessary if the book were aimed at the self-help section of the bookstore were necessary in order to keep the book in the "humor" or "Dilbert author biography" marketing categories, where it would be financially successful.

(One of Adams' points about success in life is that one is never judged on an absolute basis or on one's own merits, but by comparison to competitors in a category, a category which you often have limited ability to choose. The book was therefore designed so as to be the best self-help book in the humor section of the bookstore, rather than the best humor book in the self-help section.)

Well, I just bumped it very near the top of my list -- a lot of the optimizations covered in the original post I've already implemented, but it can't hurt to see if I've missed any relatively low hanging fruit. Thanks!

It's an interesting book. May even be useful, if I stir myself to read it again and actually act upon a damn thing in it.

(I've read a ton of self-help books of various sorts; they tend to have similar prescriptions. So I suspect the hard part is doing any of it, not just reading it and applauding.)

One problem might be that you just forget about the self-help ideas by the time a situation where they would be useful comes up. Has anyone with an ongoing Anki habit tried making flashcards of useful-looking self-help material? Does it seem to help?

Yep, I've been doing so since about October, for the reasons you state (I was tired of reading stuff and forgetting it). Any time I run into a useful-seeming piece of advice I add it to Anki. I've also been systematically entering info from some productivity-related posts here into Anki (which is actually how I found myself here rereading this thread :D).

And looking back, it seems to have helped quite a bit. I'll write a post about it when I consider my productivity to have stabilized.

And so many of those books seem to have that same piece of advice: "Actually go out and do things; don't just read about them and forget to do them!"

Now that you wrote that, I also remember reading that advice somewhere. I just forgot it.

I wonder if this time it will be different... and perhaps, what could I do to prevent it.

Would a Memento-style tattoo be too extreme? :D

To successfully distort reality you probably need to keep two separate mental books: the false one designed to motivate yourself, and the accurate one to keep you out of trouble. If you forget that you don’t really have a “reality distortion field”, that you can’t change the territory by falsifying your map, you might make a Steve Jobs level error by, say, voluntarily forgoing lifesaving medical care because you think you can wish your problem away.

Thanks for this paragraph, I feel like you finished a thought I've been stuck in the middle of having for a year. Instead of framing the problem as a dilemma - "do I deceive myself, or not?" - simply choose whether or not to turn on your internal cheerleader as needed.

I read the chapter on diet. Adams claims that "Science has demonstrated that humans have a limited supply of willpower." This idea is important for at least this chapter. However, as Robert Kurzban has noted, it is a weak theory that cannot be falsified. I would prefer Adams to use the concept of willpower as another self-delusion to help optimize one's systems.

Adams claims that "Science has demonstrated that humans have a limited supply of willpower." This idea is important for at least this chapter. However, as Robert Kurzban has noted, it is a weak theory that cannot be falsified.

We know that humans act roughly as if they have a limited supply of willpower. That is, it is a reasonably functional model for predicting a variety of experimental outcomes. As a rule of thumb for purposes of providing self-help advice that applies in most cases, this seems pretty sufficient.

I would prefer Adams to use the concept of willpower as another self-delusion to help optimize one's systems.

IIRC, I think this falls under his general admonition not to treat the contents of the book as true, but rather as useful.

We know that humans act roughly as if they have a limited supply of willpower. That is, it is a reasonably functional model for predicting a variety of experimental outcomes. As a rule of thumb for purposes of providing self-help advice that applies in most cases, this seems pretty sufficient.

The problem is that there is some evidence indicating that believing willpower is limited makes it a limited resource. http://www.nytimes.com/2011/11/27/opinion/sunday/willpower-its-in-your-head.html?_r=0

Holding a belief that reduces your willpower in turn isn't good for self-help purposes.

Very interesting to hear this book receive so much praise, both from the OP as well as pjeby. I must admit that I have my doubts about Scott Adams; he is generally not seen as a very serious sceptic, at least in my experience. I'm not sure that "he believes many things that people on Less Wrong also believe" is a very good indication of rationality. Optimism regarding the future and liking technology does a better job of explaining much of that, I think. Still, even if he isn't a great rationalist, the book could still be really good, and you've piqued my curiosity.

I'm not sure that "he believes many things that people on Less Wrong also believe" is a very good indication of rationality. Optimism regarding the future and liking technology does a better job of explaining much of that, I think.

Actually, I don't think this is where his views and those of LW overlap so much. It's much more in the area of his Hansonian cynicism about people and their ability to know why they do what they do, and his attitude of biting the bullet regarding unpleasant truths. Those two things, I would say, are the major philosophical overlaps between what he writes in this book, and LW's outlook about things. (As opposed to mere agreement about specific facts or strategies.)

Hm, fair enough. Having read some more of his blog I agree that he does have essential rationalist traits such as being genuinely interested in the truth. He also seems to rely more on basic clear thinking than on overly theoretical arguments, which I am a fan of. However, he does occasionally say things like this:

The Karma Hypothesis is that releasing this idea to the world will put me in a good position with the universe when my book comes out and my start-up launches in beta any day now. I could use some good luck. And if karma isn't a real thing, I hope the idea will make the world a better place because I'm part of that too.

It's possible it's just a joke, but off-hand remarks like that make me feel reluctant to take anything he says at face value. Believing or even seriously considering warm fuzzy things that don't have evidence to back them up is a major red flag for me.

I agree that he does have essential rationalist traits such as being genuinely interested in the truth.

I was more under the impression that he's genuinely interested in useful.

It's possible it's just a joke

He's a professional humorist: it's his job to make jokes.

Believing or even seriously considering warm fuzzy things that don't have evidence to back them up is a major red flag for me.

Depending on your definition of "evidence" you'll either love him or hate him then. The book makes a big deal about one of his strategies for happiness, which is to deliberately spend a lot of time thinking about awesome outcomes that actually have very little probability of happening, as if they were likely to happen. The object isn't to convince himself that these things will happen, but rather to trick the "moist robot" into feeling good about the future, so that he will have the motivation to stick to his systems.

[Edit to add: I should note that I do not endorse Adams or his views in general -- I just think that this one particular book of his is extremely valuable, and more than worth working around any jokes or epistemically problematic theories. Most self-help books have far more epistemic issues than this one does, after all, and in general it's a serious mistake to overlook good instrumental advice attached to bad theories. Bad theories are the default condition for new knowledge: most good theories evolve as alternate explanations for a reasonably predictive model attached to a bad theory.]

Making jokes is fine, and I do like his style of writing for the most part (I have read earlier books of his). The issue is whether or not I can trust his claims at face value. If I read advice of his which is based on scientific claims, I don't want to be left wondering if perhaps his advice is terrible because the current body of academic knowledge points in the opposite direction. Giving self-help advice based on common sense/experience alone is not as useful to me if I have no reason to believe his common sense on the matter is any better than mine. In that case, I have to evaluate the trustworthiness of his self-help advice in terms of his overall rationality, in which case believing in nice things because they are nice to believe in (I am firmly on board with the Bayesian view of evidence, FYI) is a very bad sign. It means that if he says things like "you have to believe in yourself" I have to ask myself if he is just saying that because it sounds nice, or because it is known to be an effective strategy.

So basically, what I would like to know is how you determine that his advice is good. Is it basically a good summary of existing thought on the matter, and is that why you recommend it? Or is it that it just jibes well with your own intuition? Or does it fare well on objective measures of quality such as accuracy of scientific claims?

I don't know what point you're really trying to make here; I find it irritating when people basically say, "I'm not convinced; convince me," because it puts social pressure on me to overstate my case. (It's also an example of the trap mentioned in HPMOR where continually answering someone's interrogatories leads to the impression of subordinate status.)

I don't agree with your arguments in Adams' case, for a number of reasons, but because of the adversarial position you're taking, an onlooker would likely confuse my attacks on your errors to be in support of Adams, which isn't really my intent.

As I said before, I support the book, not Adams' writing, beliefs, or opinions in general. It contains many practical points that are highly in agreement with the LW zeitgeist, backed with extensive study citations, along with many non-obvious and unique suggestions that appear to make more sense than the usual sort of suggestions.

Many of those suggestions could be thought of as rooted in a "shut up and multiply" frame of mind, like Adams' notion that it's worth using small amounts of "bad" or high-calorie foods to tempt one to eat more good foods -- like dipping carrots in ranch dressing or cooking broccoli in regular butter -- if one would otherwise not have eaten the "good" food.

This is the type of idea one usually doesn't see in diet literature, because it appears to be against a deontological morality of "good" and "bad" foods, whereas Adams is making a consequentialist argument.

Quite a lot of the book is like that, actually, in the sense that Adams presents ideas that should occur to people -- but usually don't -- due to biases of these sorts. He talks a lot about how a big purpose of the book is to give people permission to do these things, or to set an example. (He mentions that the example of one of his coworkers becoming published was a huge influence on his future path, and that his example of being published inspired coworkers at his next job. "Permission", in the sense of peer examples or explicit encouragement, is a powerful tool of influence.)

At this point, I think I've said all I'm willing to on this subthread. If you want to know more, read the book and look at the citations yourself. The book is physically available in hundreds of libraries, and electronically available from dozens of library systems, so you needn't spend a penny to look at the references (or advice) for yourself.

I think your point about status is a bit silly: I am asking you these questions because I defer to your judgement and value your expertise highly, which should raise rather than lower your status. Nonetheless I appreciate that it's annoying to be put in the position of having to convince people to do something that's good for them, so thank you very much for taking the time to answer my questions. I think your arguments are good and it's helped me and hopefully other people reading this to decide whether the book is worth reading.

The issue is whether or not I can trust his claims at face value. If I read advice of his which is based on scientific claims, I don't want to be left wondering if perhaps his advice is terrible because the current body of academic knowledge points in the opposite direction.

The whole point of being a rationalist is to avoid taking things at face value and to always think critically about stuff you read.

For a lot of questions in that realm, the scientific data isn't conclusive.

He explained that it would be very difficult to describe to an alien what a horse is. But once you succeeded in describing the horse, it would be much easier to describe a zebra because you could do so in part by making references to the horse. Adams said he knows his advice isn't ideal for everyone, but it provides a useful template you can use to develop a plan better optimized for yourself.

I think this is a good analogy for the general usefulness of developing frameworks and vocabularies that line up with (a) reality and (b) your actual philosophy.

Seriously wondering if Scott has visited LW, based on the list of cognitive biases and the simulation argument he drops in there. Any guesses?

I would guess he has briefly visited us.