Share Your Checklists!
Checklists are powerful, and I don't use them enough. You probably don't, either.
Below are some of my own checklists. Please share your own!
I don't know how to do X.
- Check eHow, Google.
- Skim-read the For Dummies book on the subject.
- Check my social network for somebody who knows how to do X, ask the expert how to do X.
I don't understand X.
- Check Wikipedia, BetterExplained, WiseGeek.
- Read the relevant chapter(s) in a recent textbook, or find a recent review article. (See here.)
- Check my social network for someone who understands X, ask for a tutorial. Offer to buy them coffee or lunch if necessary.
I feel mentally exhausted but can't afford to sleep right now.
- Take a shower.
- Watch 10 minutes of wimp.com, cats on YouTube, IGN video reviews, or movie trailers.
- Go for a walk and listen to awesome music on high-quality headphones.
I don't want to get out of bed, but I should.
- Imagine how good a hot shower will feel, then try again to get out of bed.
- Set my phone alarm to go off in 5 minutes, then slide it across the floor to the other side of the room.
I'm procrastinating on task X.
- Give the task to someone else. (Usually this isn't possible, because I've already delegated away as much as possible.)
- Think about which part of the procrastination equation is likely causing me the most trouble, and use whichever technique aimed at that specific problem has worked best for me in the past.
- Procrastinate on task X by doing a different task that is slightly less urgent/important but still productive. (See structured procrastination.)
I'm about to send an email / post a comment of some significance.
- Is there criticism in the email or comment? Use the sandwich technique.
- Emulate my reader(s) and predict what reaction they will have. If it's not the reaction I am aiming for with this communication, restructure the communication.
(I don't do these ones nearly enough! D'oh!)
I feel sad about not doing a better job at X.
- Figure out something I can do better with regard to X, simulate in my head the steps required to execute that improvement, and if feasible then execute the improvement.
- Think about all the things I'm doing pretty well despite running on fucked-up ape-brain software and hardware.
I'm about to make a decision of some significance.
- Check consequentialism.
- Check VoI. Can I improve my decision by purchasing some piece of information relatively cheaply? (This includes running checks against various biases that may be at play, performing a more formal cost-benefit analysis, etc.)
- Sanity-check the decision with a couple people who have good decision-making skills and possess much of the relevant information.
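The VoI check can be made concrete with a toy expected-value calculation. Everything below (the payoffs, the probabilities, the "launch" scenario) is invented for illustration; it is a minimal sketch of comparing the value of a decision with and without a purchasable piece of information:

```python
# Toy value-of-information (VoI) check: is a piece of information worth buying?
# All numbers are hypothetical.

# Two possible world states, with prior probabilities.
p_market_good = 0.6
p_market_bad = 1 - p_market_good

# Payoffs (in dollars) for each action under each state.
payoffs = {
    "launch":      {"good": 10_000, "bad": -4_000},
    "dont_launch": {"good": 0,      "bad": 0},
}

def expected_value(action):
    return (payoffs[action]["good"] * p_market_good
            + payoffs[action]["bad"] * p_market_bad)

# Best we can do acting on the prior alone.
ev_without_info = max(expected_value(a) for a in payoffs)

# With perfect information we pick the best action in each state,
# weighted by how likely that state is.
ev_with_info = (p_market_good * max(payoffs[a]["good"] for a in payoffs)
                + p_market_bad * max(payoffs[a]["bad"] for a in payoffs))

voi = ev_with_info - ev_without_info
print(f"EV acting on prior alone: {ev_without_info:.0f}")
print(f"EV with perfect information: {ev_with_info:.0f}")
print(f"Value of (perfect) information: {voi:.0f}")
# A market study costing less than `voi` is worth buying; one costing
# more is not, even if it would be interesting to have.
```

Real information is rarely perfect, so this number is an upper bound on what any study is worth.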
I could go on, but... what are yours? (Now is also a good opportunity to make some checklists for yourself, based on what you think tends to work for you.)
Malice, Stupidity, or Egalité Irréfléchie?
Anyone who has decided to strike off the mainstream path has experienced this: strong admonitions and warnings against what they were doing, and pressure not to do it.
It doesn’t really matter what it is you’re trying to change. If you’re trying to become a nondrinker in a drinking culture, if you’re trying to quit eating junk food, if you’re trying to become a vegetarian or otherwise have a different diet, this will have happened to you.
If you decide to pursue a nontraditional career path (artist, entrepreneur, etc), you will have experienced this.
If you try to live a different lifestyle than the people around you – for instance, rising each day at 4:30AM and sleeping early instead of partying – you will have experienced this.
People will pressure and cajole you in many different ways to keep doing it the old way. Almost always, it will be phrased as though they’re looking after your best interest.
The specifics will vary. It could be phrased as cautious prudence – “What if your business doesn’t succeed and you don’t have a college degree? That could be really bad for you.”
It could be phrased as desiring for you to have the best way in life – “Go on, live a little, a beer won’t kill you.”
It could be simple encouragement to keep doing whatever you've set out to change, with no specific reasoning at all.
I used to wonder why this is so common. Are people stupid? Or malicious? They must be one of those two.
If someone has a preference whose expected value is a better life for them, and they really want to act on that preference, why would someone in their peer group or family want to discourage them? Is it because they have different calculations of what's valuable, even for obvious no-brainer decisions like quitting the lowest-quality junk foods? Is it because they're malicious and want to hold you back and tear you down?
I think now – neither. Rather, I think it’s an uncritical, unexamined form of desire for equality.
Attention control is critical for changing/increasing/altering motivation
I’ve just been reading Luke’s “Crash Course in the Neuroscience of Human Motivation.” It is a useful text, although there are a few technical errors and a few bits of outdated information (see [1], updated information about one particular quibble in [2] and [3]).
There is one significant missing piece, however, which is of critical importance for our subject matter here on LW: the effect of attention on plasticity, including the plasticity of motivation. Since I don’t see any other texts addressing it directly (certainly not from a neuroscientific perspective), let’s cover the main idea here.
Summary for impatient readers: focus of attention physically determines which synapses in your brain get stronger, and which areas of your cortex physically grow in size. The implications of this provide direct guidance for alteration of behaviors and motivational patterns. This is used for this purpose extensively: for instance, many benefits of the Cognitive-Behavioral Therapy approach rely on this mechanism.
Beyond the Reach of God
Followup to: The Magnitude of His Own Folly
Today's post is a tad gloomier than usual, as I measure such things. It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me. Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading. (Unless they have something to protect, including their own life.)
So! Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong. Not as the result of any explicit propositional verbal belief. More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.
Some would account this a virtue (zettai daijobu da yo), and others would say that it's a thing necessary for mental health.
But we don't live in that world. We live in the world beyond the reach of God.
Fallacies as weak Bayesian evidence
Abstract: Exactly what is fallacious about a claim like "ghosts exist because no one has proved that they do not"? And why does a claim with the same logical structure, such as "this drug is safe because we have no evidence that it is not", seem more plausible? Looking at various fallacies – the argument from ignorance, circular arguments, and the slippery slope argument – we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide greater Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.
As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as "when we have taken over the world, who's the lucky bastard who gets to rule over Antarctica" will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he's too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as backup to warn him.
Unfortunately, it's not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was "ghosts exist because no one has proved that they do not", which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim "this drug is safe, because we have no evidence that it is not". Hmm. That claim felt somewhat weak, but it didn't feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?
The argument from ignorance
Related posts: Absence of Evidence is Evidence of Absence, But Somebody Would Have Noticed!
One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.
With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.
1. Prior beliefs influence whether or not the argument is accepted.
A) I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication.
B) I've often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn't cause any side effects.
Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.
2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.
C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.
D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.
C seems more compelling than D.
3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.
E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument)
F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance)
Argument E seems more convincing than argument F, but F is somewhat convincing as well.
"Aha!" Dr. Zany exclaims. "These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!"
"Bayesian reasoning", AS-01 politely corrects.
"Yes, Bayesian! But, hmm. Exactly how are they Bayesian?"
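All three intuitions fall out of a few lines of Bayes' theorem. The sketch below is hypothetical: the priors and the chance that a single test misses a real toxic effect are invented, and a safe drug is assumed to always test negative:

```python
# Bayesian reading of the argument from ignorance:
# P(toxic | N negative tests), with all numbers invented for illustration.

def posterior_toxic(prior_toxic, p_negative_if_toxic, n_tests):
    """Probability the drug is toxic after n independent negative tests.

    p_negative_if_toxic: chance a single test misses the toxicity.
    A non-toxic drug is assumed to always test negative.
    """
    likelihood_toxic = p_negative_if_toxic ** n_tests  # P(data | toxic)
    likelihood_safe = 1.0                              # P(data | safe)
    numerator = likelihood_toxic * prior_toxic
    return numerator / (numerator + likelihood_safe * (1 - prior_toxic))

# Intuition 1: the same negative evidence barely moves a strong prior
# ("alcohol intoxicates") but substantially moves a weak one.
print(posterior_toxic(prior_toxic=0.99, p_negative_if_toxic=0.3, n_tests=1))
print(posterior_toxic(prior_toxic=0.50, p_negative_if_toxic=0.3, n_tests=1))

# Intuition 2: more negative tests -> lower probability of toxicity.
print(posterior_toxic(0.5, 0.3, 1))   # one test: still quite uncertain
print(posterior_toxic(0.5, 0.3, 50))  # fifty tests: essentially zero

# Intuition 3: under these assumptions a single positive test is
# conclusive (a safe drug never tests positive), while a single
# negative test is only weak evidence -- the posterior after one
# negative test stays well above zero.
```

Negative evidence is still evidence; it is just weak evidence, and how weak depends on the prior and on how many chances the hypothesis had to show itself.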
How to Fix Science
Like The Cognitive Science of Rationality, this is a post for beginners. Send the link to your friends!

Science is broken. We know why, and we know how to fix it. What we lack is the will to change things.
In 2005, several analyses suggested that most published results in medicine are false. A 2008 review showed that perhaps 80% of academic journal articles mistake "statistical significance" for "significance" in the colloquial meaning of the word, an elementary error every introductory statistics textbook warns against. This year, a detailed investigation showed that half of published neuroscience papers contain one particular simple statistical mistake.
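The "statistical significance" error is easy to reproduce numerically. In this hypothetical example (the test scores, standard deviation, and sample sizes are all invented), a difference far too small to matter in practice still yields a minuscule p-value simply because the sample is huge:

```python
# "Statistically significant" is not "significant": with a big enough
# sample, a trivially small effect produces a tiny p-value.
import math

def z_test_p_value(mean_a, mean_b, sd, n_per_group):
    """Two-sided p-value for a difference of two means (known SD)."""
    se = sd * math.sqrt(2.0 / n_per_group)
    z = (mean_a - mean_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# A 0.2-point difference on a 100-point test (SD = 10) is negligible in
# any practical sense: Cohen's d is only 0.02.
p = z_test_p_value(mean_a=70.2, mean_b=70.0, sd=10.0, n_per_group=1_000_000)
d = (70.2 - 70.0) / 10.0
print(f"p-value: {p:.2e}")                  # far below 0.05
print(f"effect size (Cohen's d): {d:.2f}")  # practically meaningless
```

A result can therefore be "significant" in the statistical sense while being irrelevant in the colloquial sense, which is exactly the conflation those journal articles make.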
Also this year, a respected senior psychologist published in a leading journal a study claiming to show evidence of precognition. The editors explained that the paper was accepted because it was written clearly and followed the usual standards for experimental design and statistical methods.
Science writer Jonah Lehrer asks: "Is there something wrong with the scientific method?"
Yes, there is.
This shouldn't be a surprise. What we currently call "science" isn't the best method for uncovering nature's secrets; it's just the first set of methods we've collected that wasn't totally useless like personal anecdote and authority generally are.
As time passes we learn new things about how to do science better. The Ancient Greeks practiced some science, but few scientists tested hypotheses against mathematical models before Ibn al-Haytham's 11th-century Book of Optics (which also contained hints of Occam's razor and positivism). Around the same time, Al-Biruni emphasized the importance of repeated trials for reducing the effect of accidents and errors. Galileo brought mathematics to greater prominence in scientific method, Bacon described eliminative induction, Newton demonstrated the power of consilience (unification), Peirce clarified the roles of deduction, induction, and abduction, and Popper emphasized the importance of falsification. We've also discovered the usefulness of peer review, control groups, blind and double-blind studies, plus a variety of statistical methods, and added these to "the" scientific method.
In many ways, the best science done today is better than ever — but it still has problems, and most science is done poorly. The good news is that we know what these problems are and we know multiple ways to fix them. What we lack is the will to change things.
This post won't list all the problems with science, nor will it list all the promising solutions for any of these problems. (Here's one I left out.) Below, I only describe a few of the basics.
'Facing the Singularity' podcast
My online mini-book Facing the Singularity now has a podcast. Ratings and reviews on iTunes will be much appreciated, so as to direct people toward a rationality-informed approach to intelligence explosion.
Jaan Tallinn has been passing my chapters around to people because they are concise explanations of key lemmas in the standard arguments on the need for Friendly AI. This is gratifying because it's exactly the purpose for which I'm writing them, and I encourage others to send people to these chapters as well.
(I'm currently writing the final two chapters of the online book and recording readings of the other chapters for the podcast. A volunteer is doing the audio editing.)
Brazilians, unite! and what is IERFH (portuguese)
Hi anglophones: this topic is only for Brazilians, so some may post in Portuguese, and part of this post is in Portuguese. (We will translate it to English if necessary when the time comes.)
Hello Brazilians, I'm creating this topic because some misallocated questions were posed on this one.
First, the numbers. On Less Wrong:
Me, Gust, Paulovsk, zecaurubu, Gracunha, Mexamark, dyokomizo.
From IERFH (Instituto Ética, Racionalidade, e o Futuro da Humanidade):
Leo Arruda, João Fabiano, me, Pierre, Jonatas Muller, Pablo, Lauro (paralelo), Rafael, plus 3 others.
This makes us at least 17, almost one per 10 million people (not great...).
About 10 of us are in São Paulo.
Eventually, this topic may attract some others, and we can organize a meetup.
Now, regarding the question a few of you asked, IERFH's mission:
To generate high positive long-term impact by producing knowledge and bringing together people who contribute to better thinking about the ethical questions that will define the future of humanity.
About: We are a team committed to making the world better, now and in the future. To that end, we are bringing together the Brazilian community of rationalists, utilitarians, transhumanists, and other enthusiasts, and turning ideals and theories into action. Alongside major international organizations in charity, technology, and ethics, we aim to be the vector for Brazilian efforts in these fields. IERFH operates on three fronts: the study of what is good and should be sought and preserved: Pure and Applied Ethics; the most efficient ways of reasoning and decision-making, and their most common errors: Epistemic and Practical Rationality; and finally, how to apply these fields to guarantee the full realization of all human potential: the Future of Humanity.
Rationality: To be rational is to achieve, with little expenditure of resources, the outcome you desire among all the possible scenarios that could have occurred. Epistemic rationality is the capacity to understand the actual world; practical rationality, the capacity to steer the actual world toward the desired world. Two capacities are fundamental for rationality: the capacity to avoid cognitive biases, the systematic failures of our cognition, and the capacity to let acquired knowledge reach all areas of our understanding, integrating learned information and ensuring it has a proportional effect on our lives. This IERFH community intends to guide us in that direction.
Future of Humanity: To steer the future of humanity in a desirable direction, it is necessary, first of all, to avoid the great catastrophic risks of technological origin that we create as we develop new technologies. For that, it is also necessary to understand and correct the cognitive biases our brains are prone to. Finally, with the security of our fundamental values guaranteed and our lapses of rationality corrected, we can move forward to realize humanity's full future potential, through biotechnology, nanotechnology, artificial intelligence, and global coordination.
The missing part, "Ética" (Ethics), isn't written yet. But I think you get the general idea. Think Bostrom, think utilitarianism.
If you found the description interesting, you may enjoy this post. Or, for those familiar with Yudkowsky, this one, about CEV.
We are still developing our website, which is why we don't have one yet.
Given the broadness of our scope, we are, of course, in need of new people (especially to translate), but we will only post about the group in detail on Less Wrong (in English) in a few months. If you are interested, contact me by private message. We meet on Skype, and rarely in Pinheiros, São Paulo.
I hope we can start to form a Brazilian rationalist community, both of Less Wrongers generally and within IERFH, and I thank Gust for the initiative of creating the meeting topic that prompted me to write this one.