I came across a 2015 blog post by Vitalik Buterin that contains some ideas similar to Paul Christiano's recent Crowdsourcing moderation without sacrificing quality. The basic idea in both is that it would be nice to have a panel of trusted moderators carefully pore over every comment and decide on its quality, but since that is too expensive, we can instead use some tools to predict moderator decisions, and have the trusted moderators look at only a small subset of comments in order to calibrate the prediction tools. In Paul's proposal the prediction tool is machine learning (mainly using individual votes as features), and in Vitalik's proposal it's prediction markets where people bet on what the moderators would decide if they were to review each comment.
It seems worth thinking about how to combine the two proposals to get the best of both worlds. One fairly obvious idea is to let people both vote on comments as an expression of their own opinions, and also place bets about moderator decisions, and use ML to set baseline odds, which would reduce how much the forum would have to pay out to incentivize accurate prediction markets. The hoped-for outcome is that the ML algorithm would make correct decisions most of the time, but people could bet against it when they see it making mistakes, and moderators would review the comments with the greatest disagreement between ML and bettors, or between different bettors. Another part of Vitalik's proposal is that each commenter has to make an initial bet that moderators would decide that their comment is good. The article notes that such a bet can also be viewed as a refundable deposit. Such forced bets / refundable deposits would help solve a security problem with Paul's ML-based proposal.
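A minimal sketch of the combined scheme (all names, data, and the budget parameter here are hypothetical, just to make the shape of the idea concrete): an ML model sets baseline odds that moderators would approve each comment, bettors adjust those odds, and the comments where the market disagrees most with the model get queued for human review.

```python
def review_queue(comments, model_prob, market_prob, budget):
    """Rank comments by how strongly the betting market disagrees with
    the ML baseline; moderators review only the top `budget` comments."""
    disagreement = {c: abs(model_prob[c] - market_prob[c]) for c in comments}
    return sorted(comments, key=lambda c: disagreement[c], reverse=True)[:budget]

# Hypothetical data: the model and the market mostly agree, except on c2,
# where bettors think the model is wrong -- so c2 goes to the moderators.
comments = ["c1", "c2", "c3"]
model_prob = {"c1": 0.9, "c2": 0.5, "c3": 0.8}     # ML baseline odds of approval
market_prob = {"c1": 0.85, "c2": 0.1, "c3": 0.75}  # odds after people bet
print(review_queue(comments, model_prob, market_prob, budget=1))  # ['c2']
```

The moderator verdicts on the reviewed comments then become fresh training labels for the model and settle the bets, closing the calibration loop both proposals rely on.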
Are there better ways to combine these prediction tools to help with forum moderation? Are there other prediction tools that can be used instead or in addition to these?
Introduction: Here's a misconception about World War II that I think is harmful and I don't see refuted often enough.
Misconception: In 1941, Hitler was sitting pretty with most of Europe conquered and no huge difficulties on the horizon. Then, due to his megalomania and bullshit ideology, he decided to invade Russia. This was an unforced error of epic proportions. It proved his undoing, like that of Napoleon before him.
Rebuttal: In hindsight, we think of the Soviet Union as a superpower and military juggernaut which you'd be stupid to go up against. But this is not how things looked to the Germans in 1941. Consider World War I. In 1917–1918, Germany and Austria had defeated Russia at the same time as they were fighting a horrifyingly bloody war with France and Britain - and another devastating European war with Italy. In 1941, Italy was an ally, France had been subdued and Britain wasn't in much of a position to exert its strength. Seemingly, the Germans had much more favorable conditions than in the previous round. And they won the previous round.
In addition, the Germans were not crazy to think that the Red Army was a bit of a joke. The Russians had had their asses handed to them by Poland in 1920 and in 1939–1940 it had taken the Russians three months and a ridiculous number of casualties to conquer a small slice of Finland.
Nevertheless, Russia did have a lot of manpower and a lot of equipment (indeed, far more than the Germans had thought) and was a potential threat. The Molotov-Ribbentrop pact was obviously cynical and the Germans were not crazy to think that they would eventually have to fight the Russians. Being the first to attack seemed like a good idea and 1941 seemed like a good time to do it. The potential gains were very considerable. Launching the invasion was a rational military decision.
Why this matters: The idea that Hitler made his most fatal decision for irrational reasons feeds into the conception that evil and irrationality must go hand in hand. It's the same kind of thinking that makes people think a superintelligence would automatically be benign. But there is no fundamental law of the universe which prevents a bad guy from conquering the world. Hitler lost his war with Russia for perfectly mundane and contingent reasons like, “the communists had been surprisingly effective at industrialization.”
I've always appreciated the motto, "Raising the sanity waterline." Intentionally raising the ambient level of rationality in our civilization strikes me as a very inspiring and important goal.
It occurred to me some time ago that the "sanity waterline" could be more than just a metaphor, that it could be quantified. What gets measured gets managed. If we have metrics to aim at, we can talk concretely about strategies to effectively promulgate rationality by improving those metrics. A "rationality intervention" that effectively improves a targeted metric can be said to be effective.
It is relatively easy to concoct or discover second-order metrics. You would expect a variety of metrics to respond to the state of ambient sanity. For example, I would expect that, all things being equal, preventable deaths should decrease when overall sanity increases, because a sane society acts to effectively prevent the kinds of things that lead to preventable deaths. But of course other factors may also cause these contingent measures to fluctuate whichever way, so it's important to remember that these are only indirect measures of sanity.
The UN collects a lot of different types of data. Perusing their database, it becomes obvious that there are a lot of things that are probably worth caring about but which have only a very indirect relationship with what we could call "sanity". For example, one imagines that GDP would increase under conditions of high sanity, but that'd be a pretty noisy measure.
Take five minutes to think about how one might measure global sanity, and maybe brainstorm some potential metrics. Part of the prompt, of course, is to consider what we could mean by "sanity" in the first place.
This is my first pass at brainstorming metrics which may more-or-less directly indicate the level of civilizational sanity:
- (+) Literacy rate
- (+) Enrollment rates in primary/secondary/tertiary education
- (-) Deaths due to preventable disease
- (-) QALYs lost due to preventable causes
- (+) Median level of awareness about world events
- (-) Religiosity rate
- (-) Fundamentalist religiosity rate
- (-) Per-capita spent on medical treatments that have not been proven to work
- (-) Per-capita spent on medical treatments that have been proven not to work
- (-) Adolescent fertility rate
- (+) Human development index
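As an illustration of how metrics like these might be rolled into a single number (the weights, data, and normalization ranges below are invented; the unweighted-mean scheme is my own, not a proposal from the post), one could normalize each metric to [0, 1], flip the ones where lower is better, and average:

```python
def sanity_index(metrics):
    """Crude composite: normalize each metric to [0, 1], flip the ones
    where lower is better, and take the unweighted mean.
    Each metric is (value, range_min, range_max, higher_is_better)."""
    scores = []
    for value, lo, hi, higher_is_better in metrics:
        norm = (value - lo) / (hi - lo)
        scores.append(norm if higher_is_better else 1.0 - norm)
    return sum(scores) / len(scores)

# Hypothetical country: 90% literacy (+) and 20 preventable deaths
# per 100k (-), both normalized against an assumed 0-100 range.
print(round(sanity_index([(90, 0, 100, True), (20, 0, 100, False)]), 3))  # 0.85
```

Any real version would need defensible ranges and weights; the point is only that once the metrics are pinned down, the composite is trivial to compute and track over time.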
It's potentially more productive (and probably more practically difficult) to talk concretely about how best to improve one or two of these metrics via specific rationality interventions, than it is to talk about popularizing abstract rationality concepts.
Sidebar: The CFAR approach may yield something like "trickle down rationality", where the top 0.0000001% of rational people are selected and taught to be even more rational, and maybe eventually good thinking habits will infect everybody in the world from the top down. But I wouldn't bet on that being the most efficient path to raising the global sanity waterline.
As to the question of the meaning of "sanity", it seems to me that this indicates a certain basic package of rationality.
In Eliezer's original post on the topic, he seems to suggest a platform that boils down to a comprehensive embrace of probability-based reasoning and reductionism, with enough caveats and asterisks applied to that summary that you might as well go back and read his original post to get his full point. The idea was that with a high enough sanity waterline, obvious irrationalities like religion would eventually "go underwater" and cease to be viable. I see no problem with any of the "curricula" Eliezer lists in his post.
It has become popular within the rationalsphere to push back against reductionism, positivism, Bayesianism, etc. While such critiques of "extreme rationality" have an important place in the discourse, I think for the sake of this discussion, we should remember that the median human being really would benefit from more rationality in their thinking, and that human societies would benefit from having more rational citizens. Maybe we can all agree on that, even if we continue to disagree on, e.g., the finer points of positivism.
"Sanity" shouldn't require dogmatic adherence to a particular description of rationality, but it must include at least a basic inoculation of rationality to be worthy of the name. The type of sanity that I would advocate for promoting is this more "basic" kind, where religion ends up underwater, but people are still socially allowed to be contrarian in certain regards. After all, a sane society is aware of the power of conformity, and should actively promote some level of contrarianism within its population to promote a diversity of ideas and therefore avoid letting itself become stuck on local maxima.
I'm writing this to get information about the lesswrong community and whether it's worth engaging with. I'm a bit out of the loop in terms of what the LW community is like and whether it can maintain multiple viewpoints (and how well known the criticisms are).
The TL;DR is that I have problems with treating computation in an overly formal fashion. The more pragmatic philosophy suggested here implies (but doesn't prove) that AI will not be as powerful as expected, since the physicality of computation is important and instantiating computation in a physical fashion is expensive.
I think all the things I will talk about are interesting, but I don't see the sufficiency of them when considering AI running in the real world in real computers.
I'm growing increasingly convinced that the unfortunate correlations between types of people and types of arguments lead to persistent biases in uncovering actual knowledge.
As an example, MR wrote this article (which they just linked to again today) on Ben Carson in 2015/11. Cowen's argument is that, while perhaps implausible (though it may have tenuous support), Carson's belief that the pyramids were used as grain storage is no more unrealistic than any other religious belief. If anything, that singular belief is relatively realistic compared to more widely accepted miracles in Christianity or similar religions.
So why does he get so much flak for it? Cowen argues that he shouldn't, and that the criticism is unfounded and irrational/inconsistent. Is it? He obviously has a fair point. The downside is that even though the belief, when analyzed, isn't particularly ridiculous, we all share an estimation/expectation that people who hold this type of belief (let's call them Class B religious beliefs) ARE particularly ridiculous.
This then creates a new equilibrium, where only those people who take their Class B religious beliefs *very* seriously will share them. As a result when Carson says the pyramids have grain, our impulse is "wacky!" But when Obama implies he believes Jesus rose from the dead, our impulse is "Boilerplate -- he probably doesn't give it too much thought -- it's a typical belief, which he might not even believe."
As a result we get this constant mismatch between the type of person to hold a belief, and the truth value of the belief itself. I don't mean to only bring up controversial examples, but it's no surprise that this is where these examples thrive. HBD is another common one. While there is something there, which after a fair amount of reading I suspect is overlooked, the type of person to be really passionate about HBD is (more often than not, with exceptions), not the type of person you want over for dinner.
This can suck for people like us. On one hand we want to evaluate individual pieces of information, models, or arguments based on how they map to reality. On the other hand, if we advocate or argue for information that is correlated with an unsavory type of person, we are classified as that type of person. In this sense, for someone whose primary objectives are good social standing and no risk to their career, it would be irrational to publicly blog about controversial topics. It's funny: Scott Alexander was retweeted by Ann Coulter for his SSC post on Trump. He was thrilled, but imagine if he were an aspiring professor. I think he would probably still be fine, because his unique level of genius would still shine through, but lately professors I know who have non-mainstream political views have stopped sharing them publicly for fear of controversy.
This is a topic I think about a lot, and one I now notice becoming a bigger issue in the US. I wonder how to respond. The tension between rationally evaluating an idea and the social cost of publicly sharing that evaluation is growing.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
As a child I took a philosophy course as an extracurricular activity. In it the teacher explained to us the notion of schools of philosophical thought. According to him, classifying philosophers as adhering either to school A or school B is typical of Anglo thought.
It deeply annoys me when Americans talk about Democrat and Republican political thought and suggest that you are either a Democrat or a Republican. The notion that allegiance to one political camp is supposed to dictate your political beliefs feels deeply wrong.
A lot of Anglo high schools do policy debating. The British do it a bit differently than the Americans, but in both cases it boils down to students having to defend a particular side.
Traditionally there's nearly no debating at German high schools.
When writing political essays in German school there’s a section where it's important to present your own view. Your own view isn't supposed to be one that you simply copy from another person. Good thinking is supposed to provide a sophisticated perspective on the topic that is the synthesis of arguments from different sources instead of following a single source.
That's because German intellectual thought has the ideal of 'Bildung'. In Imprisoned in English, Anna Wierzbicka tells me that 'Bildung' is a particularly German construct and that the word isn't easily translatable into other languages. The nearest English word is 'education'. 'Bildung' can also be translated as 'creation'. It's about creating a sophisticated person who is more developed than the average person on the street who doesn't have 'Bildung'. Having 'Bildung' signals high status.
According to this ideal you learn about different viewpoints and then you develop a sophisticated opinion. Not having a sophisticated opinion is low class. In liberal social circles in the US a person who agrees with what the Democratic party does at every point in time would have a respectable political opinion. In German intellectual life that person would be seen as a credulous low status idiot that failed to develop a sophisticated opinion. A low status person isn't supposed to be able to fake being high status by memorizing the teacher's password.
If you ask me the political question "Do you support A or B?", my response is: "Well, I want neither A nor B. There are these reasons for A, and those reasons for B. My opinion is that we should do C, which solves those problems better and takes more concerns into account." There is no high-status option A such that I can signal status simply by saying that I'm in favour of A.
How does this relate to non-political opinions? In Anglo thought philosophic positions belong to different schools of thought. Members belonging to one school are supposed to fight for their school being right and being better than the other schools.
If we take the perspective of hardcore materialism, a statement like: "One of the functions of the heart is to pump blood" wouldn't be a statement that can be objectively true because it's teleology. The notion of function isn't made up of atoms.
From my perspective as a German there's little to be gained by subscribing to the hardcore materialist perspective. It makes a lot of practical sense to say that such a statement can be objectively true. I get the more sophisticated view of the world that I want to have: not only statements about arrangements of atoms can be objectively true, but also statements about the functions of organs. That move is high status in German intellectual discourse, but it might be low status in Anglo discourse because it can be seen as betraying the school of materialism.
Of course that doesn't mean that no Anglo accepts that the above statement can be objectively true. On the margin German intellectual norms make it easier to accept the statement as being objectively true. After Hegel you might say that thesis and antithesis come together to a synthesis instead of thesis or antithesis winning the argument.
The German Wikipedia page for "continental philosophy" tells me that the term is commonly used in English philosophy. According to the German Wikipedia it's mostly used derogatorily. From the German perspective the battle between "analytic philosophy" and "continental philosophy" is not a focus of the debate. The goal isn't to decide which school is right but to develop sophisticated positions that describe the truth better than answers that you could get by memorizing the teacher's password.
One classic example of an unsophisticated position that's common in analytic philosophy is the idea that all intellectual discourse is supposed to be based on logic. In Is semiotics bullshit?, PhilGoetz stumbles upon a professor of semiotics who claims: "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis."
That's seen as a strong violation of how reasoning based on logical positivism is supposed to work. It violates the memorized teacher's password. But is it true? To answer that we have to ask what 'logical basis' means. David Chapman analyzes the notion of logic in Probability theory does not extend logic. In it he claims that in academic philosophical discourse the word 'logic' means predicate logic.
Predicate logic can make claims such as:
(a) All men are mortal.
(b) Socrates is a man.
(c) Socrates is mortal.
According to Chapman the key trick of predicate logic is logical quantification. That means every claim has to be able to be evaluated as true or false without looking at the context.
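To make the quantification point concrete, here is a toy evaluation of the syllogism over an explicit domain (my own illustration, not Chapman's): each claim reduces to a plain truth value by checking every individual, with no appeal to context.

```python
# A tiny explicit domain of individuals and two predicates over it.
domain = {"Socrates", "Plato"}
is_man = {"Socrates": True, "Plato": True}
is_mortal = {"Socrates": True, "Plato": True}

# (a) All men are mortal: for all x, Man(x) -> Mortal(x)
premise_a = all((not is_man[x]) or is_mortal[x] for x in domain)
# (b) Socrates is a man.
premise_b = is_man["Socrates"]
# (c) Socrates is mortal -- guaranteed whenever (a) and (b) hold.
conclusion = is_mortal["Socrates"]

print(premise_a, premise_b, conclusion)  # True True True
```

Every statement here is context-free: it is true or false given only the domain and the predicates, which is exactly the property the rats-and-humans example below lacks.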
We want to know whether a chemical substance is safe for human use. Unfortunately our ethical review board doesn't let us test the substance on humans. Fortunately they allow us to test the substance on rats. Hurray, the rats survive.
(a) The substance is safe for rats.
(b) Rats are like humans.
(c) The substance is safe for humans.
The problem with `Rats are like humans` is that it isn’t a claim that’s simply true or false.
The truth value of the claim depends on what conclusions you want to draw from it. Predicate logic can only evaluate the statement as true or false; it can't judge whether it's an appropriate analogy, because that requires looking at the deeper meaning of `Rats are like humans` to decide whether rats are like humans in the context we care about.
Do humans sometimes make mistakes when they try to reason by analogy? Yes, they do. At the same time they also come to true conclusions by reasoning through analogy. Saying "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis." sounds fancy, but if we reasonably define the term 'logical basis' as being about predicate logic, it's true.
Does that mean that you should switch from the analytic school to the school of semiotics? No, that's not what I'm arguing. I argue that just as you shouldn't let tribalism influence yourself in politics and identify as Democrat or Republican you should keep in mind that philosophical debates, just as policy debates, are seldom one-sided.
Daring to slay another sacred cow: maybe we also shouldn't go around thinking of ourselves as Bayesians. If you are on the fence on that question, I encourage you to read David Chapman's splendid article referenced above.
[Epistemic status: quite speculative. I've attended a CFAR workshop including a lesson on double crux, and found it more counterintuitive than I expected. I ran my own 3-day event going through the CFAR courses with friends, including double crux, but I don't think anyone started doing double crux based on my attempt to teach it. I have been collecting notes on my thoughts about double crux so as to not lose any; this is a synthesis of some of those notes.]
This is a continuation of my attempt to puzzle at Double Crux until it feels intuitive. While I think I understand the _algorithm_ of double crux fairly well, and I _have_ found it useful when talking to someone else who is trying to follow the algorithm, I haven't found that I can explain it to others in a way that causes them to do the thing, and I think this reflects a certain lack of understanding on my part. Perhaps others with a similar lack of understanding will find my puzzling useful.
Here's a possible argument for double crux as a way to avoid certain conversational pitfalls. This argument is framed as a sort of "diff" on my current conversational practices, which are similar to those mentioned by CCC. So, here is approximately what I do when I find an interesting disagreement:
1. We somehow decide who states their case first. (Usually, whoever is most eager.) That person gives an argument for their side, while checking for understanding from the other person and looking for points of disagreement with the argument.
2. The other person asks questions until they think they understand the whole argument; or, sometimes, skip to step 3 when a high-value point of disagreement is apparent before the full argument is understood.
3. Recurse into step 1 for the most important-seeming point of disagreement in the argument offered. (Again the person whose turn it is to argue their case will be chosen "somehow"; it may or may not switch.)
4. If that process is stalling out (the argument is not understood by the other person after a while of trying, or the process is recursing into deeper and deeper sub-points without seeming to get closer to the heart of the disagreement), switch roles; the person who has explained the least of their view should now give an argument for their side.
- In the best case, they accept your argument, perhaps after a little recursion into sub-arguments to clarify.
- In a very good case, the process finds a lot of common ground (in the form of parts of the argument which are agreed upon) and a precise point of disagreement, X, such that if either person changed their mind about X they'd change their mind about the whole. They can now dig into X in the same way they dug into the overall disagreement, with confidence that resolving X is a good way to resolve the disagreement.
- In a slightly less good case, a precise disagreement X is found, but it turns out that the argument you gave wasn't your entire reason for believing what you believe. IE, you've given an argument which you believe to be sufficient to establish the point, but not necessary. This means resolving the point of disagreement X only potentially changes their mind. At best you may find that your argument fails, in which case you'd give another argument.
- In a partial failure case, all the points of disagreement show up right away; IE, you fail to find any common ground for arguments to gain traction. It's still possible to recurse into points of disagreement in this case, and doing so may still be productive, but often this is a sign that you haven't understood the other person well enough or that you've put them on the defensive so that they're biased to disagree.
- In a failure case, you keep digging down into reasons why they don't buy one point after another, and never really get anywhere. You don't make contact with anything which would change their mind, because you're digging into your reasons rather than theirs. Your search for common ground is failing.
- In a failure case, you've made a disingenuous argument which your motivated cognition thinks they'll have a hard time refuting, but which is unlikely to convince them. A likely outcome is a long, pointless discussion or an outright rejection of the argument without any attempt to point at specific points of disagreement with it.
I think double crux can be seen as an attempt to modify the process of 1-4 in a way which attempts to make the better outcomes more common. You can still give your same argument in double crux, but you're checking earlier to see whether it will convince the other person. Suppose you have an argument for the disagreement D:
A implies B.
B implies C.
C implies D.
In my algorithm, you start by checking for agreement with "A". You then check for agreement with "A implies B". And so on, until a point of disagreement is reached. In double crux, you are helping the other person find cruxes by suggesting cruxes for them. You can ask "If you believed C, would you believe D?" Then, if so, "If you believed B, would you believe D?" and so on. Going through the argument backwards like this, you only keep going for so long as you have some assurance that you've connected with their model of D. Going through the argument in the forward direction, as in my method, you may recurse into further and further sub-arguments starting at a point of disagreement like "B implies C" and find that you never make contact with something in their model which has very much to do with their disbelief of D. Also, looking for the other person's cruxes encourages honest curiosity about their thinking, which makes the whole process go better.
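The contrast between the two directions can be sketched as two search procedures over the chain A implies B implies C implies D (the function names and the predicates below are my own, purely illustrative):

```python
def forward_check(steps, agrees):
    """Forward method: walk the argument from A onward and return the
    first step the other person disputes (it may be far from their crux)."""
    for step in steps:
        if not agrees(step):
            return step
    return None

def backward_cruxes(steps, would_change_mind):
    """Double-crux style: walk backwards from the conclusion, keeping only
    steps that would actually change the other person's mind about D."""
    cruxes = []
    for step in reversed(steps):
        if not would_change_mind(step):
            break  # lost contact with their model of D; stop digging
        cruxes.append(step)
    return cruxes

steps = ["A", "A implies B", "B implies C", "C implies D"]
print(forward_check(steps, lambda s: s != "A implies B"))
# -> A implies B  (a disputed step, but maybe irrelevant to their view of D)
print(backward_cruxes(steps, lambda s: s in {"C implies D", "B implies C"}))
# -> ['C implies D', 'B implies C']  (only steps still connected to D)
```

The key structural difference is the `break`: the backward walk stops as soon as a step no longer bears on the other person's belief in D, which is exactly the guarantee the forward recursion lacks.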
Furthermore, you're looking for your own cruxes at the same time. So, you're more likely to think about arguments which are critical to your belief, and much less likely to try disingenuous arguments designed to be merely difficult to refute.
A quote from Feynman's Cargo Cult Science:
The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I’m not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
This kind of "bending over backwards to show how maybe you're wrong" (in service of not fooling yourself) is close to double crux. Listing cruxes puts us in the mindset of thinking about ways we could be wrong.
On the other hand, I notice that in a blog post like this, I have a hard time really explaining how I might be wrong before I've explained my basic position. It seems like there's still a role for making arguments forwards, rather than backwards. In my (limited) experience, double crux still requires each side to explain themselves (which then involves giving some arguments) before/while seeking cruxes. So perhaps double crux can't be viewed as a "pure" technique, and really has to be flexible, mixed with other approaches including the one I gave at the beginning. But I'm not sure what the best way to achieve that mixture is.
The following meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Boston Secular Solstice: 09 December 2016 08:00PM
- Denver Area LW December Meetup: 06 December 2016 07:00PM
- Moscow social meetup: Party games!: 04 December 2016 02:00PM
- NY Solstice 2016 - The Story of Smallpox: 17 December 2016 06:00PM
- San Francisco Meetup: Cooking: 05 December 2016 06:15PM
- San Jose Meetup: Park Day (X): 04 December 2016 03:00PM
- Seattle Secular Solstice: 10 December 2016 04:00PM
- Sydney Rationality Dojo - December 2016: 04 December 2016 04:00PM
- Vienna Meetup: 17 December 2016 03:00PM
- Washington, D.C.: Fun & Games: 04 December 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Prague, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
In a recent Facebook post, Eliezer said:
You can believe that most possible minds within mind design space (not necessarily actual ones, but possible ones) which are smart enough to build a Dyson Sphere, will completely fail to respond to or care about any sort of moral arguments you use, without being any sort of moral relativist. Yes. Really. Believing that a paperclip maximizer won't respond to the arguments you're using doesn't mean that you think that every species has its own values and no values are better than any other.
And so I think part of the metaethics sequence went over my head.
I should re-read it, but I haven't yet. In the meantime I want to give a summary of my current thinking and ask some questions.
My current take on morality is that, unlike facts about the world, morality is a question of preference. The important caveats are:
- The preference set has to be consistent. Until we develop something akin to CEV, humans are probably stuck with a pre-morality where they behave and think over time in contradictory ways, and at the same time believe they have a perfectly consistent moral system.
- One can be mistaken about morality, but only in the sense that, unknown to them, they actually hold values different from what the deliberative part of their mind thinks it holds. An introspection failure or a logical error can cause the mistake. Once we identify ground values (not that it's effectively feasible), "wrong" is a type error.
- It is OK to fight for one's morality. Just because it's subjective doesn't mean one can't push for it. So "moral relativism" in the strong sense isn't a consequence of morality being a preference. But "moral relativism" in the weak, technical sense (it's subjective) is.
I am curious about the following :
- How does your current view differ from what I've written above?
- How exactly does that differ from the thesis of the metaethics sequence? In the same post, Eliezer also said : "and they thought maybe I was arguing for moral realism...". I did kind of think that, at times.
- I specifically do not understand this: "Believing that a paperclip maximizer won't respond to the arguments you're using doesn't mean that you think that every species has its own values and no values are better than any other." Unless "better" is used in the sense of "better according to my morality", but that would make the sentence barely worth saying.
There seems to be real momentum behind this attempt at reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought it might be worthwhile to open a thread where people can suggest how we might expand the scope of what people write about, so that we have sufficient content.
Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.
- I'll do it at some point.
- I'll answer this message later.
- I could try this sometime.
For most people, all of these thoughts have the same result: the thing in question likely never gets done - or if it does, it's only after remaining undone for a long time and causing a considerable amount of stress. Leaving the "when" ambiguous means that there isn't anything that would propel you into action.
What kinds of thoughts would help avoid this problem? Here are some examples:
- When I find myself using the words "later" or "at some point", I'll decide on a specific time when I'll actually do it.
- If I'm given a task that would take under five minutes, and I'm not in a pressing rush, I'll do it right away.
- When I notice that I'm getting stressed out about something that I've left undone, I'll either do it right away or decide when I'll do it.
Compare these with vague intentions that leave the trigger unspecified:
- I'm going to get more exercise.
- I'll spend less money on shoes.
- I want to be nicer to people.
The same intentions, rephrased as concrete trigger-action plans:
- When I see stairs, I'll climb them instead of taking the elevator.
- When I buy shoes, I'll write down how much money I've spent on shoes this year.
- When someone does something that I like, I'll thank them for it.
A good trigger-action plan (TAP) satisfies three criteria:
- The trigger is clear. The "when" part is a specific, visible thing that's easy to notice. "When I see stairs" is good, "before four o'clock" is bad (when before four, exactly?). [v]
- The trigger is consistent. The action is something that you'll always want to do when the trigger is fulfilled. "When I leave the kitchen, I'll do five push-ups" is bad, because you might not have the chance to do five push-ups each time you leave the kitchen. [vi]
- The TAP furthers your goals. Make sure the TAP is actually useful!
[i] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American Psychologist, 54(7), 493.
This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.
The best place to track changes to the codebase is the GitHub LW issues page.
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.