Previously, I wrote "Why I'm Skeptical About Unproven Causes (And You Should Be Too)" and a follow-up essay, "What Would It Take to Prove a Speculative Cause?". Both sparked a lot of discussion on LessWrong, on the Effective Altruist blog, and on my own blog, as well as many hours of in-person conversation.
After all this extended conversation, I've changed my mind on a few things, which I'll elaborate on here. I hope in doing so I can (1) clarify my original position and (2) explain where I now stand in light of all the debate, so people can engage with my current ideas rather than ideas I no longer hold. My opinions tend to change quickly, so I think updates like this will help.
My Argument, As It Currently Stands
If I were to communicate one main point of my essay, based on what I believe now, it would be this: when you're in a position of high uncertainty, the best response is a strategy of exploration rather than a strategy of exploitation.
What I mean is that given the high uncertainty of impact we see now, especially with regard to the far future, we're better off trying to find more information about impact and reduce our uncertainty (exploration) than pursuing whatever currently looks best (exploitation).
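The explore/exploit framing here is the classic multi-armed bandit problem from decision theory. As a rough sketch (my illustration, with made-up payoff numbers, not anything from the original essays), an epsilon-greedy strategy reserves a fixed fraction of its resources for gathering information and spends the rest on whatever currently looks best:

```python
import random

def epsilon_greedy(true_payoffs, epsilon=0.1, rounds=10000, seed=0):
    """Toy multi-armed bandit: each 'cause' pays off with an unknown probability.

    With probability epsilon we explore (fund a random cause); otherwise we
    exploit (fund the cause with the best observed track record so far).
    """
    rng = random.Random(seed)
    n = len(true_payoffs)
    pulls, wins = [0] * n, [0] * n
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in pulls:
            arm = rng.randrange(n)  # explore: buy information
        else:
            # exploit: back the cause with the best empirical payoff rate
            arm = max(range(n), key=lambda i: wins[i] / pulls[i])
        pulls[arm] += 1
        wins[arm] += 1 if rng.random() < true_payoffs[arm] else 0
    return pulls

# Three hypothetical causes whose true impact per dollar is initially unknown;
# the middle one is actually best, and ends up receiving most of the funding.
pulls = epsilon_greedy([0.2, 0.5, 0.3])
```

The point of the sketch is only that under high uncertainty, deliberately spending some resources on exploration is what eventually lets exploitation concentrate on the genuinely best option.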
The implications of this are that:
- We should develop more of an attitude that our case for impact is neither clear nor proven.
- We should apply more skepticism to our causes and more self-skepticism to our personal beliefs about impact.
- We should use the language of "information" and "exploration" more often than the language of "impact" and "exploitation".
- We should focus more on specific and concrete attempts to ensure we're making progress and to figure out our impact (whether through surveys, experiments, soliciting external review from relevant experts, etc.).
- We should focus more on transparency about what we're doing and thinking and why, when relevant and not exceedingly costly.
And to be clear, here are specific statements that address misconceptions about what I have argued:
- I do think it is wrong to ignore unproven causes completely and stop pursuing them.
- I don't think we should be donating everything to the Against Malaria Foundation instead of speculative causes.
- I don't think the Against Malaria Foundation has the highest impact of all current opportunities to donate.
- I do think we can say useful things about the far future.
- I don't think the correct way to think about high uncertainty and low evidence is to "suspend judgement". Rather, I think we should make a judgement that we expect the estimate to be much lower than initially claimed in light of all the things I've said earlier about the history of past cost-effectiveness estimates.
And, lastly, if I were to make a second important point, it would be this: it's difficult to find good opportunities to buy information. It's easy to think that any donation to an organization will generate good information, or that we'll automatically make progress just by working. I think some element of random pursuit is important (see below), but all things considered, I think we're doing too much random pursuit right now.
Specific Things I Changed My Mind About
Here are the specific places where I changed my mind:
I used to think donating to AMF, at least in part, was important for me. Now I don't.
I underestimated the power of exploring and the existing opportunities to do so, and I now think 100% of my donations should go toward trying to assess impact. I've been persuaded that quite a lot of money is already going to AMF and that more money may not be needed as quickly as thought, so for the time being it's probably more appropriate to save, and then donate to opportunities to buy information as they come up.
I now agree that there are relevant economies of scale in pursuing information that I hadn't taken into account.
What I mean is that it might not be appropriate for individuals to work on purchasing information themselves. Doing so could split up the time of organizations unnecessarily as they provide information to many different people. Also, many people don't have the time to do this themselves.
I think this has two implications:
- We should put more trust in larger-scale organizations that are doing exploring, like GiveWell, and pool our resources.
- Individuals should work harder to put the relevant information they gather online.
I was partially mistaken in thinking about how to "prove" speculative causes.
I think there was some value in my essay "What Would It Take to Prove a Speculative Cause?" because it talked concretely about strategies some organizations could take to get more information about their impact.
But the overall concept is mistaken -- there is no threshold of evidence that a speculative cause needs to cross, and I was wasting my time trying to come up with one. Instead, I think it's appropriate to continue doing expected value calculations, as long as we maintain a self-skeptical, pro-measurement mindset.
I had previously not fully taken into account the cost of acquiring further information.
The important question in value of information is not "what does this information get me in terms of changing my beliefs and actions?" but actually "how valuable is this information?", as in, do the benefits of gathering this information outweigh all the costs? In some cases, I think the benefits of further proving a cause probably don't outweigh the costs.
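That cost-benefit question can be made concrete with the standard expected-value-of-perfect-information calculation. The numbers below are invented purely for illustration:

```python
def evpi(known_value, uncertain_outcomes):
    """Expected value of perfect information for a two-option choice.

    known_value: impact of the proven option.
    uncertain_outcomes: list of (probability, impact) scenarios for the
    speculative option.
    """
    # Acting now: pick whichever option looks better in expectation.
    ev_speculative = sum(p * v for p, v in uncertain_outcomes)
    act_now = max(known_value, ev_speculative)
    # Acting after learning the truth: pick the better option in each scenario.
    act_informed = sum(p * max(known_value, v) for p, v in uncertain_outcomes)
    return act_informed - act_now

# A proven cause worth 10 units of impact per dollar, vs. a speculative one
# worth 30 with probability 0.5 and 0 otherwise.
print(evpi(10, [(0.5, 30), (0.5, 0)]))  # 5.0
```

Here a definitive study is worth at most 5 units of impact: if gathering the information costs more than that, the benefits of further proving the cause don't outweigh the costs.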
For one possibly extreme example: I don't know the rationale for running a 23rd randomized controlled trial on anti-malaria bednets after the previous 22, but that RCT would likely have to test something more specific than the general effectiveness of bednets to justify the high cost of doing an RCT.
Likewise, there are costs to organizations in devoting resources to measuring themselves and being more transparent. I don't think these costs are particularly high, or that they defeat the idea of devoting more resources to this area, but I hadn't really taken them into account before.
I'm slightly more in favor of acting randomly (trial and error).
I still think it's difficult to acquire good value of information, and it's very easy to get caught "spinning our wheels" in research, especially when that research has no clear feedback loops. One example, perhaps somewhat controversial, would be the multi-century lack of progress on some problems in philosophy (think meta-ethics) as an example of what can happen to a field when there aren't good feedback loops to ground it.
However, I underestimated the amount of information that comes forward from just doing one's normal activities. The implication is that it's more worthwhile than I initially thought to fund speculative causes just to have them continue to scale and operate.
Any intuition-dominant thinker who's struggled with math problems, or logic-dominant thinker who's struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style, and soon it ended up dominating -- granting me the luxury of experiencing both kinds of struggle. I eventually learned to apply the thinking style better optimized for the problem I was facing. Looking back, I realized why I kept sticking to one extreme.
I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.
The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to foster a thinking style better for learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better for navigating social situations. Environment can be changed to help develop certain thinking styles, but this should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful, and frustrating if you're unprepared. (Such a painful experience is part of why these biases cause a positive feedback loop: it makes us avoid environments that require the opposite thinking style.)
Despite genetic predisposition and environmental circumstances, there's room for improvement, and exposing these biases and learning to account for them is a great first step.
Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.
Overlooking crucial details
Details matter for understanding technical concepts. Overlooking a single word or a sentence's structure can cause complete misunderstanding -- a common blunder for intuition thinkers.
Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to feel we understand something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes dangerous. But when learning a technical concept, every detail matters, and the premature feeling of understanding stops us from examining them.
This bias is one that's more likely to go away once you realize it's there. You often don't know what details you're missing after you've missed them, so merely remembering that you tend to miss important details should prompt you to take closer examinations in the future.
Expecting solutions to sound a certain way
The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google's selection process has the two men participate in several technical challenges. One challenge required the men and their team to find a software bug. In a flash of insight, Vince Vaughn's character, Billy, shouts "Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!" After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.
Why is it believable to the audience that Billy can be so confident about his answer?
Billy's intuition made an association between the challenge question and riddle-like questions he'd heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop into your mind, it's a good idea to legitimize them with supporting reasons.
Not recognizing precise language
Intuition thinkers are multi-channel learners -- all senses, thoughts and emotions are used to construct a complex database of clustered knowledge to predict and understand the world. With robust information-extracting ability, correct grammar/word-usage is, more often than not, unnecessary for meaningful communication.
Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.
This bias explains part of why so many intuition thinkers dread math "word problems". Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It's hard for them to find correspondences between words in the problem and variables in the theorems and formulas they've learned.
The noise intuition brings makes it hard to think clearly. It's hard for intuition thinkers to tell whether their automatic associations should be taken seriously. Without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement "Matter can have both wave and particle properties at once" and believe they completely understand it. Unrelated associations of what "matter", "wave", and "particle" mean blindly take precedence over the technical definitions.
The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding correspondence between each word and how it fits into a technical framework will eliminate any uncertainty.
Believing their level of understanding is deeper than what it is
Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool to learn what it means to understand is intuition. The concept "understanding" is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.
When intuition thinkers optimize for understanding, they're really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they've done is memorize some disconnected facts. Not knowing what deeper understanding feels like, they become conditioned to always expect some amount of surprise, so even at their peak sense of understanding they feel less confident than logical thinkers do at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.
One way I overcame this tendency was to constantly ask myself "why" questions, like a curious child bothering their parents. The technique helped me uncover what used to be unknown unknowns that made me feel overconfident in my understanding.
Ignoring information they cannot immediately fit into a framework
Logical thinkers have and use intuition -- the problem is they don't feed it enough. They tend to ignore valuable intuition-building information if it doesn't immediately fit into a predictive model they deeply understand. While intuition thinkers don't filter out enough noise, logical thinkers filter out too much.
For example, if a logical thinker doesn't have a good framework for understanding human behavior, they're more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; no framework to date can make perfectly accurate predictions about it. Intuition can build powerful models despite working with many confounding variables.
Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
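As a toy sketch of that kind of updating (my own example with invented numbers), a conjugate Beta-Binomial model turns noisy yes/no observations straight into a probability estimate, no grand predictive framework required:

```python
def beta_update(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: each observation simply increments
    a pseudo-count, so noisy data accumulates into a belief."""
    return prior_a + successes, prior_b + failures

def posterior_mean(a, b):
    return a / (a + b)

# Start agnostic (uniform Beta(1, 1)) about, say, how often a given social cue
# signals friendliness, then observe 8 friendly responses in 10 encounters.
a, b = beta_update(1, 1, successes=8, failures=2)
print(posterior_mean(a, b))  # 0.75 -- belief moves toward the data, but not all the way
```

The design choice worth noticing: the prior pseudo-counts keep a handful of noisy observations from swinging the estimate to an extreme, which is exactly the discipline a pure framework-filterer skips by throwing the data away.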
Combating this tendency requires you to pay attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learning the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try matching the sensory input to the storytelling elements you've learned about. Once the basics are picked up subconsciously by habit, your conscious attention will be freed up to make new and more subtle observations.
Ignoring their emotions
Emotional input is difficult to factor, especially because you're emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to "go with your gut feelings" is a major function of intuition that logical thinkers tend to miss out on.
Your gut can predict whether you'll get along long-term with a new SO, or what kind of outfit would give you more confidence in your workplace, or whether learning tennis in your free time will make you happier, or whether you'd prefer a cheeseburger over tacos for lunch. Logical thinkers don't have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by objective yet trivial details they do manage to factor out. A weak understanding of their own emotions also leads to a weaker understanding of others' emotions. You can become a better empathizer by better understanding yourself.
You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling, and I'm sure CFAR teaches some good ones too. You can also improve your gut feelings directly. One way is to make sure you're always consciously aware of the circumstances you're in when experiencing an emotion.
Making rules too strict
Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there's motivation to make the rule strict: the stricter the rule, the more predictive power, the better the framework. But when the domain you're trying to understand contains multivariable, chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with elegant models and theories that hold only under assumptions that make them useless in practice.
Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcome the first time they meet him. One day he makes a business trip to Russia to meet with a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn't trustworthy at first and almost called off the meeting. It turns out that in Russia, smiling at strangers is a sign of insincerity. John's strict rule didn't account for cultural differences, preventing him from updating on his client's reaction and putting him in a risky situation.
The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he'd feel less confident about his knowledge of making first impressions, subsequently making him feel bad. He may also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.
When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule's predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other as you gather more evidence.
David Chapman criticizes "pop Bayesianism" as just common-sense rationality dressed up as intimidating math:
Bayesianism boils down to “don’t be so sure of your beliefs; be less sure when you see contradictory evidence.”
Now that is just common sense. Why does anyone need to be told this? And how does [Bayes'] formula help?
The leaders of the movement presumably do understand probability. But I’m wondering whether they simply use Bayes’ formula to intimidate lesser minds into accepting “don’t be so sure of your beliefs.” (In which case, Bayesianism is not about Bayes’ Rule, after all.)
I don’t think I’d approve of that. “Don’t be so sure” is a valuable lesson, but I’d rather teach it in a way people can understand, rather than by invoking a Holy Mystery.
What does Bayes's formula have to teach us about how to do epistemology, beyond obvious things like "never be absolutely certain; update your credences when you see new evidence"?
I list below some of the specific things that I learned from Bayesianism. Some of these are examples of mistakes I'd made that Bayesianism corrected. Others are things that I just hadn't thought about explicitly before encountering Bayesianism, but which now seem important to me.
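As one standard illustration of what the formula adds beyond "don't be so sure" (a textbook example, not drawn from the original post): base rates can dominate even fairly reliable evidence, and the formula makes that quantitative rather than a vague admonition.

```python
def posterior(prior, true_positive_rate, false_positive_rate):
    """Bayes' rule for a binary hypothesis given a positive test result."""
    evidence = prior * true_positive_rate + (1 - prior) * false_positive_rate
    return prior * true_positive_rate / evidence

# A test that catches 90% of true cases and false-alarms 9% of the time,
# applied to a condition with a 1% base rate:
p = posterior(prior=0.01, true_positive_rate=0.90, false_positive_rate=0.09)
print(round(p, 3))  # 0.092 -- a positive result still leaves the hypothesis unlikely
```

"Be less sure when you see contradictory evidence" doesn't tell you that a 90%-accurate test should leave you at under 10% credence; the formula does.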
Still exploitable, even with defaults
A while ago, I posted a brief picture-proof that whatever bargaining system you use to reach deals, it is exploitable, in some situations, by liars (as long as the outcome is Pareto and a few other assumptions hold).
That included any system with an internally assigned default point. The picture proofs work no matter how you calculate the bargaining outcome: if you use the utility value data to assign a default point, before picking the Nash bargaining equilibrium, then the whole process is susceptible to exploitation by lying.
Is the same thing true for externally assigned default points (i.e. default points that come from outside the data, and are not a mere function of everyone's preferences and the available outcomes)? A moment's thought shows that this is the case. The picture proofs never used translations, or scaling, or anything that would shift an external default point. So having an externally assigned default point does not solve the problem of lying.
But "any Pareto bargaining system is exploitable by lying" is an existence proof: in at least one circumstance, one player may be able to derive a non-zero benefit by lying about their utility function. This doesn't give an impression of the scale of the problem.
The scale of the problem
The problem is very severe, for the Nash Bargaining Solution (NBS), the Kalai-Smorodinsky Bargaining Solution (KSBS) and my Mutual Worth Bargaining Solution (MWBS). Essentially, it's as bad as it can get.
For KSBS and NBS, let's call an outcome admissible if it's Pareto-better than the default. For the MWBS, call an outcome admissible if the combined utility values it more than the default point (as we've seen, this needn't be an improvement for both players). In all three approaches, the bargaining solution must be admissible.
Then the dismal result is:
- Let O be any admissible pure outcome. Then either player can lie, if they know everyone's preferences, to force the bargaining solution to pick O.
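To see how little it takes, here is a toy Nash Bargaining Solution over a discrete set of outcomes with invented utilities (default point at the origin). A single inflated report drags the solution from the honest compromise to the liar's favorite admissible outcome:

```python
def nash_bargaining(outcomes, default=(0, 0)):
    """Pick the admissible outcome maximizing the Nash product
    (u1 - d1) * (u2 - d2). `outcomes` maps labels to (u1, u2) pairs."""
    d1, d2 = default
    admissible = {k: v for k, v in outcomes.items() if v[0] >= d1 and v[1] >= d2}
    return max(admissible,
               key=lambda k: (admissible[k][0] - d1) * (admissible[k][1] - d2))

truthful = {"A": (3, 1), "B": (2, 2), "C": (1, 3)}
print(nash_bargaining(truthful))  # B -- the compromise maximizes the Nash product

# Player 1 misreports, inflating their utility for their favorite outcome:
lied = {"A": (10, 1), "B": (2, 2), "C": (1, 3)}
print(nash_bargaining(lied))  # A -- the reported Nash product now favors the liar
```

Since the bargaining solution only ever sees reported utilities, nothing in the mechanism itself distinguishes the second case from the first.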
A suggestion as to how to split the gains from trade in some situations.
The problem of Power
A year or so ago, people in the FHI embarked on a grand project: to try to find out whether there was a single way of resolving negotiations, or a single way of merging competing moral theories. This project made a lot of progress in finding out how hard the problem was, but very little in solving it. It seemed evident that the correct solution was to weight the different utility functions and then have everyone maximize the weighted sum, but all ways of weighting had their problems (the weighting with the best properties was a very silly one: the "min-max" weighting that sets your maximal attainable utility to 1 and your minimal to 0).
One thing that we didn't get close to addressing is the concept of power. If two partners in the negotiation have very different levels of power, then abstractly comparing their utilities seems the wrong solution (more to the point: it wouldn't be accepted by the powerful party).
The New Republic spans the Galaxy, with Jedi knights, battle fleets, armies, general coolness, and the manufacturing and human resources of countless systems at its command. The dull slug, ARthUrpHilIpDenu, moves very slowly around a plant, and possibly owns one leaf (or not - he can't produce the paperwork). Both these entities have preferences, but if they meet up, and their utilities are normalised abstractly, then ARthUrpHilIpDenu's preferences will weigh far too much: a sizeable fraction of the galaxy's production will go toward satisfying the slug. Even if you think this is "fair", consider that the New Republic is the merging of countless individual preferences, so it doesn't make sense that the two utilities get weighted equally.
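The slug example can be run through the min-max weighting mentioned above: each player's utility is rescaled so their best attainable outcome is worth 1 and their worst 0, which erases any difference in what is actually at stake (the utilities below are invented for illustration):

```python
def min_max_normalize(utilities):
    """Rescale a utility function so its best outcome scores 1 and its worst 0."""
    lo, hi = min(utilities.values()), max(utilities.values())
    return {k: (v - lo) / (hi - lo) for k, v in utilities.items()}

# Two outcomes: divert a battle fleet to shade the slug's leaf, or don't.
republic = {"divert_fleet": 0.0, "keep_fleet": 1_000_000.0}  # a fleet matters enormously
slug = {"divert_fleet": 0.01, "keep_fleet": 0.0}             # the slug mildly prefers shade

# After normalization, the slug's whim counts exactly as much as the fleet:
total = {k: min_max_normalize(republic)[k] + min_max_normalize(slug)[k]
         for k in republic}
print(total)  # both outcomes tie at 1.0
```

Under the normalized sum, the two outcomes are exactly tied: the slug's mild preference for shade offsets the Republic's enormous stake in keeping its fleet, which is the power problem in miniature.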
Want to be happier than you already are? Many people look to self-help books as a way to become happy. Sometimes they give good advice and sometimes they don't. However, one of the most robust, enduring findings from psychological studies of increasing people's happiness is that happiness can be gained from journaling, especially when you keep a regular journal of what you're grateful for.
Gratitude is defined as the reliable emotional response one has to receiving benefits [1]. Gratitude is also known to correlate with subjective levels of happiness [1,2,3,4,5], as well as with pro-social behavior, self-efficacy, and self-worth [6,7]. Moreover, this connection with happiness is found in both student and non-student populations, and persists even when controlling for extraversion, neuroticism, and agreeableness [8,9]. Gratitude also fights stress, materialism, and negative self-comparisons [7].
But what if you're not already grateful? Well, there is a solution. Regular practice of gratitude has theological origins -- Judaism, Christianity, and Islam all consider it a virtue and prescribe approaches for practicing it [10].
And it appears that religion is right on this one -- gratitude can be trained, and one way to do so is the gratitude journal. And by training in gratitude, one can become lastingly happier.
Writing as a Cure
Studies have found that while talking about one's problems doesn't help one feel better about them, even if the talk seemed to help at the time [11], writing about the problem does help. In one study, participants who had recently been laid off from work were asked to spend a few minutes each day writing a diary about their feelings regarding the layoff. Doing so produced boosts in happiness, self-esteem, health, and psychological and physical well-being [12]. Other similar studies found similar results [13].
But one doesn't need trauma in order to get these beneficial results. Another study had people write for 20 minutes a day for four days about one of four topics assigned at random -- a traumatic life event, their best possible future self, both, or a nonemotional control. A follow-up five months later found that writing about either trauma or a positive future led to reduced illness and increased subjective well-being compared to controls, though writing about trauma induced a short-term negative mood [14]. Another follow-up study found that reduced illness and increased subjective well-being resulted even from writing about intensely positive events [15].
Affectionate writing is another type of regular journaling, where you write about affection for friends, family, or romantic partners. This too has been found to have beneficial effects, such as lower cholesterol [16]. Another study involved writing a letter of affection to someone and personally delivering it to them, which was found to decrease depressive symptoms for a few months [17], but had no longer-term effects.
The Gratitude Journal
But suppose you're not recovering from a recent serious problem, and instead just want to boost your happiness in everyday life. What should you do? You can get the same benefit of journaling by focusing on gratitude.
In another study, three groups of college students were asked to keep short, daily diaries -- one group would write about what they were grateful for in that day, the second group would write about what annoyed them, and the third group was asked just to keep track of events from a neutral perspective.
Those who kept careful track of what they were grateful for were happier, more optimistic, and healthier than the other two groups at the end of the study [18], after two weeks of journaling and a three-week follow-up period. This study was then replicated in another college population [19], and a third time [17]. Researchers also tested the theory beyond college students -- in middle-school classrooms [20], among adults with neuromuscular disease [18], and among Korean healthcare professionals [21]. Each time, they found that gratitude journaling produced reliable increases in happiness.
So what should we do if we want to start a gratitude journal? Well, get a journal and start writing! I've been keeping mine on my blog, but you can keep yours wherever you like. However, here are some tips to make the implementation better:
It won't work for everyone. These effects only appear in the aggregate. So far, little research has been done on moderating effects of gratitude journaling, but it is known to work better for women than for men, though it still works for men just fine [4,5,7]. It's possible that journaling won't work for certain people. Beware of other-optimizing.
It won't work if it annoys you. If you find the journaling tedious or annoying, you'll lose the happiness boost [19], so it's important to find some way to keep it fresh. In one experiment, college students were assigned to keep a gratitude journal either daily or once a week. While both groups showed a boost, the once-a-week group actually got a higher boost in happiness [19], presumably because they didn't get bored with the journal.
Thinking about the subtraction of positive events produces an even bigger boost. While one gains a boost in happiness from reflecting on being grateful for, say, wildflowers, one can get an even higher boost if instructed to also imagine a world where wildflowers don't exist [7].
Think about what caused these good events. Thinking not just about what you're grateful for, but also about why things turned out the way they did, produces better effects [17].
It's not all that often that science hands us a definitive self-help practice that has been this well vetted. Maybe it works for you; maybe it doesn't. Maybe it's worth your time; maybe you are happy enough that you can forgo the effort. But it's hopefully at least worth thinking about.
After all, I'm grateful that positive psychology exists.
(This was also cross-posted on my blog.)
(Note: Links are to PDF files.)
1: McCullough, Michael E., Jo-Ann Tsang, and Robert. A. Emmons. 2004. "Gratitude in Intermediate Affective Terrain: Links of Grateful Moods to Individual Differences and Daily Emotional Experience". Journal of Personality and Social Psychology 86: 295–309.
2: Wood, Alex M., Jeffrey J. Froh, and Adam W. A. Geraghty. 2010. "Gratitude and Well-Being: A Review and Theoretical Integration". Clinical Psychology Review 30 (7): 890-905.
3: Park, Nansook, Christopher Peterson, and Martin E. P. Seligman. 2004. "Strengths of Character and Well-Being". Journal of Social and Clinical Psychology 23 (5): 603-619.
4: Watkins, Phillip C., Katherine Woodward, Tamara Stone, and Russel K. Kolts. 2003. "Gratitude and Happiness: Development of a Measure of Gratitude, and Relationships with Subjective Well-Being". Social Behavior and Personality 31 (5): 431-452.
5: Kashdan, Todd B., Gitendra Uswatteb, and Terri Julian. 2006. "Gratitude and Hedonic and Eudaimonic Well-Being in Vietnam War Veterans". Behaviour Research and Therapy 44: 177–199.
6: Grant, Adam M. and Francesca Gino. 2010. "A Little Thanks Goes a Long Way: Explaining Why Gratitude Expressions Motivate Prosocial Behavior". Journal of Personality and Social Psychology 98 (6): 946–955.
7: Emmons, Robert A. and Anjali Mishra. 2011. "Why Gratitude Enhances Well-Being: What We Know, What We Need to Know" in Kennon M. Sheldon, Todd B. Kashdan, Michael F. Stenger (Eds.). Designing Positive Psychology: Taking Stock and Moving Forward, 248-262. Oxford University Press: Oxford.
8: McCullough, Michael E., Jo-Ann Tsang, and Robert. A. Emmons. 2002. "The Grateful Disposition: A Conceptual and Empirical Topography". Journal of Personality and Social Psychology 82 (1): 112–127.
9: Wood, Alex M., Stephen Joseph, and John Maltby. 2009. "Gratitude Predicts Psychological Well-Being Above the Big Five Facets". Personality and Individual Differences 46 (4): 443–447.
10: Emmons, Robert A. and Cheryl A. Crumpler. 2000. "Gratitude as a Human Strength: Appraising the Evidence". Journal of Social and Clinical Psychology 19 (1): 56-69.
11: Lyubomirsky, Sonja and Chris Tkach. 2003. "The Consequences of Dysphoric Rumination" in Costas Papageorgiou and Adrian Wells (Eds.). Depressive Rumination: Nature, Theory and Treatment, 21-41. Chichester, England: John Wiley & Sons.
12: Spera, Stephanie P., Eric D. Buhrfeind, and James W. Pennebaker. 1994. "Expressive Writing and Coping with Job Loss". Academy of Management Journal 37 (3): 722–733.
13: Lepore, Stephen J. and Joshua Morrison Smyth (Eds.) 2002. The Writing Cure: How Expressive Writing Promotes Health and Emotional Well-Being. Washington, DC: American Psychological Association.
14: King, Laura A. 2001. "The Health Benefits of Writing About Life Goals". Personality and Social Psychology Bulletin 27: 798–807.
15: Burton, Chad M and Laura A. King. 2004. "The Health Benefits of Writing about Intensely Positive Experiences". Journal of Research in Personality 38: 150–163.
16: Floyd, Kory, Alan C. Mikkelson, Colin Hesse, and Perry M. Pauley. 2007. "Affectionate Writing Reduces Total Cholesterol: Two Randomized, Controlled Trials". Human Communication Research 33: 119–142.
17: Seligman, Martin E. P., Tracy A. Steen, Nansook Park, and Christopher Peterson. 2005. "Positive Psychology Progress: Empirical Validation of Interventions". American Psychologist 60: 410-421.
18: Emmons, Robert A. and Michael E. McCullough. 2003. "Counting Blessings versus Burdens: An Experimental Investigation of Gratitude and Subjective Well-Being in Daily Life". Journal of Personality and Social Psychology 84: 377–389.
19: Lyubomirsky, Sonja, Kennon M. Sheldon, and David Schkade. 2005. "Pursuing Happiness: The Architecture of Sustainable Change". Review of General Psychology 9 (2): 111-131.
20: Froh, Jeffrey J., William J. Sefick, and Robert A. Emmons. 2008. "Counting Blessings in Early Adolescents: An Experimental Study of Gratitude and Subjective Well-Being". Journal of School Psychology 46 (2): 213-233.
21: Ki, Tsui Pui. 2009. "Gratitude and Stress of Health-Care Professionals in Hong Kong". Unpublished thesis.
Recently, issues with the way open threads currently work were brought up. Open threads aren't very visible and get crowded with comments quickly. This causes people to post things that belong in open threads in r/discussion, to not post in open threads more than a few days old, or to ignore/be unaware of new comments in open threads. I think we can do better.
Some possible solutions that were pointed out, or that I thought of are:
- Put the most recent open thread at the top of the 'Recent Comments' sidebar.
- Have open threads more often.
- Put a link to the current open thread on the main page.
- Make a new subreddit for open threads.
- Create a new medium for open threads.
Note that not all of these are orthogonal.
Having them more often has the advantage of being especially easy to implement. Adding new links seems to be relatively easy to implement as well. As far as I know, making a new subreddit isn't too difficult, but making a new medium would probably be a waste of development resources.
Personally, I like the idea of having a new subreddit for open threads. It would increase visibility, not get overcrowded, and have the right atmosphere for a casual open thread. My evidence for believing this comes from being familiar with the way Reddit works. It seems like there is some resistance to creating new subreddits here, so I don't expect this to be implemented. I would like to see the reasoning behind this attitude, if it indeed exists.
There are similar issues for the repository threads. For repositories, having them more often defeats the purpose of having one place for a certain type of idea, and a different subreddit doesn't seem right either. Giving them their own wiki pages might be a better medium, with new threads to encourage new ideas every once in a while. The main problem with this approach is the trivial inconvenience of going to the wiki and logging in. It would be nice if there were a unified log-in for this part of the site and the wiki, but I realize this may be technically difficult. I might organize a wiki page for some of the repositories myself if people think this is a good idea but no one else feels like doing it (depends on if I feel like doing it too :p ).
While we dither on the planet, are we losing resources in space? Nick Bostrom has an article on astronomical waste, talking about the vast amounts of potentially useful energy that we're simply not using for anything:
As I write these words, suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives.
The rate of this loss boggles the mind. One recent paper speculates, using loose theoretical considerations based on the rate of increase of entropy, that the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization.
On top of that, galaxies are slipping away from us because of the exponentially accelerating expansion of the universe (x axis in years since Big Bang, cosmic scale function arbitrarily set to 1 at the current day):
At the rate things are going, we seem to be losing slightly more than one galaxy a year. One entire galaxy, with its hundreds of billions of stars, is slipping away from us each year, never to be interacted with again. That amounts to many solar systems a second; poof! Before you've even had time to grasp that concept, we've lost millions of times more resources than humanity has ever used.
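The figures above are easy to sanity-check. A minimal back-of-the-envelope sketch, using only the numbers quoted in the text (one galaxy lost per year, "hundreds of billions" of stars per galaxy taken as roughly 2 × 10^11, and ~10^46 potential lives per century of delay):

```python
# Rough sanity check of the loss rates quoted above.
# All inputs are the figures from the text, not precise astronomy.

SECONDS_PER_YEAR = 365.25 * 24 * 3600     # ~3.16e7 seconds
STARS_PER_GALAXY = 2e11                   # "hundreds of billions" of stars
GALAXIES_LOST_PER_YEAR = 1                # "slightly more than one" per year

# Star systems passing beyond reach every second
stars_per_second = GALAXIES_LOST_PER_YEAR * STARS_PER_GALAXY / SECONDS_PER_YEAR
print(f"Star systems lost per second: ~{stars_per_second:,.0f}")

# Potential lives lost per second of delayed colonization,
# from Bostrom's lower bound of ~1e46 per century
LIVES_PER_CENTURY = 1e46
lives_per_second = LIVES_PER_CENTURY / (100 * SECONDS_PER_YEAR)
print(f"Potential lives lost per second of delay: ~{lives_per_second:.1e}")
```

On these assumptions, roughly six thousand star systems slip away every second, and the potential-lives figure works out to on the order of 10^36 per second of delay, which is what makes the "be careful rather than fast" conclusion below so striking.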
So it would seem that the answer to this desperate state of affairs is to rush things: start expanding as soon as possible, greedily grabbing every bit of energy and negentropy before they vanish forever.
Not so fast! Nick Bostrom's point was not that we should rush things, but that we should be very very careful:
However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.
Picture a circular road on a map. Let's say that my office is at twelve o'clock, my home is at five o'clock, and the post office is at three o'clock.
Now, suppose I have to leave work, pick up a document at home, and take it to the post office to mail it. I know it's faster to walk clockwise home, passing the post office, and then return to it with the letter. But my gut preference is to go counterclockwise, either because of an aversion to retracing my steps, or because that route just ... feels "cleaner" or more efficient somehow, or ... I can't articulate it any better than that.
Does anyone else share this intuition? Is it a manifestation of one or more known/studied effects?