

Linkposts now live!

27 Vaniver 28 September 2016 03:13PM

 

You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.

Some general norms, subject to change:

 

  1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
  2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
  3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
  4. It's not okay to post duplicates.

As before, everything will go into discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how we need to divide things up, be it separate subreddits or using tags to promote or demote the attention level of links and posts.

(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.)

Epistemic Effort

26 Raemon 29 November 2016 04:08PM

Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.

I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.

I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:

  • Thought about it musingly
  • Made a 5 minute timer and thought seriously about possible flaws or refinements
  • Had a conversation with other people you epistemically respect and who helped refine it
  • Thought about how to do an empirical test
  • Thought about how to build a model that would let you make predictions about the thing
  • Did some kind of empirical test
  • Did a review of relevant literature
  • Ran a Randomized Controlled Trial
[Edit: the intention with these examples is for the list to start with things that are fairly easy to do, to get people in the habit of thinking about how to think better, but to have it quickly escalate to "empirical tests, hard-to-fake evidence, and exposure to falsifiability".]

A few reasons I think this is worth doing (most of these reasons are "things that seem likely to me" but which I haven't made any formal effort to test - they come from some background in game design and from reading some books on habit formation, most of which weren't very well cited):
  • People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
  • People are more likely to put effort into being rational if they see other people doing it
  • People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
  • It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status"), but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say "dopamine reward," but then realized I honestly don't know if that's the mechanism; call it a "small internal brain happy feeling").
  • Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
  • People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well-established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
  • In the process of writing this very post, I actually went from planning a quick, two-paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time, so I did a couple of the simpler techniques. But even that, I think, provided a lot of value.
Results of thinking about it for 5 minutes.

  • It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
  • One failure mode is that people end up putting minimal, token effort into things (i.e. randomly trying something on a couple of double-blinded people and calling it a Randomized Controlled Trial).
  • Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path to people who weren't thinking about this at all outweighs the cost of some people ending up less agenty about their epistemology, but it seems like something to be aware of.
  • I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run a followup informal survey asking "Did you do this? Did it work out for you? Do you have feedback?"
  • A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.

Next actions, if you found this post persuasive:


Next time you're writing any kind of post intended to communicate an idea (whether on Less Wrong, Tumblr or Facebook), try adding "Epistemic Effort: " to the beginning of it. If it was intended to be a quick, lightweight post, just write it in its quick, lightweight form.

After the quick, lightweight post is complete, think about whether it'd be worth doing something as simple as "set a 5 minute timer and think about how to refine/refute the idea". If not, just write "thought about it musingly" after "Epistemic Effort". If so, start thinking about it more seriously and see where it leads.

While thinking about it for 5 minutes, some questions worth asking yourself:
  • If this were wrong, how would I know?
  • What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
  • Where might I check to see if this idea has already been tried/discussed?
  • What pieces of the idea might I peel away or refine to make the idea stronger? Are there individual premises I might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea?

A Return to Discussion

25 sarahconstantin 27 November 2016 01:59PM

Epistemic Status: Casual

It’s taken me a long time to fully acknowledge this, but people who “come from the internet” are no longer a minority subculture.  Senators tweet and suburban moms post Minion memes. Which means that talking about trends in how people socialize on the internet is not a frivolous subject; it’s relevant to how people interact, period.

There seems to have been an overall drift towards social networks over blogs and forums in general, and in particular things like:

  • the drift of commentary from personal blogs to “media” aggregators like The Atlantic, Vox, and Breitbart
  • the migration of fandom from LiveJournal to Tumblr
  • Facebook and Twitter as the places where links and discussions go

At the moment I’m not empirically tracking any trends like this, and I’m not confident in what exactly the major trends are — maybe in future I’ll start looking into this more seriously. Right now, I have a sense of things from impression and hearsay.

But one thing I have noticed personally is that people have gotten intimidated by more formal and public kinds of online conversation.  I know quite a few people who used to keep a “real blog” and have become afraid to touch it, preferring instead to chat on social media.  It’s a weird kind of perfectionism — nobody ever imagined that blogs were meant to be masterpieces.  But I do see people fleeing towards more ephemeral, more stream-of-consciousness types of communication, or communication that involves no words at all (reblogging, image-sharing, etc.)  There seems to be a fear of becoming too visible as a distinctive writing voice.

For one rather public and hilarious example, witness Scott Alexander’s  flight from LessWrong to LiveJournal to a personal blog to Twitter and Tumblr, in hopes that somewhere he can find a place isolated enough that nobody will notice his insight and humor. (It hasn’t been working.)

What might be going on here?

Of course, there are pragmatic concerns about reputation and preserving anonymity. People don’t want their writing to be found by judgmental bosses or family members.  But that’s always been true — and, at any rate, social networking sites are often less anonymous than forums and blogs.

It might be that people have become more afraid of trolls, or that trolling has gotten worse. Fear of being targeted by harassment or threats might make people less open and expressive.  I’ve certainly heard many writers say that they’ve shut down a lot of their internet presence out of exhaustion or literal fear.  And I’ve heard serious enough horror stories that I respect and sympathize with people who are on their guard.

But I don’t think that really explains why one would drift towards more ephemeral media. Why short-form instead of long-form?  Why streaming feeds instead of searchable archives?  Trolls are not known for their patience and rigor.  Single tweets can attract storms of trolls.  So troll-avoidance is not enough of an explanation, I think.

It’s almost as though the issue were accountability.  

A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind.  The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there.  If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.

You can preempt embarrassment by declaring that you’re doing something shitty anyhow. That puts you in a position of safety. I think that a lot of online mannerisms, like using all-lowercase punctuation, or using really self-deprecating language, or deeply nested meta-levels of meme irony, are ways of saying “I’m cool because I’m not putting myself out there where I can be judged.  Only pompous idiots are so naive as to think their opinions are actually valuable.”

Here’s another angle on the same issue.  If you earnestly, explicitly say what you think, in essay form, and if your writing attracts attention at all, you’ll attract swarms of earnest, bright-but-not-brilliant, mostly young white male commenters, who want to share their opinions, because (perhaps naively) they think their contributions will be welcomed. It’s basically just “oh, are we playing a game? I wanna play too!”  If you don’t want to play with them — maybe because you’re talking about a personal or highly technical topic and don’t value their input, maybe because your intention was just to talk to your friends and not the general public, whatever — you’ll find this style of interaction aversive.  You’ll read it as sealioning. Or mansplaining.  Or “well, actually”-ing.

I think what’s going on with these kinds of terms is something like:

Author: “Hi! I just said a thing!”

Commenter: “Ooh cool, we’re playing the Discussion game! Can I join?  Here’s my comment!”  (Or, sometimes, “Ooh cool, we’re playing the Verbal Battle game!  I wanna play! Here’s my retort!”)

Author: “Ew, no, I don’t want to play with you.”

There’s a bit of a race/gender/age/educational slant to the people playing the “commenter” role, probably because our society rewards some people more than others for playing the discussion game.  Privileged people are more likely to assume that they’re automatically welcome wherever they show up, which is why others tend to get annoyed at them.

Privileged people, in other words, are more likely to think they live in a high-trust society, where they can show up to strangers and be greeted as a potential new friend, where open discussion is an important priority, where they can trust and be trusted, since everybody is playing the “let’s discuss interesting things!” game.

The unfortunate reality is that most of the world doesn’t look like that high-trust society.

On the other hand, I think the ideal of open discussion, and to some extent the past reality of internet discussion, is a lot more like a high-trust society where everyone is playing the “discuss interesting things” game, than it is like the present reality of social media.

A lot of the value generated on the 90’s and early 2000’s internet was built on people who were interested in things, sharing information about those things with like-minded individuals.  Think of the websites that were just catalogues of information about someone’s obsessions. (I remember my family happily gathering round the PC when I was a kid, to listen to all the national anthems of the world, which some helpful net denizen had collated all in one place.)  There is an enormous shared commons that is produced when people are playing the “share info about interesting stuff” game.  Wikipedia. StackExchange. It couldn’t have been motivated by pure public-spiritedness — otherwise people wouldn’t have produced so much free work.  There are lower motivations: the desire to show off how clever you are, the desire to be a know-it-all, the desire to correct other people.  And there are higher motivations — obsession, fascination, the delight of infodumping. This isn’t some higher plane of civic virtue; it’s just ordinary nerd behavior.

But in ordinary nerd behavior, there are some unusual strengths.  Nerds are playing the “let’s have discussions!” game, which means that they’re unembarrassed about sharing their take on things, and unembarrassed about holding other people accountable for mistakes, and unembarrassed about being held accountable for mistakes.  It’s a sort of happy place between perfectionism and laxity.  Nobody is supposed to get everything right on the first try; but you’re supposed to respond intelligently to criticism. Things will get poked at, inevitably.  Poking is friendly behavior. (Which doesn’t mean it’s not also aggressive behavior.  Play and aggression are always intermixed.  But it doesn’t have to be understood as scary, hostile, enemy.)

Nerd-format discussions are definitely not costless. You get discussions of advanced/technical topics being mobbed by clueless opinionated newbies, or discussions of deeply personal issues being hassled by clueless opinionated randos.  You get endless debate over irrelevant minutiae. There are reasons why so many people flee this kind of environment.

But I would say that these disadvantages are necessary evils that, while they might be possible to mitigate somewhat, go along with having a genuinely public discourse and public accountability.

We talk a lot about social media killing privacy, but there’s also a way in which it kills publicness, by allowing people to curate their spaces by personal friend groups, and retreat from open discussions.   In a public square, any rando can ask an aristocrat to explain himself.  If people hide from public squares, they can’t be exposed to Socrates’ questions.

I suspect that, especially for people who are even minor VIPs (my level of online fame, while modest, is enough to create some of this effect), it’s tempting to become less available to the “public”, less willing to engage with strangers, even those who seem friendly and interesting.  I think it’s worth fighting this temptation.  You don’t get the gains of open discussion if you close yourself off.  You may not capture all the gains yourself, but that’s how the tragedy of the commons works; a bunch of people have to cooperate and trust if they’re going to build good stuff together.  And what that means, concretely, on the margin, is taking more time to explain yourself and engage intellectually with people who, from your perspective, look dumb, clueless, crankish, or uncool.

Some of the people I admire most, including theoretical computer scientist Scott Aaronson, are notable for taking the time to carefully debunk crackpots (and offer them the benefit of the doubt in case they are in fact correct.)  Is this activity a great ROI for a brilliant scientist, from a narrowly selfish perspective?  No. But it’s praiseworthy, because it contributes to a truly open discussion. If scientists take the time to investigate weird claims from randos, they’re doing the work of proving that science is a universal and systematic way of thinking, not just an elite club of insiders.  In the long run, it’s very important that somebody be doing that groundwork.

Talking about interesting things, with friendly strangers, in a spirit of welcoming open discussion and accountability rather than fleeing from it, seems really underappreciated today, and I think it’s time to make an explicit push towards building places online that have that quality.

In that spirit, I’d like to recommend LessWrong to my readers. For those not familiar with it, it’s a discussion forum devoted to things like cognitive science, AI, and related topics, and, back in its heyday a few years ago, it was suffused with the nerdy-discussion-nature. It had all the enthusiasm of late-night dorm-room philosophy discussions — except that some of the people you’d be having the discussions with were among the most creative people of our generation.  These days, posting and commenting is a lot sparser, and the energy is gone, but I and some other old-timers are trying to rekindle it. I’m crossposting all my blog posts there from now on, and I encourage everyone to check out and join the discussions there.

(Cross-posted from my blog, https://srconstantin.wordpress.com/)

Making intentions concrete - Trigger-Action Planning

18 Kaj_Sotala 01 December 2016 08:34PM

I'll do it at some point.

I'll answer this message later.

I could try this sometime.

For most people, all of these thoughts have the same result. The thing in question likely never gets done - or if it does, it's only after remaining undone for a long time and causing a considerable amount of stress. Leaving the "when" ambiguous means that there isn't anything that would propel you into action.

What kinds of thoughts would help avoid this problem? Here are some examples:

  • When I find myself using the words "later" or "at some point", I'll decide on a specific time when I'll actually do it.
  • If I'm given a task that would take under five minutes, and I'm not in a pressing rush, I'll do it right away.
  • When I notice that I'm getting stressed out about something that I've left undone, I'll either do it right away or decide when I'll do it.
Picking a specific time or situation to serve as the trigger of the action makes it much more likely that it actually gets done.

Could we apply this more generally? Let's consider these examples:
  • I'm going to get more exercise.
  • I'll spend less money on shoes.
  • I want to be nicer to people.
These goals all have the same problem: they're vague. How will you actually implement them? As long as you don't know, you're also going to miss potential opportunities to act on them.

Let's try again:
  • When I see stairs, I'll climb them instead of taking the elevator.
  • When I buy shoes, I'll write down how much money I've spent on shoes this year.
  • When someone does something that I like, I'll thank them for it.
These are much better. They contain both a concrete action to be taken, and a clear trigger for when to take it.

Turning vague goals into trigger-action plans

Trigger-action plans (TAPs; known as "implementation intentions" in the academic literature) are "when-then" ("if-then", for you programmers) rules used for behavior modification [i]. A meta-analysis covering 94 studies and 8461 subjects [ii] found them to improve people's ability to achieve their goals [iii]. The goals in question included ones such as reducing the amount of fat in one's diet, getting exercise, using vitamin supplements, carrying on with a boring task, determination to work on challenging problems, and calling out racist comments. Many studies also allowed the subjects to set their own, personal goals.

TAPs were found to work both in laboratory and real-life settings. The authors of the meta-analysis estimated the risk of publication bias to be small, as half of the studies included were unpublished ones.
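
Since the post explicitly frames TAPs as "if-then" rules for programmers, here is a toy sketch of that framing (my own illustration, not from the article or its sources): each TAP pairs a concrete, checkable trigger with a specific action.

```python
# Toy illustration: a TAP as a literal if-then rule. The triggers and
# actions are examples from this post; the code is just an analogy for
# how a TAP is supposed to fire when its trigger is noticed.

tap_rules = [
    (lambda situation: "stairs" in situation,
     "climb them instead of taking the elevator"),
    (lambda situation: "buying shoes" in situation,
     "write down how much I've spent on shoes this year"),
]

def check_taps(situation: str) -> None:
    """Fire every TAP whose trigger matches the current situation."""
    for trigger, action in tap_rules:
        if trigger(situation):
            print("TAP fired:", action)

check_taps("I see stairs ahead")
# -> TAP fired: climb them instead of taking the elevator
```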

Designing TAPs

TAPs work because they help us notice situations where we could carry out our intentions. They also help automate the intentions: when a person is in a situation that matches the trigger, they are much more likely to carry out the action. Finally, they force us to turn vague and ambiguous goals into more specific ones.

A good TAP fulfills three requirements [iv]:
  • The trigger is clear. The "when" part is a specific, visible thing that's easy to notice. "When I see stairs" is good, "before four o'clock" is bad (when before four exactly?). [v]
  • The trigger is consistent. The action is something that you'll always want to do when the trigger is fulfilled. "When I leave the kitchen, I'll do five push-ups" is bad, because you might not have the chance to do five push-ups each time you leave the kitchen. [vi]
  • The TAP furthers your goals. Make sure the TAP is actually useful!
However, there is one group of people who may need to be cautious about using TAPs. One paper [vii] found that people who ranked highly on so-called socially prescribed perfectionism did worse on their goals when they used TAPs. These kinds of people are sensitive to other people's opinions about them, and are often highly critical of themselves. Because TAPs create an association between a situation and a desired way of behaving, it may make socially prescribed perfectionists anxious and self-critical. In two studies, TAPs made college students who were socially prescribed perfectionists (and only them) worse at achieving their goals.

For everyone else however, I recommend adopting this TAP:

When I set myself a goal, I'll turn it into a TAP.

Origin note

This article was originally published in Finnish at kehitysto.fi. It draws heavily on CFAR's material, particularly the workbook from CFAR's November 2014 workshop.

Footnotes

[i] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American psychologist, 54(7), 493.

[ii] Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta‐analysis of effects and processes. Advances in experimental social psychology, 38, 69-119.

[iii] Effect size d = .65, 95% confidence interval [.6, .7].

[iv] Gollwitzer, P. M., Wieber, F., Myers, A. L., & McCrea, S. M. (2010). How to maximize implementation intention effects. Then a miracle occurs: Focusing on behavior in social psychological theory and research, 137-161.

[v] Wieber, Odenthal & Gollwitzer (2009; unpublished study, discussed in [iv]) tested the effect of general and specific TAPs on subjects driving a simulated car. All subjects were given the goal of finishing the course as quickly as possible, while also damaging their car as little as possible. Subjects in the "general" group were additionally given the TAP, "If I enter a dangerous situation, then I will immediately adapt my speed". Subjects in the "specific" group were given the TAP, "If I see a black and white curve road sign, then I will immediately adapt my speed". Subjects with the specific TAP managed to damage their cars less than the subjects with the general TAP, without being any slower for it.

[vi] Wieber, Gollwitzer, et al. (2009; unpublished study, discussed in [iv]) tested whether TAPs could be made even more effective by turning them into an "if-then-because" form: "when I see stairs, I'll use them instead of taking the elevator, because I want to become more fit". The results showed that the "because" reasons increased the subjects' motivation to achieve their goals, but nevertheless made TAPs less effective.

The researchers speculated that the "because" might have changed the mindset of the subjects. While an "if-then" rule causes people to automatically do something, "if-then-because" leads people to reflect upon their motives and takes them from an implementative mindset to a deliberative one. Follow-up studies testing the effect of implementative vs. deliberative mindsets on TAPs seemed to support this interpretation. This suggests that TAPs are likely to work better if they can be carried out as consistently and with as little thought as possible.

[vii] Powers, T. A., Koestner, R., & Topciu, R. A. (2005). Implementation intentions, perfectionism, and goal progress: Perhaps the road to hell is paved with good intentions. Personality and Social Psychology Bulletin, 31(7), 902-912.

Astrobiology III: Why Earth?

18 CellBioGuy 04 October 2016 09:59PM

After many tribulations, my astrobiology bloggery is back up and running using Wordpress rather than Blogger because Blogger is completely unusable these days.  I've taken the opportunity of the move to make better graphs for my old posts. 

"The Solar System: Why Earth?"

https://thegreatatuin.wordpress.com/2016/10/03/the-solar-system-why-earth/

Here, I try to look at our own solar system and what the presence of only ONE known biosphere, here on Earth, tells us about life and perhaps more importantly what it does not.  In particular, I explore what aspects of Earth make it special and I make the distinction between a big biosphere here on Earth that has utterly rebuilt the geochemistry and a smaller biosphere living off smaller amounts of energy that we probably would never notice elsewhere in our own solar system given the evidence at hand. 

Commentary appreciated.

 

 

Previous works:

Space and Time, Part I

https://thegreatatuin.wordpress.com/2016/09/25/space-and-time-part-i

Space and Time, Part II

https://thegreatatuin.wordpress.com/2016/09/25/space-and-time-part-ii

[Link] On Trying Not To Be Wrong

17 sarahconstantin 11 November 2016 07:25PM

The 12 Second Rule (i.e. think before answering) and other Epistemic Norms

17 Raemon 05 September 2016 11:08PM

Epistemic Status/Effort: I'm 85% confident this is a good idea, and that the broader idea is at least a good direction. Have gotten feedback from a few people and spent some time actively thinking through ramifications of it. Interested in more feedback.

TLDR:

1) When asking a group a question, i.e. "what do you think about X?", ask people to wait 12 seconds, to give each other time to think. If you notice someone else ask a question and people immediately answering it, suggest pausing the conversation until everyone has had some time to think. (Probably specifically mention the "12 second rule", to give people a handy tag to remember.)

2) In general, look for opportunities to improve or share social norms that'll help your community think more clearly, and show appreciation when others do so (i.e. "Epistemic Norms")

(this was originally conceived for the self-described "rationality" community, but I think it's a good idea for any group that'd like to improve their critical thinking as well as their creativity.)

There are three reasons the 12-second rule seems important to me:

  • On an individual level, it makes it easier to think of the best answer, rather than going with your cached thought.
  • On the group level, it makes it easier to prevent anchoring/conformity/priming effects.
  • Also on the group level, it means that people who take longer to think of answers get to practice actually thinking for themselves
If you're using it with people who aren't familiar with it, make sure to briefly summarize what you're doing and why.

Elaboration:

While visiting rationalist friends in SF, I was participating in a small conversation (about six participants) in which someone asked a question. Immediately, one person said "I think Y. Or maybe Z." A couple other people said "Yeah. Y or Z, or... maybe W or V?" But the conversation was already anchored around the initial answers.

I said "hey, shouldn't we stop to each think first?" (this happens to be a thing my friends in NYC do). And I was somewhat surprised that the response was more like "oh, I guess that's a good idea" than "oh yeah whoops I forgot."

It seemed like a fairly obvious social norm for a community that prides itself on rationality, and while the question wasn't *super* important, I think it's helpful to practice this sort of social norm on a day-to-day basis.

This prompted some broader questions - it occurred to me there were likely norms and ideas other people had developed in their local networks that I probably wasn't aware of. Given that there's no central authority on "good epistemic norms", how do we develop them and get them to spread? There's a couple people with popular blogs who sometimes propose new norms which maybe catch on, and some people still sharing good ideas on Less Wrong, effective-altruism.com, or facebook. But it doesn't seem like those ideas necessarily reach saturation.

Atrophied Skills

The first three years I spent in the rationality community, my perception is that my strategic thinking and ability to think through complex problems actually *deteriorated*. It's possible that I was just surrounded by smarter people than me for the first time, but I'm fairly confident that I specifically acquired the habit of "when I need help thinking through a problem, the first step is not to think about it myself, but to ask smart people around me for help."

Eventually I was hired by a startup, and I found myself in a position where the default course for the company was to leave some important value on the table. (I was working at an EA-adjacent company, and wanted to push it in a more Effective Altruism-y direction with higher rigor). There was nobody else I could turn to for help. I had to think through what "better epistemic rigor" actually meant and how to apply it in this situation.

Whether or not my rationality had atrophied in the past 3 years, I'm certain that for the first time in a long while, certain mental muscles *flexed* that I hadn't been using. Ultimately I don't know whether my ideas had a noteworthy effect on the company, but I do know that I felt more empowered and excited to improve my own rationality.

I realized that, in the NYC meetups, quicker-thinking people tended to say what they thought immediately when a question was asked, and this meant that most of the people in the meetup didn't get to practice thinking through complex questions. So I started asking people to wait for a while before answering - sometimes 5 minutes, sometimes just a few seconds.

"12 seconds" seems like a nice rule-of-thumb to avoid completely interrupting the flow of conversation, while still having some time to reflect, and make sure you're not just shouting out a cached thought. It's a non-standard number which is hopefully easier to remember.

(That said, a more nuanced alternative is "everyone takes a moment to think until they feel like they're hitting diminishing returns on thinking or it's not worth further halting the conversation, and then raising a finger to indicate that they're done")

Meta Point: Observation, Improvement and Sharing

The 12-second rule isn't the main point though - just one of many ways this community could do a better job of helping both newcomers and old-timers hone their thinking skills. "Rationality" is supposed to be our thing. I think we should all be on the lookout for opportunities to improve our collective ability to think clearly. 

I think specific conversational habits are helpful both for their concrete, immediate benefits, as well as an opportunity to remind everyone (newcomers and old-timers alike) that we're trying to actively improve in this area.

I have more thoughts on how to go about improving the meta-issues here, which I'm less confident about and will flesh out in future posts.

Downvotes temporarily disabled

16 Vaniver 01 December 2016 05:31PM

This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.

 

The best place to track changes to the codebase is the github LW issues page.

[Link] If we can't lie to others, we will lie to ourselves

15 paulfchristiano 26 November 2016 10:29PM

A Child's Petrov Day Speech

15 James_Miller 28 September 2016 02:27AM

30 years ago, the Cold War was raging on. If you don’t know what that is, it was the period from 1947 to 1991 where both the U.S. and Russia had large stockpiles of nuclear weapons and were threatening to use them on each other. The only thing that stopped them from doing so was the knowledge that the other side would have time to react. The U.S. and Russia both had surveillance systems to know if the other country had a nuke in the air headed for them.

On this day, September 26, in 1983, a man named Stanislav Petrov was on duty in the Russian surveillance room when the computer notified him that satellites had detected five nuclear missile launches from the U.S. He was told to pass this information on to his superiors, who would then launch a counter-strike.


He refused to notify anyone of the incident, suspecting it was just an error in the computer system.


No nukes ever hit Russian soil. Later, it was found that the ‘nukes’ were just light bouncing off of clouds which confused the satellite. Petrov was right, and likely saved all of humanity by stopping the outbreak of nuclear war. However, almost no one has heard of him.

We celebrate men like George Washington and Abraham Lincoln who win wars. These were great men, but the greater men, the men like Petrov who stopped these wars from ever happening - no one has heard of these men.


Let it be known, that September 26 is Petrov Day, in honor of the acts of a great man who saved the world, and whose name almost no one has heard.


My 11-year-old son wrote and then read this speech to his sixth-grade class.

What's the most annoying part of your life/job?

13 Liron 23 October 2016 03:37AM

Hi, I'm an entrepreneur looking for a startup idea.

In my experience, the reason most startups fail is that they never actually solve anyone's problem. So I'm cheating and starting out by identifying a specific person with a specific problem.

So I'm asking you, what's the most annoying part of your life/job? Also, how much would you pay for a solution?

[Link] Less costly signaling

12 paulfchristiano 22 November 2016 09:11PM

Neutralizing Physical Annoyances

12 SquirrelInHell 12 September 2016 04:36PM

Once in a while, I learn something about a seemingly unrelated topic - such as freediving - and I take away some trick that is well known and "obvious" in that topic, but is generally useful and NOT known by many people outside. Case in point, you can use equalization techniques from diving to remove pressure in your ears when you descend in a plane or a fast lift. I also give some other examples.

Ears

Reading about a few equalization techniques took me maybe 5 minutes, and after reading this passage once I was able to successfully use the "Frenzel Maneuver":

The technique is to close off the vocal cords, as though you are about to lift a heavy weight. The nostrils are pinched closed and an effort is made to make a 'k' or a 'guh' sound. By doing this you raise the back of the tongue and the 'Adam's Apple' will elevate. This turns the tongue into a piston, pushing air up.

(source: http://freedivingexplained.blogspot.com.mt/2008/03/basics-of-freediving-equalization.html)

Hiccups

A few years ago, I started regularly doing deep relaxations after yoga. At some point, I learned how to relax my throat in such a way that the air can freely escape from the stomach. Since then, whenever I start hiccuping, I relax my throat and the hiccups stop immediately in all cases. I am now 100% hiccup-free.

Stiff Shoulders

I've spent a few hours with a friend who does massage, and they taught me some basics. After that, it became natural for me to self-massage my shoulders after I do a lot of sitting work etc. I can't imagine living without this anymore.

Other?

If you know more, please share!

Recent AI control posts

11 paulfchristiano 29 November 2016 06:53PM


Over at Medium, I’m continuing to write about AI control; here’s a roundup from the last month.

Strategy

  • Prosaic AI control argues that AI control research should first consider the case where AI involves no “unknown unknowns.”
  • Handling destructive technology tries to explain the upside of AI control, if we live in a universe where we eventually need to build a singleton anyway.
  • Hard-core subproblems explains a concept I find helpful for organizing research.

Building blocks of ALBA

Terminology and concepts

Matching donation fundraisers can be harmfully dishonest.

11 Benquo 11 November 2016 09:05PM

Anna Salamon, executive director of CFAR (named with permission), recently wrote to me asking for my thoughts on fundraisers using matching donations. (Anna, together with co-writer Steve Rayhawk, has previously written on community norms that promote truth over falsehood.) My response made some general points that I wish were more widely understood:

  • Pitching matching donations as leverage (e.g. "double your impact") misrepresents the situation by overassigning credit for funds raised.
  • This sort of dishonesty isn't just bad for your soul, but can actually harm the larger world - not just by eroding trust, but by causing people to misallocate their charity budgets.
  • "Best practices" for a charity tend to promote this kind of dishonesty, because they're precisely those practices that work no matter what your charity is doing.
  • If your charity is impact-oriented - if you care about outcomes rather than institutional success - then you should be able to do substantially better than "best practices".

So I'm putting an edited version of my response here.

continue reading »

[Link] Crony Beliefs

11 ete 03 November 2016 08:54PM

*How* people shut down thought because of high-status respectable halos

11 NancyLebovitz 20 October 2016 02:09PM

https://srconstantin.wordpress.com/2016/10/20/ra/

A detailed look at the belief that high status social structures can be so much better than anything one can think of that there's no point in even trying to think about the details of what to do, and how debilitating this is.

Discussion of the essay

MIRI AMA plus updates

11 RobbBB 11 October 2016 11:52PM

MIRI is running an AMA on the Effective Altruism Forum tomorrow (Wednesday, Oct. 12): Ask MIRI Anything. Questions are welcome in the interim!

Nate also recently posted a more detailed version of our 2016 fundraising pitch to the EA Forum. One of the additions is about our first funding target:

We feel reasonably good about our chance of hitting target 1, but it isn't a sure thing; we'll probably need to see support from new donors in order to hit our target, to offset the fact that a few of our regular donors are giving less than usual this year.

The Why MIRI's Approach? section also touches on new topics that we haven't talked about in much detail in the past, but plan to write up some blog posts about in the future. In particular:

Loosely speaking, we can imagine the space of all smarter-than-human AI systems as an extremely wide and heterogeneous space, in which "alignable AI designs" is a small and narrow target (and "aligned AI designs" smaller and narrower still). I think that the most important thing a marginal alignment researcher can do today is help ensure that the first generally intelligent systems humans design are in the “alignable” region. I think that this is unlikely to happen unless researchers have a fairly principled understanding of how the systems they're developing reason, and how that reasoning connects to the intended objectives.

Most of our work is therefore aimed at seeding the field with ideas that may inspire more AI research in the vicinity of (what we expect to be) alignable AI designs. When the first general reasoning machines are developed, we want the developers to be sampling from a space of designs and techniques that are more understandable and reliable than what’s possible in AI today.

In other news, we've uploaded a new intro talk on our most recent result, "Logical Induction," that goes into more of the technical details than our previous talk.

See also Shtetl-Optimized and n-Category Café for recent discussions of the paper.

Which areas of rationality are underexplored? - Discussion Thread

10 casebash 01 December 2016 10:05PM

There seems to actually be real momentum behind this attempt at reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought that it might be worthwhile opening a thread where people can suggest how we can expand the scope of what people write about, in order for us to have sufficient content.

Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.

Using a Spreadsheet to Make Good Decisions: Five Examples

10 peter_hurford 28 November 2016 05:10PM

I've been told that LessWrong is coming back now, so I'm cross-posting this rationality post of interest from the Effective Altruism forum.

-

We all make decisions every day. Some of these decisions are pretty inconsequential, such as what to have for an afternoon snack. Some of these decisions are quite consequential, such as where to live or what to dedicate the next year of your life to. Finding a way to make these decisions better is important.

The folks at Charity Science Health and I have been using the same method to make many of our major decisions for the past four years -- everything from where to live to even deciding to create Charity Science Health. The method isn’t particularly novel, but we definitely think the method is quite underused.

Here it is, as a ten step process:

  1. Come up with a well-defined goal.

  2. Brainstorm many plausible solutions to achieve that goal.

  3. Create criteria through which you will evaluate those solutions.

  4. Create custom weights for the criteria.

  5. Quickly use intuition to prioritize the solutions on the criteria so far (e.g., high, medium, and low).

  6. Come up with research questions that would help you determine how well each solution fits the criteria.

  7. Use the research questions to do shallow research into the top ideas (you can review more ideas depending on how long the research takes per idea, how important the decision is, and/or how confident you are in your intuitions).

  8. Use research to rerate and rerank the solutions.

  9. Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable.

  10. Repeat steps 8 and 9 until sufficiently confident in a decision.
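
To make the scoring steps concrete, here is a minimal sketch of steps 3-5 and 8 (the option names, criteria, and weights below are invented for illustration, not taken from any of the decisions described in this post):

```python
# Weighted-criteria scoring: rate each option on each criterion,
# multiply by the criterion's weight, and rank by total score.

criteria_weights = {"cost_effectiveness": 3, "scalability": 2, "evidence": 1}

# Step 5: quick intuition-based ratings on a 0-10 scale.
# Step 8: replace these with research-informed ratings and re-rank.
ratings = {
    "option_a": {"cost_effectiveness": 8, "scalability": 4, "evidence": 7},
    "option_b": {"cost_effectiveness": 5, "scalability": 9, "evidence": 5},
}

def total_score(option_ratings: dict) -> int:
    """Weighted sum of the option's ratings across all criteria."""
    return sum(criteria_weights[c] * r for c, r in option_ratings.items())

for name in sorted(ratings, key=lambda n: -total_score(ratings[n])):
    print(name, total_score(ratings[name]))
# option_a 39
# option_b 38
```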

 

Which charity should I start?

The definitive example for this process was the Charity Entrepreneurship project, where our team decided which charity would be the best possible charity to create.

Come up with a well-defined goal: I want to start an effective global poverty charity, where effective is taken to mean a low cost per life saved comparable to current GiveWell top charities.

Brainstorm many plausible solutions to achieve that goal: For this, we decided to start by looking at the intervention level. Since there are thousands of potential interventions, we placed a lot of emphasis on plausibly high effectiveness, and chose to look at GiveWell’s priority programs plus a few that we thought were worthy additions.

Create criteria through which you will evaluate those solutions / create custom weights for the criteria: For this decision, we spent a full month of our six month project thinking through the criteria. We weighted criteria based on both importance and the expected variance that would occur between our options. We decided to strongly value cost-effectiveness, flexibility, and scalability. We moderately valued strength of evidence, metric focus, and indirect effects. We weakly valued logistical possibility and other factors.
 

Come up with research questions that would help you determine how well each solution fits the criteria: We came up with the following list of questions and research process.

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: Since this choice was important and we were pretty uninformed about the different interventions, we did shallow research into all of the choices. We then produced the following spreadsheet:

Afterwards, it was pretty easy to drop 22 out of the 30 possible choices and go with a top eight (the eight that ranked 7 or higher on our scale).

 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable / Repeat steps 8 and 9 until sufficiently confident in a decision: We then researched the top eight more deeply, with an eye toward turning them into concrete charity ideas rather than amorphous interventions. When re-ranking, we came up with a top five, and wrote up more detailed reports -- SMS immunization reminders, tobacco taxation, iron and folic acid fortification, conditional cash transfers, and a poverty research organization. A key aspect of this narrowing was also talking to relevant experts, which we wish we had done earlier in the process, as it could quickly eliminate unpromising options.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: As we researched further, it became clearer that SMS immunization reminders performed best on the criteria: highly cost-effective, with a high strength of evidence and easy testability. However, the other four finalists are also excellent opportunities, and we strongly invite other teams to invest in creating charities in those four areas.

 

Which condo should I buy?

Come up with a well-defined goal: I want to buy a condo that is (a) a good place to live and (b) a reasonable investment.
 

Brainstorm many plausible solutions to achieve that goal: For this, I searched around on Zillow and found several candidate properties.

Create criteria through which you will evaluate those solutions: For this decision, I looked at the purchasing cost of the condo, the HOA fee, whether or not the condo had parking, the property tax, how much I could expect to rent the condo out, whether or not the condo had a balcony, whether or not the condo had a dishwasher, how bright the space was, how open the space was, how large the kitchen was, and Zillow’s projection of future home value.
 

Create custom weights for the criteria: For this decision, I wanted to turn things roughly into a personal dollar value, where I could calculate the benefits minus the costs. The costs were the purchasing cost of the condo turned into a monthly mortgage payment, plus the annual HOA fee, plus the property tax. The benefits were the expected annual rent plus half of Zillow’s expectation for how much the property would increase in value over the next year, to be a touch conservative. I also added some more arbitrary bonuses: +$500 if there was a dishwasher, +$500 if there was a balcony, and up to +$1000 depending on how much I liked the size of the kitchen. I also added +$3600 if there was a parking space, since the space could be rented out to others, as I did not have a car. Solutions would be graded on a benefits-minus-costs model.

Quickly use intuition to prioritize the solutions on the criteria so far: Ranking the properties was pretty easy since the criteria were so concrete: I could skip straight to plugging in numbers from the property data and the photos.

 

Property | Mortgage | Annual fees | Annual increase | Annual rent | Bonuses | Total
A | $7452 | $5244 | $2864 | $17400 | +$2000 | +$9568
B | $8760 | $4680 | $1216 | $19200 | +$1000 | +$7976
C | $9420 | $4488 | $1981 | $19200 | +$1200 | +$8473
D | $8100 | $8400 | $2500 | $19200 | +$4100 | +$9300
E | $6900 | $4600 | $1510 | $15000 | +$3600 | +$8610

Come up with research questions that would help you determine how well each solution fits the criteria: For this, the research was just to go visit the property and confirm the assessments.

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: This was pretty easy; not much changed when I went to actually investigate.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: For this, I just ended up purchasing the highest ranking condo, which was a mostly straightforward process. Property A wins! 
 
This is a good example of how easy it is to re-adapt the process and how you can weight criteria in nonlinear ways.
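
For the curious, the condo scoring above is simple enough to reproduce in a few lines (the figures are copied from the table; the code and names are my own sketch, not part of the original post):

```python
# Benefits-minus-costs scoring for the condo decision. "Increase" is the
# already-halved expected appreciation from the table above.

condos = {
    # property: (mortgage, annual fees, increase, annual rent, bonuses)
    "A": (7452, 5244, 2864, 17400, 2000),
    "B": (8760, 4680, 1216, 19200, 1000),
    "C": (9420, 4488, 1981, 19200, 1200),
    "D": (8100, 8400, 2500, 19200, 4100),
    "E": (6900, 4600, 1510, 15000, 3600),
}

def net_value(mortgage: int, fees: int, increase: int, rent: int, bonuses: int) -> int:
    """Annual benefits (rent + appreciation + bonuses) minus annual costs."""
    return rent + increase + bonuses - mortgage - fees

for prop, figures in condos.items():
    print(prop, net_value(*figures))
# A 9568, B 7976, C 8473, D 9300, E 8610 -- Property A wins, as above.
```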
 

How should we fundraise? 

Come up with a well-defined goal: I want to find the fundraising method with the best return on investment. 

Brainstorm many plausible solutions to achieve that goal: For this, our Charity Science Outreach team conducted a literature review of fundraising methods and asked experts, creating a list of 25 different fundraising ideas.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: The criteria we used here were pretty similar to the criteria we later used for picking a charity -- we valued ease of testing, the estimated return on investment, the strength of the evidence, and the scalability potential roughly equally.

Come up with research questions that would help you determine how well each solution fits the criteria: We created this rubric of questions:

  • What research says on it (e.g. expected fundraising ratios, success rates, necessary pre-requisites)

  • What are some relevant comparisons to similar fundraising approaches? How well do they work?

  • What types/sizes of organizations is this type of fundraising best for?

  • How common is this type of fundraising, in nonprofits generally and in similar nonprofits (global health)?

  • How would one run a minimum-cost experiment in this area?

  • What is the expected time, cost, and outcome for the experiment?

  • What is the expected value?

  • What is the expected time cost to get the best time-per-$ ratio (e.g., would we have to have 100 staff or a huge budget to make this effective)?

  • What further research should be done if we were going to run this approach?

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: After reviewing, we were able to narrow the 25 down to eight finalists: legacy fundraising, online ads, door-to-door, niche marketing, events, networking, peer-to-peer fundraising, and grant writing.
 
Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We did MVPs of all eight of the top ideas and eventually decided that three of the ideas were worth pursuing full-time: online ads, peer-to-peer fundraising, and legacy fundraising.
 
 

Who should we hire? 

Come up with a well-defined goal: I want to hire the employee who will contribute the most to our organization. 

Brainstorm many plausible solutions to achieve that goal: For this, we had the applicants who applied to our job ad.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: We thought broadly about what good qualities a hire would have, and decided to heavily weight values fit and prior experience with the job, and then roughly equally value autonomy, communication skills, creative problem solving, the ability to break down tasks, and the ability to learn new skills.
 
Quickly use intuition to prioritize the solutions on the criteria so far: We started by ranking hires based on their resumes and written applications. (Note that to protect the anonymity of our applicants, the following information is fictional.)
 

Person | Autonomy | Communication | Creativity | Break down | Learn new skills | Values fit | Prior experience
A | High | Medium | Low | Low | High | Medium | Low
B | Medium | Medium | Medium | Medium | Medium | Medium | Low
C | High | Medium | Medium | Low | High | Low | Medium
D | Medium | Medium | Medium | High | Medium | Low | High
E | Low | Medium | High | Medium | Medium | Low | Medium

Come up with research questions that would help you determine how well each solution fits the criteria: The initial written application was already tailored toward this, but we designed a Skype interview to further rank our applicants. 

Use the research questions to do shallow research into the top ideas, use research to rerate and rerank the solutions: After our Skype interviews, we re-ranked all the applicants. 

 

Person | Autonomy | Communication | Creativity | Break down | Learn new skills | Values fit | Prior experience
A | High | High | Low | Low | High | High | Low
B | Medium | Medium | Medium | Medium | Low | Low | Low
C | High | Medium | Low | High | High | Medium | Medium
D | Medium | Low | Medium | High | Medium | Low | High
E | Low | Medium | High | Medium | Medium | Low | Medium

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: While “MVP testing” may not be a polite term to extend to people, we do a form of MVP testing by only offering our applicants one-month trials before converting to a permanent hire.

 

Which television show should we watch? 

Come up with a well-defined goal: Our friend group wants to watch a new TV show together that we’d enjoy the most. 

Brainstorm many plausible solutions to achieve that goal: We each submitted one TV show, which created our solution pool.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, the criteria was the enjoyment value of each participant, weighted equally. 

Come up with research questions that would help you determine how well each solution fits the criteria: For this, we watched the first episode of each television show and then all ranked each one. 

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: We then watched the winning television show, which was Black Mirror. Fun! 

 

Which statistics course should I take? 

Come up with a well-defined goal: I want to learn as much statistics as fast as possible, without having the time to invest in taking every course. 

Brainstorm many plausible solutions to achieve that goal: For this, we searched around on the internet and found ten online classes and three books.

Create criteria through which you will evaluate those solutions / Create custom weights for the criteria: For this decision, we heavily weighted breadth and time cost, moderately weighted depth and monetary cost, and weakly weighted how interesting the course was and whether the course provided a tangible credential that could go on a resume.
 
Quickly use intuition to prioritize the solutions on the criteria so far: By looking at the syllabi, table of contents, and reading around online, we came up with some initial rankings:
 
 

Name | Cost | Estimated hours | Depth score | Breadth score | How interesting | Credential level
Master Statistics with R | $465 | 150 | 10 | 9 | 3 | 5
Probability and Statistics, Statistical Learning, Statistical Reasoning | $0 | 150 | 8 | 10 | 4 | 2
Critically Evaluate Social Science Research and Analyze Results Using R | $320 | 144 | 6 | 6 | 5 | 4
http://online.stanford.edu/Statistics_Medicine_CME_Summer_15 | $0 | 90 | 5 | 2 | 7 | 0
Berkeley stats 20 and 21 | $0 | 60 | 6 | 5 | 6 | 0
Statistical Reasoning for Public Health | $0 | 40 | 5 | 2 | 4 | 2
Khan stats | $0 | 20 | 1 | 4 | 6 | 0
Introduction to R for Data Science | $0 | 8 | 3 | 1 | 5 | 1
Against All Odds | $0 | 5 | 1 | 2 | 10 | 0
Hans Rosling doc on stats | $0 | 1 | 1 | 1 | 11 | 0
Berkeley Math | $0 | 60 | 6 | 5 | 6 | 0
OpenIntro Statistics | $0 | 25 | 5 | 5 | 2 | 0
Discovering Statistics Using R by Andy Field | $25 | 50 | 7 | 3 | 3 | 0
Naked Statistics by Charles Wheelan | $17 | 20 | 2 | 4 | 8 | 0

 

Come up with research questions that would help you determine how well each solution fits the criteria: For this, the best we could do was sample a little of each of our top class choices, while avoiding purchasing the expensive ones unless the free ones did not meet our criteria.

Pick the top ideas worth testing and do deeper research or MVP testing, as is applicable: Only the first three felt deep enough. Only one of them was free, but we were luckily able to find a way to audit the two expensive classes. After a review of all three, we ended up going with “Master Statistics with R”.
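To make the criteria-weighting step concrete, here is a minimal Python sketch of how per-criterion scores and weights can be combined into a single ranking. The weights and 0-10 scores below are illustrative stand-ins, not the exact values we used:

# Illustrative weights: bigger means the criterion matters more.
weights = {"breadth": 3.0, "time": 3.0, "depth": 2.0, "cost": 2.0,
           "interest": 1.0, "credential": 1.0}

# Illustrative 0-10 scores, with time and cost inverted so higher is better.
options = {
    "Master Statistics with R": {"breadth": 9, "time": 2, "depth": 10,
                                 "cost": 3, "interest": 3, "credential": 5},
    "Khan stats": {"breadth": 4, "time": 8, "depth": 1,
                   "cost": 10, "interest": 6, "credential": 0},
}

def weighted_score(scores):
    return sum(weights[c] * scores[c] for c in weights)

for name in sorted(options, key=lambda n: -weighted_score(options[n])):
    print(name, weighted_score(options[name]))

Re-running something like this after each research pass is all the rerate-and-rerank step amounts to.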

Sample means, how do they work?

10 Benquo 20 November 2016 09:04PM

You know how people make public health decisions about food fortification, and medical decisions about taking supplements, based on things like the Recommended Daily Allowance? Well, there's an article in Nutrients titled A Statistical Error in the Estimation of the Recommended Dietary Allowance for Vitamin D. This paper says the following about the info used to establish the US recommended daily allowance for vitamin D:

The correct interpretation of the lower prediction limit is that 97.5% of study averages are predicted to have values exceeding this limit. This is essentially different from the IOM’s conclusion that 97.5% of individuals will have values exceeding the lower prediction limit.

The whole point of looking at averages is that individuals vary a lot due to a bunch of random stuff, but if you take an average of a lot of individuals, that cancels out most of the noise, so the average varies hardly at all. How much variation there is from individual to individual determines the population variance. How much variation you'd expect in your average due to statistical noise from sample to sample determines what we call the variance of the sample mean (its square root is the standard error).

When you look at frequentist statistical confidence intervals, they are generally expressing how big the ordinary range of variation is for your average: a 90% confidence interval, for instance, is constructed so that across repeated samples, 90% of such intervals would contain the "true" average. This is relevant for answering questions like, "does this trend look a lot bigger than you'd expect from random chance?" The whole point of looking at large samples is that the errors have a chance to cancel out, leading to a very small random variation in the mean relative to the variation in the population. This allows us to be confident that even fairly small differences in the mean are unlikely to be due to random noise.
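Here is a minimal Python simulation of that point, with a made-up population (the numbers are purely illustrative): the spread of individuals stays large, while the spread of sample averages shrinks with the square root of the sample size.

import random
import statistics

# Made-up population: individuals vary a lot (stdev about 20).
population = [random.gauss(50, 20) for _ in range(100000)]

# Draw many samples of n = 100 and record each sample's average.
sample_means = [statistics.mean(random.sample(population, 100))
                for _ in range(1000)]

print(statistics.stdev(population))    # ~20: individual-to-individual spread
print(statistics.stdev(sample_means))  # ~2: spread of the average, i.e. 20 / sqrt(100)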

The error here was taking the statistical properties of the mean and assuming that they applied to the population. In particular, the IOM looked at the dose-response curve for vitamin D and came up with a distribution for the average response to vitamin D dosage. Based on their data, if you did another study like theirs on new data, 97.5% of the time it would find that 600 IU of vitamin D is enough for the average person.

They concluded from this that 97.5% of people get enough vitamin D from 600 IU.

This is not an arcane detail. This is confusing the attributes of a population with the attributes of an average. This is bad. This is real, real bad. In any sane world, this is mathematical statistics 101 stuff. I can imagine that someone who's heard about a margin of error a lot doesn't understand this stuff, but anyone who has to actually use the term should understand this.

Political polling is a simple example. Let's say that a poll shows 48% of Americans voting for the Republican and 52% for the Democrat, with a 5% margin of error. This means that 95% of polls like this one are expected to produce an average within 5 percentage points of the true average. It does not mean that 95% of individual Americans have somewhere between a 43% and 53% chance of voting for the Republican. Most of them have almost certainly decided on one candidate or the other. The average does not behave the same as the population. That's how fundamental this error is – it's like saying that all voters are undecided because the population is split.
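That 5% margin of error is just this sample-mean math at work. A minimal sketch, using the standard 95% approximation for a polled proportion under simple random sampling:

import math

# Standard 95% margin of error for a polled proportion,
# assuming simple random sampling.
def margin_of_error(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(margin_of_error(0.5, 385))  # ~0.05: a 5-point margin takes roughly 385 respondents

The margin describes uncertainty about the average, and it shrinks as n grows; the spread of opinions among individual voters does not.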

Remember the famous joke about how the average family has two and a half kids? It's a joke because no one actually has two and a half kids. That's how fundamental this error is – it's like saying that there are people who have an extra half child hopping around. And this error caused actual harm:

The public health and clinical implications of the miscalculated RDA for vitamin D are serious. With the current recommendation of 600 IU, bone health objectives and disease and injury prevention targets will not be met. This became apparent in two studies conducted in Canada where, because of the Northern latitude, cutaneous vitamin D synthesis is limited and where diets contribute an estimated 232 IU of vitamin D per day. One study estimated that despite Vitamin D supplementation with 400 IU or more (including dietary intake that is a total intake of 632 IU or more) 10% of participants had values of less than 50 nmol/L. The second study reported serum 25(OH)D levels of less than 50 nmol/L for 15% of participants who reported supplementation with vitamin D. If the RDA had been adequate, these percentages should not have exceeded 2.5%. Herewith these studies show that the current public health target is not being met.

Actual people probably got hurt because of this. Some likely died.

This is also an example of scientific journals serving their intended purpose of pointing out errors, but it should never have gotten this far. This is a send-a-coal-burning-engine-under-the-control-of-a-drunk-engineer-into-the-Taggart-tunnel-while-the-ventilation-and-signals-are-broken level of negligence. I think of the people using numbers as the reliable ones, but that's not actually enough – you have to think with them, you have to be trying to get the right answer, you have to understand what the numbers mean.

I can imagine making this mistake in school, when it's low stakes. I can imagine making this mistake on my blog. I can imagine making this mistake at work if I'm far behind on sleep and on a very tight deadline. But if I were setting public health policy? If I were setting the official RDA? I'd try to make sure I was right. And I'd ask the best quantitative thinkers I know to check my numbers.

The article was published in 2014, and as far as I can tell, as of the publication of this blog post, the RDA is unchanged.

(Cross-posted from my personal blog.)

[Link] Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"

10 Jacobian 28 September 2016 04:43PM

2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)

10 ingres 10 September 2016 03:51AM

Politics

The LessWrong survey has a very involved section dedicated to politics. In previous analysis the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but what beliefs are associated with a certain affiliation. The charts below summarize most of the results.

Political Opinions By Political Affiliation

(Charts omitted.)

Miscellaneous Politics

There were also some other questions in this section which aren't covered by the above charts.

PoliticalInterest

On a scale from 1 (not interested at all) to 5 (extremely interested), how would you describe your level of interest in politics?

1: 67 (2.182%)

2: 257 (8.371%)

3: 461 (15.016%)

4: 595 (19.381%)

5: 312 (10.163%)

Voting

Did you vote in your country's last major national election? (LW Turnout Versus General Election Turnout By Country)
Group Turnout
LessWrong 68.9%
Australia 91%
Brazil 78.90%
Britain 66.4%
Canada 68.3%
Finland 70.1%
France 79.48%
Germany 71.5%
India 66.3%
Israel 72%
New Zealand 77.90%
Russia 65.25%
United States 54.9%
Numbers taken from Wikipedia, accurate as of the last general election in each country listed at time of writing.

AmericanParties

If you are an American, what party are you registered with?

Democratic Party: 358 (24.5%)

Republican Party: 72 (4.9%)

Libertarian Party: 26 (1.8%)

Other third party: 16 (1.1%)

Not registered for a party: 451 (30.8%)

(option for non-Americans who want an option): 541 (37.0%)

Calibration And Probability Questions

Calibration Questions

I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.

All my calibration questions were meant to satisfy a few essential properties:

  1. They should be 'self contained', i.e., something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
  2. They should, at least to a certain extent, be Fermi Estimable.
  3. They should progressively scale in difficulty so you can see whether somebody understands basic probability or not. (eg. In an 'or' question do they put a probability of less than 50% of being right?)

At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.

Probability Questions

Question Mean Median Mode Stdev
Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? 49.821 50.0 50.0 3.033
What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? 44.599 50.0 50.0 29.193
What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? 75.727 90.0 99.0 31.893
...in the Milky Way galaxy? 45.966 50.0 10.0 38.395
What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? 13.575 1.0 1.0 27.576
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? 15.474 1.0 1.0 27.891
What is the probability that any of humankind's revealed religions is more or less correct? 10.624 0.5 1.0 26.257
What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? 21.225 10.0 5.0 26.782
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? 25.263 10.0 1.0 30.510
What is the probability that our universe is a simulation? 25.256 10.0 50.0 28.404
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? 83.307 90.0 90.0 23.167
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? 76.310 80.0 80.0 22.933

 

The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.

Futurology

This section got a bit of a facelift this year. Including new cryonics questions, genetic engineering, and technological unemployment in addition to the previous years.

Cryonics

Cryonics

Are you signed up for cryonics?

Yes - signed up or just finishing up paperwork: 48 (2.9%)

No - would like to sign up but unavailable in my area: 104 (6.3%)

No - would like to sign up but haven't gotten around to it: 180 (10.9%)

No - would like to sign up but can't afford it: 229 (13.8%)

No - still considering it: 557 (33.7%)

No - and do not want to sign up for cryonics: 468 (28.3%)

Never thought about it / don't understand: 68 (4.1%)

CryonicsNow

Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?

Yes: 106 (6.6%)

Maybe: 1041 (64.4%)

No: 470 (29.1%)

Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.

sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";

14

sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");

34

CryonicsPossibility

Do you think cryonics works in principle?

Yes: 802 (49.3%)

Maybe: 701 (43.1%)

No: 125 (7.7%)

LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.

The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.

Singularity

SingularityYear

By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.

Mean: 8.110300081581755e+16

Median: 2080.0

Mode: 2100.0

Stdev: 2.847858859055733e+18

I didn't bother to filter out the silly answers for this.

Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
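To see why the median is the thing to look at here, consider a toy example (the answers below are made up): a single absurd answer wrecks the mean but barely moves the median.

import statistics

answers = [2030, 2045, 2060, 2080, 2100, 10**19]
print(statistics.mean(answers))    # dominated by the one silly outlier
print(statistics.median(answers))  # 2070.0, a sensible summary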

Genetic Engineering

ModifyOffspring

Would you ever consider having your child genetically modified for any reason?

Yes: 1552 (95.921%)

No: 66 (4.079%)

Well that's fairly overwhelming.

GeneticTreament

Would you be willing to have your child genetically modified to prevent them from getting an inheritable disease?

Yes: 1387 (85.5%)

Depends on the disease: 207 (12.8%)

No: 28 (1.7%)

I find it amusing how the strict "No" group shrinks considerably after this question.

GeneticImprovement

Would you be willing to have your child genetically modified for improvement purposes? (eg. To heighten their intelligence or reduce their risk of schizophrenia.)

Yes : 0 (0.0%)

Maybe a little: 176 (10.9%)

Depends on the strength of the improvements: 262 (16.2%)

No: 84 (5.2%)

Yes, I know 'yes' is bugged; I don't know what causes this bug, and despite my best efforts I couldn't track it down. There is also an issue here where 'reduce your risk of schizophrenia' is offered as an example, which might confuse people, but the actual science of things cuts closer to that than it does to a clean separation between disease risk and 'improvement'.

 

This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.

sqlite> select count(*) from data where GeneticImprovement="Yes";

1100

>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217

67.8% are willing to genetically engineer their children for improvements.

GeneticCosmetic

Would you be willing to have your child genetically modified for cosmetic reasons? (eg. To make them taller or have a certain eye color.)

Yes: 500 (31.0%)

Maybe a little: 381 (23.6%)

Depends on the strength of the improvements: 277 (17.2%)

No: 455 (28.2%)

These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.


GeneticOpinionD

What's your overall opinion of other people genetically modifying their children for disease prevention purposes?

Positive: 1177 (71.7%)

Mostly Positive: 311 (19.0%)

No strong opinion: 112 (6.8%)

Mostly Negative: 29 (1.8%)

Negative: 12 (0.7%)

GeneticOpinionI

What's your overall opinion of other people genetically modifying their children for improvement purposes?

Positive: 737 (44.9%)

Mostly Positive: 482 (29.4%)

No strong opinion: 273 (16.6%)

Mostly Negative: 111 (6.8%)

Negative: 38 (2.3%)

GeneticOpinionC

What's your overall opinion of other people genetically modifying their children for cosmetic reasons?

Positive: 291 (17.7%)

Mostly Positive: 290 (17.7%)

No strong opinion: 576 (35.1%)

Mostly Negative: 328 (20.0%)

Negative: 157 (9.6%)

All three of these seem largely consistent with people's personal preferences about modification. Were I inclined, I could do a deeper analysis that actually takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.

Technological Unemployment

LudditeFallacy

Do you think the Luddite's Fallacy is an actual fallacy?

Yes: 443 (30.936%)

No: 989 (69.064%)

We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.

UnemploymentYear

By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.

Mean: 2102.9713740458014

Median: 2050.0

Mode: 2050.0

Stdev: 1180.2342850727339

Question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it.

Interesting question that would be fun to take a look at in comparison to the estimates for the singularity.

EndOfWork

Do you think the "end of work" would be a good thing?

Yes: 1238 (81.287%)

No: 285 (18.713%)

Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.

EndOfWorkConcerns

If machines end all or almost all employment, what are your biggest worries? Pick two.

Question Count Percent
People will just idle about in destructive ways 513 16.71%
People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst 543 17.687%
The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty 1066 34.723%
The machines won't need us, and we'll starve to death or be otherwise liquidated 416 13.55%
Question is flawed because it demanded the user 'pick two' instead of up to two.

The plurality of worries are about elites who refuse to share their wealth.

Existential Risk

XRiskType

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?

Nuclear war: +4.800% 326 (20.6%)

Asteroid strike: -0.200% 64 (4.1%)

Unfriendly AI: +1.000% 271 (17.2%)

Nanotech / grey goo: -2.000% 18 (1.1%)

Pandemic (natural): +0.100% 120 (7.6%)

Pandemic (bioengineered): +1.900% 355 (22.5%)

Environmental collapse (including global warming): +1.500% 252 (16.0%)

Economic / political collapse: -1.400% 136 (8.6%)

Other: 35 (2.217%)

Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.

Charity And Effective Altruism

Charitable Giving

Income

What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.

Sum: 66054140.47384

Mean: 64569.052271593355

Median: 40000.0

Mode: 30000.0

Stdev: 107297.53606321265

IncomeCharityPortion

How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000

Sum: 2389900.6530000004

Mean: 2914.5129914634144

Median: 353.0

Mode: 100.0

Stdev: 9471.962766896671

XriskCharity

How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?

Sum: 169300.89

Mean: 1991.7751764705883

Median: 200.0

Mode: 100.0

Stdev: 9219.941506342007

CharityDonations

How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.

Question Sum Mean Median Mode Stdev
Against Malaria Foundation 483935.027 1905.256 300.0 None 7216.020
Schistosomiasis Control Initiative 47908.0 840.491 200.0 1000.0 1618.785
Deworm the World Initiative 28820.0 565.098 150.0 500.0 1432.712
GiveDirectly 154410.177 1429.723 450.0 50.0 3472.082
Any kind of animal rights charity 83130.47 1093.821 154.235 500.0 2313.493
Any kind of bug rights charity 1083.0 270.75 157.5 None 353.396
Machine Intelligence Research Institute 141792.5 1417.925 100.0 100.0 5370.485
Any charity combating nuclear existential risk 491.0 81.833 75.0 100.0 68.060
Any charity combating global warming 13012.0 245.509 100.0 10.0 365.542
Center For Applied Rationality 127101.0 3177.525 150.0 100.0 12969.096
Strategies for Engineered Negligible Senescence Research Foundation 9429.0 554.647 100.0 20.0 1156.431
Wikipedia 12765.5 53.189 20.0 10.0 126.444
Internet Archive 2975.04 80.406 30.0 50.0 173.791
Any campaign for political office 38443.99 366.133 50.0 50.0 1374.305
Other 564890.46 1661.442 200.0 100.0 4670.805
"Bug Rights" charity was supposed to be a troll fakeout but apparently...

This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.

Effective Altruism

Vegetarian

Do you follow any dietary restrictions related to animal products?

Yes, I am vegan: 54 (3.4%)

Yes, I am vegetarian: 158 (10.0%)

Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)

No: 996 (62.9%)

EAKnowledge

Do you know what Effective Altruism is?

Yes: 1562 (89.3%)

No but I've heard of it: 114 (6.5%)

No: 74 (4.2%)

EAIdentity

Do you self-identify as an Effective Altruist?

Yes: 665 (39.233%)

No: 1030 (60.767%)

The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine if Effective Altruism's membership actually went up or not but if we take the numbers at face value it experienced an 11.13% increase in membership.

EACommunity

Do you participate in the Effective Altruism community?

Yes: 314 (18.427%)

No: 1390 (81.573%)

Same issue as above; taking the numbers at face value, community participation went up by 5.727%.

EADonations

Has Effective Altruism caused you to make donations you otherwise wouldn't?

Yes: 666 (39.269%)

No: 1030 (60.731%)

Wowza!

Effective Altruist Anxiety

EAAnxiety

Have you ever had any kind of moral anxiety over Effective Altruism?

Yes: 501 (29.6%)

Yes but only because I worry about everything: 184 (10.9%)

No: 1008 (59.5%)


There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.

It certainly appears to be. But is moral anxiety effective? Let's look:

Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574

Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807

Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913

Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312

It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?

Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
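As a sanity check, these three figures hang together under Bayes' theorem; a minimal sketch using only the quoted survey values:

# P(EA Anxiety | Effective Altruist), via Bayes' theorem on the figures above.
p_ea = 0.3940652818991098    # P(Effective Altruist)
p_anx = 0.29554896142433235  # P(EA Anxiety)
p_ea_given_anx = 0.5         # P(Effective Altruist | EA Anxiety)

print(p_ea_given_anx * p_anx / p_ea)  # ~0.375, versus the 0.296 base rate of anxiety

So EAs report anxiety somewhat more often than respondents in general, which is consistent with the weak "maybe" above.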

Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.

EAOpinion

What's your overall opinion of Effective Altruism?

Positive: 809 (47.6%)

Mostly Positive: 535 (31.5%)

No strong opinion: 258 (15.2%)

Mostly Negative: 75 (4.4%)

Negative: 24 (1.4%)

EA appears to be doing a pretty good job of getting people to like them.

Interesting Tables

Charity Donations By Political Affilation
Affiliation Income Charity Contributions % Income Donated To Charity Total Survey Charity % Sample Size
Anarchist 1677900.0 72386.0 4.314% 3.004% 50
Communist 298700.0 19190.0 6.425% 0.796% 13
Conservative 1963000.04 62945.04 3.207% 2.612% 38
Futarchist 1497494.1099999999 166254.0 11.102% 6.899% 31
Left-Libertarian 9681635.613839999 416084.0 4.298% 17.266% 245
Libertarian 11698523.0 214101.0 1.83% 8.885% 190
Moderate 3225475.0 90518.0 2.806% 3.756% 67
Neoreactionary 1383976.0 30890.0 2.232% 1.282% 28
Objectivist 399000.0 1310.0 0.328% 0.054% 10
Other 3150618.0 85272.0 2.707% 3.539% 132
Pragmatist 5087007.609999999 266836.0 5.245% 11.073% 131
Progressive 8455500.440000001 368742.78 4.361% 15.302% 217
Social Democrat 8000266.54 218052.5 2.726% 9.049% 237
Socialist 2621693.66 78484.0 2.994% 3.257% 126


Number Of Effective Altruists In The Diaspora Communities
Community Count % In Community Sample Size
LessWrong 136 38.418% 354
LessWrong Meetups 109 50.463% 216
LessWrong Facebook Group 83 48.256% 172
LessWrong Slack 22 39.286% 56
SlateStarCodex 343 40.98% 837
Rationalist Tumblr 175 49.716% 352
Rationalist Facebook 89 58.94% 151
Rationalist Twitter 24 40.0% 60
Effective Altruism Hub 86 86.869% 99
Good Judgement(TM) Open 23 74.194% 31
PredictionBook 31 51.667% 60
Hacker News 91 35.968% 253
#lesswrong on freenode 19 24.675% 77
#slatestarcodex on freenode 9 24.324% 37
#chapelperilous on freenode 2 18.182% 11
/r/rational 117 42.545% 275
/r/HPMOR 110 47.414% 232
/r/SlateStarCodex 93 37.959% 245
One or more private 'rationalist' groups 91 47.15% 193


Effective Altruist Donations By Political Affiliation
Affiliation EA Income EA Charity Sample Size
Anarchist 761000.0 57500.0 18
Futarchist 559850.0 114830.0 15
Left-Libertarian 5332856.0 361975.0 112
Libertarian 2725390.0 114732.0 53
Moderate 583247.0 56495.0 22
Other 1428978.0 69950.0 49
Pragmatist 1442211.0 43780.0 43
Progressive 4004097.0 304337.78 107
Social Democrat 3423487.45 149199.0 93
Socialist 678360.0 34751.0 41

[Link] Expert Prediction Of Experiments

9 Yvain 29 November 2016 02:47AM

Rationality Heuristic for Bias Detection: Updating Towards the Net Weight of Evidence

9 gwern 17 November 2016 02:51AM

Bias tests look for violations of basic universal properties of rational belief such as subadditivity of probabilities or anchoring on randomly-generated numbers. I propose a new one for the temporal consistency of beliefs: agents who believe that the net evidence for a claim c from t1 to t2 is positive or negative must then satisfy the inequalities that P(c, t1)<P(c, t2) & P(c, t1)>P(c, t2), respectively. A failure to update in the direction of the believed net evidence indicates that nonrational reasons are influencing the belief in c; the larger the net evidence without directional updates, the more that nonrational reasons are influencing c. Extended to a population level, this suggests that a heuristic measurement of the nonrational grounds for belief can be conducted using long-term public opinion surveys of important issues combined with contemporary surveys of estimated net evidence since the start of the opinion surveys to compare historical shifts in public opinion on issues with the net evidence on those issues.
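A minimal sketch of the proposed check in Python (the function and its example inputs are my own illustration, not code from the post):

def directionally_consistent(p_t1, p_t2, net_evidence_sign):
    # net_evidence_sign: +1 if the believed net evidence from t1 to t2
    # is positive, -1 if it is negative.
    if net_evidence_sign > 0:
        return p_t2 > p_t1
    return p_t2 < p_t1

# Someone who believes the net evidence for a claim has been positive,
# yet now assigns it a lower probability, fails the test:
print(directionally_consistent(0.6, 0.5, +1))  # False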

continue reading »

Yudkowsky vs Trump: the nuclear showdown.

9 MrMind 11 November 2016 11:30AM

Sorry for the slightly clickbait-y title.

Some commenters have expressed, in the last open thread, their disappointment that figureheads from or near the rationality sphere seemed to have lost their cool when it came to this US election: when they were supposed to be calm and level-headed, they instead campaigned as if Trump were going to be the Basilisk incarnate.

I've not followed many commenters, mainly Scott Alexander and Eliezer Yudkowsky, and they both endorsed Clinton. I'll try to explain what their arguments were, briefly but as faithfully as possible. I'd like to know if you consider them mindkilled and why.

Please notice: I would like this to be a comment on methodology, about if their arguments were sound given what they knew and believed. I most definitely do not want this to decay in a lamentation about the results, or insults to the obviously stupid side, etc.

Yudkowsky made two arguments against Trump: level B incompetence and high variance. Since the second is also more or less the same as Scott's, I'll just go with those.

Level B incompetence

Eliezer attended a pretty serious and wide-ranging diplomatic simulation game, which made him appreciate how difficult it is just to maintain a global equilibrium between countries and avoid nuclear annihilation. He says that there are three levels in politics:

- level 0, where everything that the media report and the politicians say is taken at face value: every drama is true, every problem is important and every cry of outrage deserves consideration;

- level A, where you understand that politics is as much about theatre and emotions as it is about policies: at this level players operate like in pro wrestling, creating drama and conflict to steer the more gullible viewers in the preferred direction; at this level cynicism is high and almost every conflict is a farce and probably staged.

But the buck doesn't stop there. As the diplomacy simulation taught him, there's also:

- level B, where everything becomes serious and important again. At this level, people work very hard at maintaining the status quo (outside of which lies mankind's extinction); diplomatic relations and subtle international equilibria shield the world from much worse outcomes. Faux pas at this level have in the past resulted in wars, genocides and general widespread badness.

In August, fifty Republican security advisors signed a letter condemning Trump for his positions on foreign policy: these are, Yudkowsky warned us, exactly those level B players, and they are telling us that Trump is an ill-advised choice.
Trump might be a fantastic level A player, but he is an incompetent level B player, and this might very well turn into disaster.

High variance

The second argument is a more general version of the first: if you look at a normal distribution, it's easy to imagine only two possibilities: you can either do worse than the average, or better. But in a high-dimensional world, things are much more complicated. The status quo is fragile (see the first argument), and it is not surrounded by an equal number of good and bad alternatives. Most substantial deviations from the equilibrium are disasters, and if you put a high-variance candidate, someone whose main selling point is subverting the status quo, in charge, then with overwhelming probability you're headed off a cliff.
People who voted for Trump are unrealistically optimistic, thinking that civilization is robust, that the current state is bad, and that big variations can help us get away from a bad equilibrium.

[Link] Rebuttal piece by Stuart Russell and FHI Research Associate Allan Dafoe: "Yes, the experts are worried about the existential risk of artificial intelligence."

9 crmflynn 03 November 2016 05:54PM

Astrobiology IV: Photosynthesis and energy

9 CellBioGuy 17 October 2016 12:30AM

Originally I sat down to write about the large-scale history of Earth, and line up the big developments that our biosphere has undergone in the last 4 billion years.  But after writing about the reason that Earth is unique in our solar system (that is, photosynthesis being an option here), I guess I needed to explore photosynthesis and other forms of metabolism on Earth in a little more detail and before I knew it I’d written more than 3000 words about it.  So, here we are, taking a deep dive into photosynthesis and energy metabolism, and trying to determine if the origin of photosynthesis is a rare event or likely anywhere you get a biosphere with light falling on it.  Warning:  gets a little technical.

https://thegreatatuin.wordpress.com/2016/10/17/energy-metabolism-and-photosynthesis/

In short, I think it’s clear from the fact that phototrophy, using light for energy, has multiple independent origins that it is likely to show up anywhere there is light and life.  I suspect, but cannot rigorously prove, that even though photosynthesis of biomass emerged only once, it was an early development in life on Earth, arising very near the root of the Bacterial tree, and simply produced a very strong first-mover advantage that crowded out secondary origins; it would probably also show up wherever there is life and light.  As for oxygen-producing photosynthesis, its origin from other, more mundane forms of photosynthesis is still being studied.  It required a strange chaining together of multiple modes of photosynthesis to make it work, and it only ever happened once as well.  Its time of emergence, early or late, is pretty unconstrained, and I don’t think there’s sufficient evidence to say one way or another whether it is likely to happen anywhere there is photosynthesis.  It could be subject to the same ‘first mover advantage’ situation that other photosynthesis may have encountered as well.  But once it got going, it would naturally take over biomass production and crowd out other forms of photosynthesis, due to the inherent chemical advantages it has on any wet planet (advantages that have nothing to do with making oxygen) and its effects on other forms of photosynthesis.

Oxygen in the atmosphere had some important side effects, the one most people care about being that it allows big, complicated, energy-gobbling organisms like animals – all that energy organisms can get by burning biomass in oxygen lets them do a lot of interesting stuff.  Looking for oxygen in the atmospheres of other terrestrial planets would be an extremely informative experiment, as the presence of this substance would suggest that a process very similar to the one that created our huge, diverse, and active biosphere was underway.

[Link] My Interview with Dilbert creator Scott Adams

9 James_Miller 13 September 2016 05:22AM

In the second half of the interview we discussed several topics of importance to the LW community including cryonics, unfriendly AI, and eliminating mosquitoes. 

https://soundcloud.com/user-519115521/scott-adams-dilbert-interview

 

Jocko Podcast

9 moridinamael 06 September 2016 03:38PM

I've recently been extracting extraordinary value from the Jocko Podcast.

Jocko Willink is a retired Navy SEAL commander, jiu-jitsu black belt, management consultant and, in my opinion, master rationalist. His podcast typically consists of detailed analysis of some book on military history or strategy followed by a hands-on Q&A session. Last week's episode (#38) was particularly good and if you want to just dive in, I would start there.

As a sales pitch, I'll briefly describe some of his recurring talking points:

  • Extreme ownership. Take ownership of all outcomes. If your superior gave you "bad orders", you should have challenged the orders or adapted them better to the situation; if your subordinates failed to carry out a task, then it is your own instructions to them that were insufficient. If the failure is entirely your own, admit your mistake and humbly open yourself to feedback. By taking on this attitude you become a better leader and through modeling you promote greater ownership throughout your organization. I don't think I have to point out the similarities between this and "Heroic Morality" we talk about around here.
  • Mental toughness and discipline. Jocko's language around this topic is particularly refreshing, speaking as someone who has spent too much time around "self help" literature, in which I would partly include Less Wrong. His ideas are not particularly new, but it is valuable to have an example of somebody who reliably executes on his philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.
  • Decentralized command. This refers specifically to his leadership philosophy. Every subordinate needs to truly understand the leader's intent in order to execute instructions in a creative and adaptable way. Individuals within a structure need to understand the high-level goals well enough to be able to act in almost all situations without consulting their superiors. This tightens the OODA loop on an organizational level.
  • Leadership as manipulation. Perhaps the greatest surprise to me was the subtlety of Jocko's thinking about leadership, probably because I brought in many erroneous assumptions about the nature of a SEAL commander. Jocko talks constantly about using self-awareness, detachment from one's ideas, control of one's own emotions, awareness of how one is perceived, and perspective-taking of one's subordinates and superiors. He comes off more as HPMOR!Quirrell than as a "drill sergeant".

The Q&A sessions, in which he answers questions asked by his fans on Twitter, tend to be very valuable. It's one thing to read the bullet points above, nod your head and say, "That sounds good." It's another to have Jocko walk through the tactical implementation of this ideas in a wide variety of daily situations, ranging from parenting difficulties to office misunderstandings.

For a taste of Jocko, maybe start with his appearance on the Tim Ferriss podcast or the Sam Harris podcast.

[Link] Optimizing the news feed

8 paulfchristiano 01 December 2016 11:23PM

[Link] What they don’t teach you at STEM school

8 RomeoStevens 30 November 2016 07:20PM

Seeking better name for "Effective Egoism"

8 DataPacRat 25 November 2016 10:31PM

Aka, coming up with a better term for applying LW-style rationality techniques to 'rational self-interest'.

Aka, in parallel with the current movement of 'Effective Altruism', which seeks the best available ways to fulfill one's values, when those values focus roughly on improving the well-being and reducing the suffering of people in general, seeking the best available ways to fulfill one's values, when those values focus roughly on improving the well-being and reducing the suffering of oneself.

(I find that I may have use for this term both in reality and in my NaNoWriMo attempt.)

[Link] The Post-Virtual-Reality Sadness

8 morganism 16 November 2016 08:17AM

Mental Habits are Procedural

8 lifelonglearner 07 November 2016 02:53PM

Lately, I’ve realized that there’s something I’ve been fundamentally doing wrong in my head when it comes to building good mental architecture:  Whenever I decide to integrate a new habit of mind, I get easily frustrated when it doesn’t stick after a few days.  This has been a recurring occurrence.


I’ve finally realized that my expectations may be the culprits here.


To judge how long it takes to start utilizing a certain heuristic, I appear to have been using an intuitionist approach, classifying such habits under a “mental stuff” label, because it seems like mental notions should be easier to learn.


Perhaps more concretely, I’ve been fooled because mental notions feel like declarative knowledge, but they’re really more procedural.  Knowing about pre-mortems seems easy; I just link it to other concepts under the “planning” label in my head.  But this misses the point that the whole reason I even understand pre-mortems is to actually use it.


I confess that I’ve had a similar experience with mathematics a while back.  For much of the course, I merely reviewed my notes, letting my brain run over the same grooves.  The familiarity of the concepts gave me the illusion of understanding; yes, I could grasp the main ideas, but comprehension and capability are miles apart.  When it came time to independently solve problems, I was totally lost.


What appears needed in these situations where certain topics “masquerade” as declarative knowledge (when you actually care about the procedural part) is to find analogs to concrete procedural skills.  For example, I have much better estimates of how long it will take to learn an instrument, a new magic trick, or a sport.  In my mind, the aforementioned actions feel very “physical”, rather than “mental”.  This appears to trigger a reframe.


The key, then, is to renormalize my expectations for learning new habits of mind by drawing parallels to analogous skills where I have good estimates. Reframing the situation in this way makes it less frustrating when I fail to develop agency in a few days.  Learning other skills takes weeks or months, and that’s with solid practice.


To think otherwise for learning mental skills would be unrealistic.


Similarly, reference class forecasting looks at the “base rate” to make predictions.  Statistically speaking, I’m probably not an outlier, so using the average can be a good predictor of my own performance.  When it comes to habit change, I can see how likely I am to succeed, or how long, by looking at people as a whole.


I just looked up the base rate for habit change.  Looks like lots of people cite the Lally study which had an average length of 66 days to ingrain a new habit.  The data ranged from 18 days to over 250 (the study ran for just 12 weeks, so this was extrapolated data).  


Some scientists surveyed were also fairly pessimistic on the timelines for breaking a habit, from two months to six months.


Welp, I’m definitely going to have to recalibrate now.


Learning new mental tricks aside, there’s a related problem I’ve been bumping into often, regarding my thoughts in general:  I can’t seem to hold all of them in my head at once.


What I’m dubbing the “transience of thought” is basically the phenomenon where I forget lots of helpful things I read or encounter.  Progress isn’t linear.  Many of my helpful thoughts fall by the wayside, never to be seen again.  Or I’ll forget most of the great insights from a book I recently read.


Once again, this appears to be a problem of expectations.  I’m sure that with the right amount of reinforcement and repetition, these ideas can be more deeply ingrained.


This has led me to think about what it feels like to have really subsumed a mental heuristic.  I took a look at some mental tools I already use, at a deep level, and tried to describe how they feel:


Upon examining my optimizing mindset:


“Having a mental habit deeply entrenched doesn’t feel like I’ve got a weaponized skill ready to fire off in certain circumstances (as I would have hypothesized).  Rather, it’s merely The Way That I Respond in those circumstances.  It feels like the most natural response, or the only “reasonable” thing to do.  The heuristic isn’t at your disposal; it’s just A Thing You Do.”


If I have unrealistic timeframes for mental habit change, then I’m more likely to get frustrated at not seeing early results.  I’m basically the analog of the dieter who quits after a few days.  (Expectations aside, there’s also the more obvious notion that our mental tools are what we use to respond to situations so of course they’d affect our behavior.)


One recent idea I’ve been flirting with concerns System 1 / 2, or the “rider and the elephant” models of the mind.  In such models, the “rational” side is always portrayed as dominated by the more “primal” side; in any case, there is always an implication of a struggle for dominance between both sides.


Though this may be an accurate depiction of behavior in cases of time-inconsistent preferences, I can’t help but wonder if they set up a self-fulfilling prophecy for our “rational” side to ultimately lose when confronted with “temptations”.  The implied power struggle between the sides, as a whole, also seems damaging.  


I’d like to be able to reconcile all of my different goals, not fight myself at every turn as different urges try to assert themselves.

Map and Territory: a new rationalist group blog

8 gworley 15 October 2016 05:55PM

If you want to engage with the rationalist community, LessWrong is mostly no longer the place to do it. Discussions aside, most of the activity has moved into the diaspora. There are a few big voices like Robin and Scott, but most of the online discussion happens on individual blogs, Tumblr, semi-private Facebook walls, and Reddit. And while these serve us well enough, I find that they leave me wanting for something like what LessWrong was: a vibrant group blog exploring our perspectives on cognition and building insights towards a deeper understanding of the world.

Maybe I'm yearning for a golden age of LessWrong that never was, but the fact remains that there is a gap in the rationalist community that LessWrong once filled. A space for multiple voices to come together in a dialectic that weaves together our individual threads of thought into a broader narrative. A home for discourse we are proud to call our own.

So with a lot of help from fellow rationalist bloggers, we've put together Map and Territory, a new group blog to bring our voices together. Each week you'll find new writing from the likes of Ben Hoffman, Mike Plotz, Malcolm Ocean, Duncan Sabien, Anders Huitfeldt, and myself working to build a more complete view of reality within the context of rationality.

And we're only just getting started, so if you're a rationalist blogger please consider joining us. We're doing this on Medium, so if you write something other folks in the rationalist community would like to read, we'd love to consider sharing it through Map and Territory (cross-posting encouraged). Reach out to me on Facebook or email and we'll get the process rolling.

https://medium.com/map-and-territory

[Recommendation] Steven Universe & cryonics

8 tadrinth 11 October 2016 04:21PM

I've been watching Steven Universe with my fiancee (a children's cartoon on Cartoon Network by Rebecca Sugar), and it wasn't until I got to Season 3 that I realized there's been a cryonics metaphor running in the background since the very first episode. If you want to introduce your kids to the idea of cryonics, this series seems like a spectacularly good way to do it.

If you don't want any spoilers, just go watch it, then come back.

Otherwise, here's the metaphor I'm seeing, and why it's great:

  • In the very first episode, we find out that the main characters are a group called the Crystal Gems, who fight 'gem monsters'. When they defeat a monster, a gem is left behind, which they lock in a bubble-forcefield and store in their headquarters.

  • One of the Crystal Gems is injured in a training accident, and we find out that their bodies are just projections; each Crystal Gem has a gem located somewhere on their body, which contains their minds. So long as their gem isn't damaged, they can project a new body after some time to recover. So we already have the insight that minds and bodies are separate.

  • This is driven home by a second episode where one of the Crystal Gems has their crystal cracked; this is actually dangerous to their mind, not just body, and is treated as a dire emergency instead of merely an inconvenience.

  • Then we eventually find out that the gem monsters are actually corrupted members of the same species as the Crystal Gems. They are 'bubbled' and stored in the temple in hopes of eventually restoring them to sanity and their previous forms.

  • An attempt is made to cure one of the monsters, which doesn't fully succeed, but at least restores them to sanity. This allows them to remain unbubbled and to be reunited with their old comrades (who are also corrupted). This was the episode where I finally made the connection to cryonics.

  • The Crystal Gems are also revealed to be over 5000 years old, and effectively immortal. They don't make a big deal out of this; for them, this is totally normal.

  • This also implies that they've made no progress in curing the gem monsters in 5000 years, but that doesn't stop them from preserving them anyway.

  • Finally, a secret weapon is revealed which is capable of directly shattering gems (thus killing the target permanently), but the use of it is rejected as unethical.

So, all in all, you have a series where when someone is hurt or sick in a way that you can't help, you preserve their mind in a safe way until you can figure out a way to help them. Even your worst enemy deserves no less.

 

Also, Steven Universe has an entire episode devoted to mindfulness meditation.  

[Link] Crowdsourcing moderation without sacrificing quality

7 paulfchristiano 02 December 2016 09:47PM

[Link] Hate Crimes: A Fact Post

7 sarahconstantin 01 December 2016 04:25PM

How can people write good LW articles?

7 abramdemski 29 November 2016 10:40AM

A comment by AnnaSalamon on her recent article:

good intellectual content

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synethesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content.

Probably too meta / too clever an idea, but may be worth some individual brainstorms?

I wouldn't presume to write "How To Write Good LessWrong Articles", but perhaps I'm up to the task of starting a thread on it.

To the point: feel encouraged to skip my thoughts and comment with your own ideas.

The thoughts I ended up writing are, perhaps, more of an argument that it's still possible to write good new articles and only a little on how to do so:

Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false, if you go and look at the mountain of literature related to subjects in the sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have only been dealt with in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on -- and that this community would have an interesting spin on those things.

Moreover, I think that "rationality isn't solved" (simply put). Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out -- you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs UDT and the question of logical uncertainty and so on; but you can't do anything meaningful about that. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or are you deferring to an illusion of group understanding which doesn't really exist? If something is not quite clear to you, there's a decent chance that it's not quite clear to a lot of people; don't make the mistake of thinking everyone understands but you. And don't make the mistake of thinking you understand something that you haven't tried to explain from the start.

I'd encourage a certain kind of pluralistic view of rationality. We don't have one big equation explaining what a rational agent would look like -- there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power and dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm -- one unifying decision theory -- is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I'm thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms which apply to different aspects of life. Like decision theory, such algorithms give a kind of "rational principle" which we can attempt to follow -- to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else, would do worse than a decision-theoretic calculation). But as finite beings, we can't work it all out from decision theory alone -- and anyway, as I've been harping on, decision theory itself is just a rag-tag collection of proposed algorithms upon closer inspection. So, we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than a project that could be finished.

A second, more introspective way of writing LessWrong articles (my first being "dive into the literature"), which I think has a good track record: take a close look at something you see happening in your life or the world and try to make a model of it, try to explain it at a more algorithmic level. I'm thinking of posts like Intellectual Hipsters and Meta-Contrarianism and Slaves to Fashion Signalling.
