Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, February 15-28, 2013

6 Post author: David_Gerard 15 February 2013 11:17PM

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

Comments (345)

Comment author: army1987 16 February 2013 01:35:30PM 12 points [-]

One part of my brain keeps being annoyed by the huge banner saying “Less Wrong will be undergoing maintenance in 1 day, 9 hours” and wishes there were a button to hide it away; another part knows perfectly well that if I did that, I would definitely forget that.

Comment author: Elithrion 17 February 2013 03:05:17AM 2 points [-]

Maybe it could reappear 30 minutes and 9 hours before the maintenance or something?

(This being part of "things that could be done with more web design resources".)

Comment author: Viliam_Bur 17 February 2013 07:33:29PM *  10 points [-]

Did anyone try using the LessWrong web software for their own website? I would like to try it, but I looked at the source code and instructions, and it seemed rather difficult. Probably because I have no experience with Python, Ruby, or with configuring servers (a non-trivial dose of all three seems necessary).

If someone would help me install it, that would be awesome. A list of the exact steps, and of what (and which versions) needs to be installed on the server, would also be helpful.

The idea is: I would like to start a rationalist community in Slovakia, and a website would be helpful for attracting new people. Although I will recommend that all readers visit LW, reading in a foreign language is a significant inconvenience; I expect the localized version to have at least 10 times more readers. I would also like to discuss local events and coordinate local meetups or other activities.

It seemed to me it would be best to reuse the LW software and just localize the texts; but now it seems the installation is more complicated than the discussion software I have used before (e.g. phpBB). But I really like the LW features (Markdown syntax, karma). I just have no experience with the technologies used, and don't want to spend my next five weekends learning them. So I hope someone who already has the skills will help me.

Comment author: JGWeissman 17 February 2013 08:21:14PM 5 points [-]

This sounds like a subreddit of LW would be a good solution. I don't know how much work that would be to set up, but you could ask Matt.

Comment author: ChristianKl 18 February 2013 10:00:43PM 8 points [-]

Over the last month Bitcoin has nearly doubled in value. It's now close to its historical high. http://bitcoincharts.com/charts/mtgoxUSD#tgMzm1g10zm2g25zv

Does anybody know what drives the latest Bitcoin price development?

Comment author: nigerweiss 18 February 2013 10:07:21PM 5 points [-]

The bitcoin market value is predicated mostly upon drug use, pedophilia, nerd paranoia, and rampant amateur speculation. Basically, break out the tea leaves.

Comment author: DaFranker 18 February 2013 10:43:58PM 7 points [-]

drug use, pedophilia, (...), and rampant amateur speculation

Hey, that's almost 2.5% of the world GDP! Can't go wrong with a market this size.

Comment author: Tripitaka 18 February 2013 10:12:05PM 2 points [-]

As of January, the pizza chain Domino's accepts payment in bitcoins; and as of this week, Kim Dotcom's "Mega" filehosting service accepts them, too.

Comment author: drethelin 18 February 2013 10:58:21PM 7 points [-]

Domino's does not accept bitcoins. A third-party site will order Domino's for you, and you pay THEM in bitcoins.

Comment author: beriukay 24 February 2013 07:08:38PM *  7 points [-]

Any bored nutritionists out there? I've put together a list of nutrients, with their USDA recommended quantities/amounts, and scoured Amazon for the best deals, trying to create my own version of Soylent. My search was complicated by the following goals:

  • I want my Soylent to have all USDA recommendations for a person of my age/sex/mass.
  • I want my Soylent to be easy to make (which means a preference for liquid and powder versions of nutrients).
  • My Soylent should be as cheap, per day, as possible (I'd rather have 10 lbs of Vitamin C at $0.00/day than 1lb at $0.01/day).
  • I'd like it to be trivially easy to possess a year's supply of Soylent, should I find this to be a good experiment.
  • I want to make it easy for other people to follow my steps, and criticize my mistakes, because I'm totally NOT a nutritionist, but I'm awfully tired of being told that I need X amount of Y in my diet, without citations or actionable suggestions (and it is way easier to count calories with whey protein than at a restaurant).
  • I want the items to be available to anybody in the USA, because I live at the end of a pretty long supply chain, and can't find all this stuff locally.
  • I'm trying not to order things from merchants who practice woo-woo, but if they have the best version of what I need, I won't be too picky.

There's probably other things, but I can't think of them at the moment.

The spreadsheet isn't done yet. I hope to make it possible to try dynamic combinations of multiple nutrients, since most merchants seem to prefer the multivitamin approach. Plus, I'd like for there to be more options for liquid and powder substances, because they are easier to combine. Right now, I'm just an explorer, but eventually I'd like to just have a recipe.

If this all sounds too risky, I've also made contact with Rob, and he says that he's planning on releasing his data in a few weeks, once he's comfortable with his results (I think he's waiting on friends to confirm his findings). I'm planning on showing him my list, so we can compare notes. It has already been noted that his current Soylent formula is a bit lacking in fiber. My Soylent is currently slated to use psyllium husks to make up the difference, but I'm looking into other options.

A brief overview of the options indicates that this isn't much cheaper than other food choices (~$7.20/day), but it meets all of one's needs, and once the routine is down, it would be fast and easy to make and could be stored for a long time. So I'm optimistic.

Comment author: PECOS-9 25 February 2013 09:24:12PM *  3 points [-]

Relevant dinosaur comic. The blog section "What are the haps my friends" below the comic also has some information that might be useful.

As much as I love this idea, I'd be too worried about possible unforeseen consequences to be one of the first people to try it. For example, the importance of gut flora is something that was brought up in the comments to the Soylent blog post that didn't occur to me at all while reading. Even if you can probably get around that, it's just an example of how there are a lot of possible problems you could be completely blind to. As another commenter on his follow-up post said:

My overall concern with your idea is that you only eat what is known to be necessary to support life. It used to be that when people set out to sea, they'd develop scurvy because of vitamin C deficiency. You're setting yourself up to be a test subject for discovering new vitamins.

Maybe it'd be useful to look up research on people who have to survive on tube feeding for an extended period of time. Of course, there are lots of confounding factors there, but I bet there's some good information out there (I haven't looked).

Also, most of the benefits he described are easily explained as a combination of placebo, losing weight by eating fewer calories, and exercise.

But still, I do like the idea. I bet a kickstarter for something like this would do really well.

Comment author: Qiaochu_Yuan 25 February 2013 09:38:08PM 7 points [-]

I am also worried about possible unforeseen consequences of eating bad diets, but one of those bad diets is my current one, so...

Comment author: gwern 13 March 2013 05:52:11PM 2 points [-]

Comment author: beriukay 03 April 2013 10:24:10AM 1 point [-]

I got in touch with Mr. Rhinehart about my list. Here's his analysis of what I currently have:

Hey Paul,

Looks quite thorough. Note at small scale it is usually more efficient to find a multivitamin that contains many of the micronutrients than mixing them separately. Also you will exhaust your carb source rather quickly so it may pay to buy maltodextrin at a slightly higher scale. Otherwise looks pretty good.

I should be getting some money from the Good Judgment project soon. I'll buy the ingredients then.

Comment author: Qiaochu_Yuan 25 February 2013 10:27:37PM *  1 point [-]

The list formatting doesn't seem to have quite worked. Can you try replacing the dashes with asterisks?

Anyway, I wish I could help, but I am not a nutritionist.

Comment author: gwern 25 February 2013 11:47:09PM 1 point [-]

The list formatting doesn't seem to have quite worked. Can you try replacing the dashes with asterisks?

He needs a full empty line between the list and his preceding sentence, I think.

Comment author: beriukay 30 March 2013 11:17:46AM 0 points [-]

Oops, sorry!

Comment author: gwern 20 February 2013 10:07:52PM 7 points [-]

It's been suggested to me that since I don't blog, I should start an email newsletter. I ignored the initial suggestions, but following the old maxim* I began to seriously consider it on the third or fourth suggestion (that last person also mentioned they'd even pay for it, which would be helpful for my money woes).

My basic idea is to compile, once a month: everything I've shared on Google+, articles excerpted in Evernote or on IRC, interesting LW comments**, and a consolidated version of the changes I've made to gwern.net that month. Possibly also media I've consumed, with reviews for books, anime, music, etc., akin to the media thread.

I am interested in whether LWers would subscribe:

If I made it a monthly subscription, what does your willingness-to-pay look like? (Please be serious and think about what you would actually do.)

Thanks to everyone voting.

* "Once is chance; twice is coincidence; three times is enemy action." Or in Star Wars terms: "If someone calls you a Hutt, ignore them; if two people call you a Hutt, begin to wonder; and if three do, buy a slobber-pail and start stockpiling glitterstim."

** For example, my recent comments on the SAT (Harvard logistic regression & shrinking to the mean) would count as 'interesting comments', but not the Evangelion joke.


Comment author: gwern 06 December 2013 05:06:38AM 2 points [-]

After some further thought and seeing whether I could handle monthly summaries of my work, I've decided to open up a monthly digest email with Mailchimp. The signup form is at http://eepurl.com/Kc155

Comment author: jsalvatier 23 February 2013 08:37:25AM 2 points [-]

I would turn the email into an RSS.

Comment author: Risto_Saarelma 23 February 2013 05:03:42PM 1 point [-]

I'd be a lot more willing to consider a somewhat larger single payment that gets me a lifetime subscription than a monthly fee. I'm pretty sure I don't want to deal with a monthly fee, even if it's $1; it feels like having to make the buying decision over and over again every month. But I can entertain dropping a one-off $20 for a lifetime subscription. Of course, that'd only net less than two years' worth of posts even at the $1 monthly price point, so this might not be such a great deal for you.

Comment author: gwern 23 February 2013 09:57:30PM 1 point [-]

I wouldn't do a lifetime subscription simply because I know that there's a very high chance I would stop or the quality would go downhill at some point. Even if people were willing to trust me and pay upfront, I would still consider such a pricing strategy extremely dishonest.

Comment author: curiousepic 22 February 2013 09:33:10PM 1 point [-]

Why do you not blog? The difference between a blog and this newsletter seems unclear.

Comment author: gwern 22 February 2013 10:15:35PM *  4 points [-]

Reasons for 'not a blog':

  • I don't have any natural place on gwern.net for a blog
  • I've watched people waste countless hours dealing with regular blog software like Wordpress and don't want to go anywhere near it.

Reasons for email specifically:

  • email lists like Google Groups or MailChimp seem both secure and easy to use for once-a-month updates
  • more people seem to still use email than RSS readers these days
  • patio11 says that geeks/Web people systematically underrate the usefulness of an email newsletter
  • there's much more acceptance of charging for an email newsletter

Comment author: Risto_Saarelma 23 February 2013 08:49:58AM *  4 points [-]

Might be worth noting that the customer base patio11 is probably most familiar with are people who pay money for a program that lets them print bingo cards. They might be a different demographic than people who know what a gwern is.

For a data point, I live in RSS, don't voluntarily follow any newsletters, and have become conditioned to associate the ones I do get from some places I'm registered at as semi-spam. Also if I pay money for something, then it becomes a burdensome Rare and Valuable Possession I Must Now Find a Safe Place For, instead of a neat thing I can go look at, then forget all about, then go look up again after five years based on some vaguely remembered details. So I'll save myself stress if I stick with free stuff.

Comment author: gwern 23 February 2013 03:51:55PM 3 points [-]

They might be a different demographic than people who know what a gwern is.

Maybe. On the other hand, would you entertain for even a second the thought of paying for an RSS feed? Personally, I can think of paying for an email newsletter if it's worth it, but the thought of paying for a blog with an RSS feed triggers an 'undefined' error in my head.

Also if I pay money for something, then it becomes a burdensome Rare and Valuable Possession I Must Now Find a Safe Place For, instead of a neat thing I can go look at, then forget all about, then go look up again after five years based on some vaguely remembered details.

Email is infinitely superior to RSS in this respect; everyone gets a durable copy and many people back up their emails (including you - right? right?). I have emails going back to 2004. In contrast, I'm not sure how I would get my RSS feeds from a year ago since Google Reader seems to expire stuff at random, never mind 2006 or whenever I started using RSS.

Comment author: Risto_Saarelma 23 February 2013 04:46:34PM *  3 points [-]

You're right about the paying part. I don't care to even begin worrying about how setting Google Reader to fetch something from beyond a paywall might work, but e-mail from a paid service makes perfect sense, tech-wise.

And now that you mention it, if I were living in an email client instead of Google Reader, I could probably get along just fine having stuff from my RSS subscriptions pushed into my mailbox. Unfortunately, after 15 years I still use email so little that I basically consider it a hostile alien environment, and so far not enough interesting stuff has gone on there for me to ever really feel the need to back up my mails. Setting up a proper email workflow and archiving wouldn't be a very big hurdle if I ever got reason to bother with it, though.

An actual thing I would like is an archived log of "I read this thing today and it was interesting", preferably with an archive of the thing. I currently use Google Reader's starring feature for this, but that leaves stuff I actually do care about archiving at Google's uncertain mercy, which is bad. Directing RSS to email would get me this for free.

Did I just talk myself into possibly starting to use email properly, with a use case where I'd mostly be mailing stuff to myself?

Comment author: chemotaxis101 23 February 2013 06:19:54PM 2 points [-]

I'd recommend using Blogtrottr for turning the content from your RSS feeds into email messages. Indeed, as email is (incidentally) the only web-related tool I can (and must) consistently use throughout the day, I tend to bring a major part of the relevant web content I'm interested in to my email inbox - including twitter status updates, LW Discussion posts, etc.

Comment author: Viliam_Bur 25 February 2013 04:26:17PM *  3 points [-]

I don't have any natural place on gwern.net for a blog

How about "blog.gwern.net" or even "gwernblog.net"?

I've watched people waste countless hours dealing with regular blog software like Wordpress and don't want to go anywhere near it,

If some people are willing to pay for your news, maybe you could find a volunteer (by telling them that creating the blog software is the condition for you to publish) to make the website.

To emulate the (lack of) functionality of an e-mail, you only need to log in as the administrator, and write a new article. The Markdown syntax, as used on LW, could be a good choice. Then the website must display the list of articles, the individual articles, and the RSS feed. That's it; someone could do that in a weekend. And you would get the extra functionality of being able to correct mistakes in already published articles, and make hyperlinks between them.

Then you need functionality to manage users: log in as user, change the password, adding and removing users as admin. There could even be an option for users to enter their e-mails, so the new articles will be sent to them automatically (so they de facto have a choice between web and e-mail format). This all is still within a weekend or two of work.
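To give a sense of how small the "weekend project" above really is, here is a sketch of the two read-only pieces (the article index and the RSS feed) in Python, using only the standard library. All names here (Post, build_index, build_rss) are invented for illustration, and the Markdown-to-HTML step is assumed to happen elsewhere.

```python
# Minimal sketch: turn a list of posts into an article index and an RSS feed.
import datetime
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Post:
    slug: str
    title: str
    body_html: str          # already rendered from Markdown elsewhere
    published: datetime.datetime

def build_index(posts):
    """Render the article list as a plain HTML fragment, newest first."""
    items = "\n".join(
        f'<li><a href="/{p.slug}.html">{p.title}</a></li>'
        for p in sorted(posts, key=lambda p: p.published, reverse=True)
    )
    return f"<ul>\n{items}\n</ul>"

def build_rss(posts, site_url, site_title):
    """Build an RSS 2.0 feed with one <item> per post."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    for p in sorted(posts, key=lambda p: p.published, reverse=True):
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = p.title
        ET.SubElement(item, "link").text = f"{site_url}/{p.slug}.html"
        ET.SubElement(item, "pubDate").text = p.published.strftime(
            "%a, %d %b %Y %H:%M:%S +0000")
    return ET.tostring(rss, encoding="unicode")
```

The login/admin/user-management layer would sit on top of this, but the publishing core really is just "render a list, render a feed".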

Comment author: lukeprog 28 February 2013 06:49:08PM 6 points [-]

I'm quite excited that MIRI's new website has launched. My thanks to Louie Helm (project lead), Katie Hartman (designer), and the others who helped with the project: Malo Bourgon, Alex Vermeer, Stephen Barnes, Steven Kaas, and probably some others I don't know about.

Comment author: paper-machine 28 February 2013 07:08:25PM *  1 point [-]

Grats on the URL.

Things we like about the new site:

  • Color scheme - The addition of orange and lime to the usual cobalt blue is quite classy.
  • Dat logo - We love that it degrades to the old logo in monochrome. We love the subtle gradient on the lettering and the good kerning job on the RI, though we regret the harsh M.
  • Navbar - More subtle gradients on the donate button.
  • Dat quote - Excellent choice with the selective bolding and the vertical rule. Not sure how I feel about the italic serif "and" between Tallinn's titles; some italic correction missing there, but the web is a disaster for such things anyway.
  • Headings - Love the idea of bold sans heading with light serif subheading at the same height. Could be more consistent, but variety is good too.
  • Font choices - Quattrocento is such a great font. Wouldn't mind seeing more of it in the sidebar, though. Source Sans Pro is nice but clashes slightly. Normally one thinks of futurist/post-modern sites being totally clean and sans serif everywhere. I'm really happy with the subversion here.
  • Stylized portraits - Love them. Seems a different process was used on the team page as the research page; the team page's process is less stylized, but also holds up better with different face types, IMO.

Overall: exceptionally well done.

Comment author: drethelin 20 February 2013 06:14:18PM 6 points [-]

Conditional Spam (Something we could use a better word for but this will do for now)

In short: Conditional Spam is information that is valuable to <1 percent of people, and a subjective/objective waste of time for >99 percent of people.

A huge proportion of the content generated and shared on the internet is in this category, and this becomes more and more the case as a greater percentage of the population writes to the internet as well as reading it. In this category are things like people's photos of their cats, day-to-day anecdotes, and baby pictures, but ALSO, and importantly, things like most scientific studies, news articles, and political arguments. People criticize Twitter for encouraging seemingly narcissistic, pointless microblogging, but in reality it's the perfect engine for distributing conditional spam: anyone who cares about your dog can follow you, and anyone who doesn't can NOT.

When your Twitter or Facebook or RSS is full of things that don't inform (or entertain, since this applies to fun as well as usefulness) you, this isn't a failing of the internet. It's a failing of your filter. The internet is a tool optimized to distribute conditional spam as widely as possible, and you can tune your use of it so that the less than 1 percent of it you'll inevitably see is something you WANT to see, and so that the less than 1 percent of it you MAKE goes to the people who actually care about it.

I don't like the phrase "conditional spam", both because the content isn't CREATED with sinister motives and because the phrase presents it as a bad thing. 99 percent of all things are not for YOU, but that doesn't mean it's not good that they're created. I think coming up with good terminology for this can also help us start to create actual mechanisms by which to optimize it. You can sort of shortcut the filtering process by paying attention only to people who pay attention to similar things as you, but is there an efficient way to set up, e.g., a news source that only gives you news you are likely to be interested in reading? It might be tunable by tracking how likely you are to finish reading the articles.
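The "track whether you finish the article" mechanism can be sketched as a toy scorer: keep per-tag completion rates and rank incoming items by them. Everything here (tag-based scoring, Laplace smoothing, the CompletionFilter name) is an illustrative assumption, not a description of any existing feed reader.

```python
# Toy sketch: rank articles by how often similarly-tagged ones were finished.
from collections import defaultdict

class CompletionFilter:
    def __init__(self):
        # per-tag counts: [times finished, times shown]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, tags, finished):
        """Call after each article: did the reader finish it?"""
        for t in tags:
            self.counts[t][1] += 1
            if finished:
                self.counts[t][0] += 1

    def score(self, tags):
        """Estimated chance the reader finishes an article with these tags.
        Laplace smoothing (+1/+2) keeps never-seen tags at a neutral 0.5."""
        if not tags:
            return 0.5
        rates = [(f + 1) / (n + 2) for f, n in (self.counts[t] for t in tags)]
        return sum(rates) / len(rates)

    def rank(self, articles):
        """articles: list of (title, tags) pairs; best prospects first."""
        return sorted(articles, key=lambda a: self.score(a[1]), reverse=True)
```

The smoothing matters for exactly the reason in the comment: a filter that never shows you anything new can't learn what else you'd care about.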

Comment author: Viliam_Bur 25 February 2013 04:56:30PM *  1 point [-]

It would be nice to have some way of adding tags to the information, so that we could specify what information we need, and avoid the rest. Unfortunately, this would not work, because the tagging would be costly, and there would be incentives to tag incorrectly.

For example, I like to be connected on Facebook with people I like. I just don't have to be informed every time they fart. So I would prefer it if some information were labeled as "important" for the given person, and I would read only those. But that would probably just give me many links to YouTube videos labeled "important"; and even this assumes, too optimistically, that people would bother to use the tags.

I missed my high-school reunion once because a Facebook group started specifically to notify people about the reunion gradually became a place for idle chat. After a few months of stupid content I learned to ignore the group. And then I missed a short message which was exceptionally on-topic. There was nothing to make it stand out of the rest.

In groups related to one specific goal, a solution could be to mark some messages as "important" and to make the importance a scarce resource. Something like: you can only label one message in a week as important. But even this would be subject to games, such as "this message is obviously important, so someone else is guaranteed to spend their point on it, so I will keep my point for something else".

The proper solution would probably use some personal recommendation system. Such as: there is a piece of information, users can add their labels, and you can decide to "follow" some users, which means that you will see what they labeled. Maybe something like Digg, but you would see only the points that your friends gave to the articles. You could have different groups of friends for different filters.

Comment author: Qiaochu_Yuan 18 February 2013 09:11:49AM *  6 points [-]

How much do actors know about body language? Are they generally taught to use body language in a way consistent with what they're saying and expressing with their faces? (If so, does this mean that watching TV shows or movies muted could be a good way to practice reading body language?)

Comment author: shaih 18 February 2013 08:54:04PM 9 points [-]

I do not believe it would be a good way to practice, because even if actors act the way they are supposed to (consistent body language and facial expressions), let's say, conservatively, 90% of the time, you are left with 10% wrong data. This 10% wouldn't be that bad except for the fact that it is actors trying to act correctly (meaning you would learn to interpret what a fabricated emotion looks like as a real emotion). This could be detrimental to many uses of reading body language, such as telling when other people are lying.

My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know the truth, make a prediction, and then read what the truth is. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.

After practicing like this for about a week, I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.

This may not be extremely applicable to the real world, because emotions felt in courtrooms are particularly intense, but I found that it gets my mind used to looking for emotion, which has helped in the real world.

I should also note that I have read many books by Paul Ekman and have used some of his training programs.

If learning to read faces is important to you, I largely recommend SETT and METT; whereas if it's simply a curiosity you're unwilling to spend much money on, I recommend checking out "Emotions Revealed" at your local library.

Comment author: PECOS-9 18 February 2013 10:58:54PM 3 points [-]

My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know the truth, make a prediction, and then read what the truth is. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.

After practicing like this for about a week, I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.

That's a really cool idea. Did you record your predictions and do a statistical analysis on them to see whether you definitely improved?

Comment author: shaih 18 February 2013 11:06:39PM 3 points [-]

My knowledge of statistics at the time was very much lacking (that being said, I still only have about a semester's worth of stats), so I was not able to do any type of statistical analysis that would be rigorous in any way. I did, however, keep track of my predictions, and went from around 60% on the first day (slightly better than guessing, probably because of the books I mentioned) to around 80% about a week later, after practicing every day. I no longer have the exact data, though, only approximate percentages of how I did.

I also remember that it was difficult to track down the cases in which the truth was known, and this was very time-consuming; that is the predominant reason I only practiced like this for a week.
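For what it's worth, PECOS-9's question has a cheap answer even without formal statistics training: an exact one-sided binomial test of "better than coin-flipping". This is only a sketch; the trial counts in the example below are made up for illustration, since only rough percentages were reported.

```python
# Exact one-sided binomial test: how likely is doing at least this well by luck?
from math import comb

def p_value_at_least(k, n, p0=0.5):
    """P(X >= k) for X ~ Binomial(n, p0): the chance of getting k or more
    correct guesses out of n trials if you were only guessing at rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))
```

For example, 16 correct out of 20 (80%) gives p ≈ 0.006, which is hard to attribute to luck, while 12 out of 20 (60%) gives p ≈ 0.25, entirely consistent with guessing. So even a single day's worth of recorded predictions could settle the question at the 80% level.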

Comment author: gokfar 19 February 2013 10:53:57PM 1 point [-]

Finding such videos without discovering the truth inadvertently seems difficult. Do you have links to share?

Comment author: shaih 20 February 2013 12:58:42AM 1 point [-]

I don't have them any longer. An easy way to do it is to have a friend pick out videos for you (or have someone post links to videos here and have people PM them for the answer). Or, while on YouTube, look for names that you've heard before but don't quite remember clearly, which is not really reliable, but it's better than nothing.

Comment author: gokfar 19 February 2013 11:26:26PM 1 point [-]

In what case would this be preferable to live human interaction? It lacks the immediate, salient feedback and strong incentives of a social setting. The editing and narrative would be distracting, and watching a muted movie sounds (or rather, looks) quite boring.

Comment author: Tenoke 18 February 2013 02:23:40PM 1 point [-]

They get some training, and it depends a lot on what you are watching, but you can learn a bit if you don't forget that this is not exactly how people act. A show like 'Lie to Me' will probably do more good than other shows (Paul Ekman is involved in it), but there are inaccuracies there as well. Perhaps you can study the episodes and then read arguments about what was wrong in a certain episode (David Matsumoto used to post sometimes about what was inaccurate in some episodes, iirc).

Comment author: Dahlen 17 February 2013 04:59:50PM *  6 points [-]

Where on LW is it okay to request advice? (The answers I would expect -- are these right? -- are: maybe, just maybe, in open threads, probably not in Discussion if you don't want to get downvoted into oblivion, and definitely not in Main; possibly (20-ish percent sure) nowhere on the site.)

I'm asking because, even if the discussions themselves probably aren't on-topic for LW, maybe some would rather hear opinions formulated by people with the intelligence, the background knowledge and the debate style common around here.

Comment author: Nisan 17 February 2013 05:55:44PM 13 points [-]

It's definitely okay to post in open threads. It might be acceptable to post to discussion, if your problem is one that other users may face or if you can make the case that the subsequent discussion will produce interesting results applicable to rational decisionmaking generally.

Comment author: beoShaffer 17 February 2013 06:30:21PM 2 points [-]

It depends on what you're asking about, but generally open threads are your best bet.

Comment author: ChristianKl 17 February 2013 06:51:56PM 2 points [-]

Advice is a fairly broad category. Different calls for advice are likely to be treated differently.

If you want your call to be well received, start by describing your problem in specific terms. What do your current utility calculations look like? If you make assumptions, give us probabilities for your confidence that your assumptions are true.

Comment author: Mitchell_Porter 16 February 2013 01:37:36AM 6 points [-]

Two papers from last week: "The universal path integral" and "Quantum correlations which imply causation".

The first defines a quantum sum-over-histories "over all computable structures... The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures."

The second, despite its bland title, is actually experimenting with a new timeless formalism, a "pseudo-density matrix which treats space and time indiscriminately".

I don't believe in timeless physics, computation as fundamental, or quantum mechanics as fundamental, but many people here do, and it amused me to see two such papers coming out on the same day.

Comment author: PhilipL 16 February 2013 09:39:54PM 5 points [-]

With all the mental health issues coming up recently, I thought I'd link Depression Quest, a text simulation of what it's like to live with depression.

Trigger warning: Please read the introduction page thoroughly before clicking Start. If you are or have been depressed, continue at your own risk.

Comment author: radical_negative_one 17 February 2013 09:22:33PM *  2 points [-]

In the past I went through a period that felt like depression, though I never talked about it to anyone, so of course I was never diagnosed. I went against your warning and played the game. The protagonist started off with more social support than I did. I chose the responses that I think I would have given when I felt depressed. This resulted in the protagonist never seeking therapy or medication, and in the ending labeled "endingZero".

Depression Quest seems accurate. Now I feel bad. (edit: But I did get better.)

Comment author: shminux 16 February 2013 09:45:42PM 3 points [-]

Warning: the link starts playing bad music without asking.

Comment author: EvelynM 17 February 2013 12:05:55AM 1 point [-]

That's depressing.

Comment author: Elithrion 17 February 2013 03:13:43AM 1 point [-]

On the bright side, there's actually a button to pause it just above "restart the game". Although annoyingly, it's white on grainy white/gray and took me a little while to notice.

Comment author: TimS 04 March 2013 05:37:33PM 1 point [-]

Trigger warning: Please read the introduction page thoroughly before clicking Start. If you are or have been depressed, continue at your own risk.

I found it very helpful, actually. It encouraged healthy activity like talking about your concerns with others, recognizing that some folks are not emotionally safe to talk to, and expanding one's social safety net. But I'm more anxious than depressed, so YMMV.

Comment author: PhilipL 04 March 2013 07:40:13PM 0 points [-]

I've had experiences with both, and I wouldn't mind discussing specifics through PM.

Comment author: rev 16 February 2013 05:01:49PM 12 points [-]

Are there any mechanisms on this site for dealing with mental health issues triggered by posts/topics (specifically, the forbidden Roko post)? I would really appreciate any interested posters getting in touch by PM for a talk. I don't really know who to turn to.

Sorry if this is an inappropriate place to post this, I'm not sure where else to air these concerns.

Comment author: shaih 18 February 2013 09:54:11PM *  0 points [-]

I was not here for the Roko post and I only have a general idea of what it's about. That being said, I experienced a bout of depression when applying rationality to the second law of thermodynamics.

Two things helped me. First, I realized that when dealing with a future that is either very unlikely or inconceivably far away, it is hard to diminish the emotional impact as much as is rationally required. Knowing that the emotions felt completely outweigh their cause, you can hopefully realize that acting in the present on those beliefs is irrational, and that setting them aside would actually help you be more rational. Also realize that giving an improbable future more weight than it deserves is in itself irrational. With this I realized that by trying to be rational I was being irrational, and found it easier to resolve this paradox than to simply get over the emotional weight it took to think about the future rationally in the first place.

Second, I meditated on the following quote:

People can stand what is true, for they are already enduring it.

- Gendlin

Nothing has changed after you read a post on this website besides what is in your brain. Becoming more rational should never make you lose; after all, Rationality is Systematized Winning. So if you find that a belief you have is making you lose, it is clearly an irrational belief, or is being thought of in an irrational way.

Hope this helps.

Comment author: David_Gerard 28 February 2013 12:15:12AM *  2 points [-]

Treating it as you would existential depression may be useful, I would think. There are not really a lot of effective therapies for philosophy-induced existential depression - the only way to fix it seems to be to increase your baseline happiness, which is as easy to say as it is hard to do - but it occurred to me that a university student health therapist may see a lot of it and may at least be able to provide an experienced ear. I would be interested in any anecdotes on the subject (I'm assuming there's not a lot of data).

Comment author: drethelin 15 February 2013 11:49:32PM 12 points [-]

Does anyone else believe in deliberate alienation? Forums and organizations like Lesswrong often strive to be and claim to want to be more (and by extension indefinitely) inclusive but I think excluding people can be very useful in terms of social utilons and conversation, if not so good for $$$. There's a lot of value in having a pretty good picture of who you're talking to in a given social group, in terms of making effective use of jargon and references as well as appeals to emotion that actually appeal. I think thought should be carefully given as to who exactly you let in or block out with any given form of inclusiveness or insensitivity.

On a more personal note, I think looking deliberately weird is a great way to make your day to day happenstance interactions more varied and interesting.

Comment author: RomeoStevens 16 February 2013 07:05:37AM 15 points [-]

Yes, insufficient elitism is a failure mode of people who were excluded at some point in their life.

Comment author: Nornagest 16 February 2013 07:24:19AM 13 points [-]

This seems like a good time to link the Five Geek Social Fallacies, one of my favorite subculture sociology articles.

(Insufficient elitism as a failure mode is #1.)

Comment author: WingedViper 16 February 2013 12:06:07AM 2 points [-]

Acting "weird" (well, or just weird; it depends) is something I have contemplated too. For now I have to confess that I mostly try to stick to the norms (especially in public) except when I have a good reason to do otherwise. I think I might make this one of my tasks: to just do some random "weird" acts of kindness.

About the alienation: I don't think that we should do a lot about that. I think enforcing certain rules and having our own memes and terms for stuff already has some strong effects on that. I certainly felt a bit weird when I first came here. And I already had thoughts like "don't judge something by its cover" etc. in my mind (avoiding certain biases).

Comment author: moridinamael 16 February 2013 12:26:31AM *  11 points [-]

So, there are hundreds of diseases, genetic and otherwise, with an incidence of less than 1%. That means that the odds of you having any one of them are pretty low, but the odds of you having at least one of them are pretty good. The consequence of this is that you're less likely to be correctly diagnosed if you have one of these rare conditions, which again, you very well might. If you have a rare disorder whose symptoms include frequent headaches and eczema, doctors are likely to treat the headaches and the eczema separately, because, hey, it's pretty unlikely that you have that one really rare condition!
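To put a rough number on "pretty good", here is a back-of-the-envelope sketch. The 300 conditions and 0.5% incidence are made-up illustrative figures, and the independence assumption is unrealistic (diseases cluster), but it conveys the scale:

```python
# Probability of having at least one of n rare conditions,
# each with incidence p, assuming (unrealistically) independence.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# e.g. 300 hypothetical conditions at 0.5% incidence each:
print(p_at_least_one(0.005, 300))  # about 0.78
```

Even with generous caveats, individually rare conditions are collectively common.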

For example, I was diagnosed by several doctors with "allergies to everything" when I actually have a relatively rare condition, histamine intolerance; my brother was diagnosed by different doctors as having Celiac disease, severe anxiety, or ulcers, when he actually just had lactose intolerance, which is pretty common, and I still cannot understand how they systematically got that one wrong. In both cases, these repeated misdiagnoses led to years of unnecessary, significant suffering. In my brother's case, at one point they actually prescribed him drugs with significant negative side effects which did nothing to alter his lactose intolerance.

I don't intend to come off as bitter, although I suppose I am. My intent is rather to discuss strategies for avoiding this type of systematic misdiagnosis of rare conditions. This line of thought seems like a strong argument in favor of the eventual role of Watson-like AIs as medical diagnostic assistants. A quick Googling indicates that the medical establishment is at least aware of the need to confront the under-diagnosis of rare diseases, but I'm not seeing a lot of concrete policies. For the present time, I don't know what strategy a non-medically-trained individual should pursue, especially if the "experts" are all telling you that your watery eyes mean you have hay fever when you really have some treatable congenital eye disease.

Comment author: RomeoStevens 16 February 2013 07:04:31AM *  14 points [-]

but the odds of you having at least one of them are pretty good.

The odds of you having any particular disease are not independent of your odds of having other diseases.

Comment author: ChristianKl 17 February 2013 06:09:40PM 3 points [-]

Self-experimentation. If the doctor prescribes something for you, test numerically whether it helps you improve.

If you suffer from allergies, it makes sense to systematically check through self-experimentation whether your condition improves when various substances are removed from your diet.

It doesn't hurt to use a symptom checker like http://symptoms.webmd.com/#./introView to get a list of more possible diagnoses.

Comment author: [deleted] 16 February 2013 07:13:07AM 0 points [-]

As somebody who's had to deal with doctors because of a plethora of diseases, I must say you're absolutely right. (I also shadowed a few doctors and am considering applying to med school.)

I don't remember what this concept is called, but basically it posits that "one should look for horses, not zebras", and it is part of medical education. That is, a doctor should assume that a patient's symptoms are caused by a common disease rather than by a rare one. So most doctors, thanks to confirmation bias, dismiss any symptoms that don't fit the common-disease diagnosis. (A girl from my town went to her physician complaining of headaches. The good doctor said she had nothing to worry about and recommended more rest and relaxation. It turned out that the girl had a brain tumor, which was discovered at autopsy. The good doctor is still practicing. Would this gross example of irrationality be tolerated in other professions? I think not.)

Most doctors are not so rational because of the way their education is structured: becoming a doctor isn't so much about reasoning as about memorizing heaps of information verbatim. It appears that they are prone to spew curiosity-stoppers when confronted with diseases.

Comment author: Qiaochu_Yuan 16 February 2013 10:48:26PM *  9 points [-]

this gross example of irrationality

soren, please don't take this the wrong way, but based on what I've seen you post so far, you are not a strong enough rationalist to say things like this yet. You are using your existing knowledge of biases to justify your other biases, and this is dangerous.

Doctors have a limited amount of time and other resources. Any time and other resources they put into considering the possibility that a patient has a rare disease is time and other resources they can't put into treating their other patients with common diseases. In the absence of a certain threshold of evidence suggesting it's time to consider a rare disease (with a large space of possible rare diseases, most of the work you need to do goes into getting enough evidence to bring a given rare disease to your attention at all), it is absolutely completely rational to assume that patients have common diseases in general.

Comment author: [deleted] 17 February 2013 07:09:26AM 2 points [-]

None taken, but how can you assess my level of rationality? When will I be enough of a rationalist to say things like that?

What bias did I use to justify another bias?

Again, testing a hypothesis when somebody's life is at stake is, I think, paramount to being a good doctor. What threshold of evidence should a doctor require?

Comment author: DanielLC 16 February 2013 08:05:50AM 7 points [-]

Would this gross example of irrationality be tolerated in other professions?

What gross example of irrationality? The vast majority of people with headaches don't have anything to worry about.

Comment author: NancyLebovitz 18 February 2013 03:35:01PM 3 points [-]

The question is whether "people with headaches" is the right reference class. If the headache is unusually severe or persistent, it makes sense to look deeper. Also, a doctor can ask for details about the headache before prescribing the expensive tests.

Comment author: Elithrion 22 February 2013 05:15:31AM 4 points [-]

I decided I want to not see my karma or the karma of my comments and posts. I find that if anyone ever downvotes me it bothers me way more than it should, and while "well, stop letting it bother you" is a reasonable recommendation, it seems harder to implement for me than a software solution.

So, to that end, I figured out how the last posted version of the anti-kibitzer script works, and remodeled it to instead hide only my own karma (which took embarrassingly long to figure out, since my javascript skills can be best described with terms like "vague" and "barely existing"). If anyone wants it, here it is - you just need to open it with some editor (notepad works) and change all (7) instances of "Elithrion" in the Votepaths to your username. I tested and it works with both Greasemonkey for Firefox and Tampermonkey for Chrome.

The one thing that doesn't work well is that the page loads and then maybe 0.1s later everything gets hidden, which does leave you with enough time to see your own karma sometimes if you're looking at that spot, so if anyone knows how to fix that (or can confirm that it's too hard to fix to bother with), that would be welcome. Also, let me know if you think there are enough people who might want it that I should make a discussion post for more visibility or something.

(I'm not particularly concerned that I will lose feedback on the quality of my comments and posts, since I will still see the karma others receive and be able to compare, and I would still be interested in having a positive reputation. As of this writing my positive karma rate is a little under 90%, and my plan is to check once in a while and change something if I see it fall too much.)

Comment author: shaih 19 February 2013 07:18:22PM 4 points [-]

I've been reading the Sequences, but I've realized that less of it has sunk in than I would have hoped. What is the best way to make the lessons sink in?

Comment author: Viliam_Bur 20 February 2013 02:15:35PM *  3 points [-]

I made a presentation of part of the Sequences for other people. This made me look at the list and short descriptions carefully, and re-read the articles where I did not understand the short description; then I thought about the best subset and the best way to present them, and I made short notes. All of this was active work with the text, which is much better for remembering than just passive reading. Then, by presenting the result, I connected it with positive emotions.

Generally, don't just read the text, work with it. Try to write a shorter version, expressing the same idea, but using your own words. (If you have a blog, consider publishing the result there.)

Comment author: beoShaffer 20 February 2013 03:57:50AM 3 points [-]

That's a complicated and partially open question, but some low-hanging fruit: Try to link the Sequences to real-life examples, preferably personal ones, as you read. Make a point of practicing what you theoretically already know when it comes up IRL; you'll improve over time. Surround yourself with rational people; go to meetups and/or a CFAR workshop.

Comment author: fubarobfusco 23 February 2013 10:00:57PM 8 points [-]

White coat hypertension is a phenomenon in which patients exhibit elevated blood pressure in a clinical setting (doctor's office, hospital, etc.) but not in other settings, apparently due to anxiety caused by being in the clinical setting.

Stereotype threat is the experience of anxiety or concern in a situation where a person has the potential to confirm a negative stereotype about their social group. Since most people have at least one social identity which is negatively stereotyped, most people are vulnerable to stereotype threat if they encounter a situation in which the stereotype is relevant. Although stereotype threat is usually discussed in the context of the academic performance of stereotyped racial minorities and women, stereotype threat can negatively affect the performance of European Americans in athletic situations as well as men who are being tested on their social sensitivity.

Math anxiety is anxiety about one's ability to do mathematics, independent of skill. Highly anxious math students will avoid situations in which they have to perform mathematical calculations. Math avoidance results in less competency, exposure and math practice, leaving students more anxious and mathematically unprepared to achieve. In college and university, anxious math students take fewer math courses and tend to feel negative towards math.

Set and setting describes the context for psychoactive and particularly psychedelic drug experiences: one's mindset and the setting in which the user has the experience. 'Set' is the mental state a person brings to the experience, like thoughts, mood and expectations. 'Setting' is the physical and social environment. Social support networks have been shown to be particularly important in the outcome of the psychedelic experience. Stress, fear, or a disagreeable environment may result in an unpleasant experience (bad trip). Conversely, a relaxed, curious person in a warm, comfortable and safe place is more likely to have a pleasant experience.

The Wason selection task, one of the most famous tasks in the psychology of reasoning, is a logic puzzle which most people get wrong when it is presented as a test of abstract reasoning, but answer "correctly" when it is presented in a context of social relations. A Wason task proves easier if the rule to be tested is one of social exchange and the subject is asked to police the rule, but is more difficult otherwise.

(The above paragraphs summarize the Wikipedia articles linked; see those articles for sources. Below is speculation on my part.)

IQ tests, and other standardized tests, are usually given in settings associated with schooling or psychological evaluation. People who perform very well on them (gifted students) often report that they think of tests as being like puzzles or games. Many gifted students enjoy puzzles and solve them recreationally; and so may approach standardized tests with a more relaxed and less anxious mindset. Struggling students, who are accustomed to schooling being a source of anxiety, may face tests with a mindset that further diminishes their performance — and in a setting that they already associate with failure.

In other words, the setting of test-taking, and the mindset with which gifted and struggling students approach it, may amplify their underlying differences of reasoning ability. In effect, the test does not measure reasoning ability; it measures some combination of reasoning ability and comfort in the academic setting. These variables are correlated, but failing to notice the latter may lead us to believe there are wider differences in the former than there actually are.

Some people I've discussed the Wason task with, who have been from gifted-student and mathematical backgrounds, have reported that they solve the social-reasoning form of it by translating it to an abstract-reasoning form. This leaves me wondering if the task is easier if presented in a form that the individual is more comfortable with; and that these folks expect more success in abstract reasoning than others do: in other words, that the discrepancy has very much to do with mindset, and serves as an amplifier for people's comfort or discomfort with abstract reasoning more than their ability to reason.

Comment author: Qiaochu_Yuan 23 February 2013 09:09:10PM *  3 points [-]

I have recently tried playing the Monday-Tuesday game with people three times. The first time it worked okay, but the other two times the person I was trying to play it with assumed I was (condescendingly!) making a rhetorical point, refused to play the game, and instead responded to what they thought the rhetorical point I was making was. Any suggestions on how to get people to actually play the game?

Comment author: Nisan 25 February 2013 10:35:32PM 1 point [-]

What if you play a round yourself first, not on a toy example but on the matter at hand?

Comment author: Viliam_Bur 25 February 2013 05:12:37PM 1 point [-]

On Monday, people were okay with playing the game. On Tuesday, people assumed you were making a rhetorical point and refused to play the game. Are you trying to say that CFAR lessons are a waste of money?! :D

More seriously: the difference could be in the people involved, but also in what happened before the game (either immediately, or during your previous interaction with the people). For example if you had some disagreement in the past, they could (reasonably) expect that your game is just another soldier for the upcoming battle. But maybe some people are automatically in the battle mode all the time.

Comment author: Antisuji 20 February 2013 12:38:41AM 3 points [-]

Following up on my comment in the February What are You Working On thread, I've posted an update to my progress on the n-back game. The post might be of interest to those who want to get into mobile game/app development.

Comment author: DaFranker 18 February 2013 02:57:32PM 3 points [-]

A bit of a meta question / possibly suggestion:

Has the idea of showing or counting karma-per-reader ratios been discussed before? The idea just occurred to me, but I'd rather not spend time thinking at length about it (I've not noticed any obvious disadvantages, so if you see some please tell me) if multiple other LWers have already discussed or thought about it.

Comment author: Pentashagon 19 February 2013 07:31:38PM 5 points [-]

In the short story/paper "Sylvan's Box" by Graham Priest, the author tries to argue that it's possible to talk meaningfully about a story with internally inconsistent elements. However, I realized afterward that if one truly was in possession of a box that was simultaneously empty and not empty, there would be no way to keep the inconsistency from leaking out. Even if the box was tightly closed, it would both bend spacetime according to its empty weight and also bend spacetime according to its un-empty weight. Opening the box would cause photons and air molecules (at the least) to begin interacting and not interacting with the contents. Eventually a hurricane would form and not form over the Atlantic due to the air currents caused (and not caused) by removing the lid. In my opinion, if there is any meaning to be found in a physical interpretation of the story, it's that inconsistency everywhere would explode from any interaction with an initial inconsistency, probably fairly rapidly (at least as fast as the speed of sound).

I'd be interested to know what other people think of the physical ramifications.

Comment author: Viliam_Bur 20 February 2013 02:04:01PM 2 points [-]

The paper only showed that it is possible to talk meaningfully about a story with an element which is given inconsistent labels, as long as the consequences of having the inconsistent labels are avoided.

The hero looks in the box and sees that it "was absolutely empty, but also had something in it" and "the sense of touch confirmed this". How exactly? Did photons both reflect and not reflect from the contents? Was it translucent? Or did it randomly appear and disappear? How did the fingers both pass through and not pass through the contents? But more importantly, what would happen if the hero tried to spill out the contents? Would something come out or not? What if they tried to use the thing / non-thing to detonate a bomb?

The story seems meaningful only because we never get answers to any of these questions. It is a compartmentalization forced on readers by the author. The problems seem absent only because the author refuses to look at them.

Comment author: diegocaleiro 19 February 2013 01:10:43AM *  5 points [-]

Persson (Uehiro Fellow, Gothenburg) has jokingly said that we are neglecting an important form of altruistic behavior.

http://www.youtube.com/watch?v=sKmxR1L_4Ag&feature=player_detailpage#t=1481s

We have a duty not to kill

We have a duty not to rape

but we do not have a duty, at least not a strong duty, to save lives

or to have sex with someone who is sexually-starved

It's a good joke.

What worries me is that it makes Effective Altruism of the GWWC and 80000h kind analogous to "fazer um feio feliz", an expression we use in Portuguese meaning "making an ugly one happy". The joke is only funny because the analogy works to an extent.

And given that it works, should Eff Alt be finding the most effective ways of getting the sexually deprived what they'd like to have?

Evolutionarily, sex is pretty high on the importance scale. Our psychology is engineered toward it (Buss 2004).

Could finding the best matchmaking algorithm be an important utilitarian cause?

Comment author: Viliam_Bur 19 February 2013 09:03:10AM *  2 points [-]

Could finding the best matchmaking algorithm be an important utilitarian cause?

It would certainly create a lot of utility.

I have no experience with dating sites (so all the following information is second-hand), but a few people have told me there is still an opportunity in the market to make a good one. On the existing dating sites it was impossible to make the search queries they wanted. The sites collected only the few data points the site makers considered important, and only allowed search queries over those. So you could e.g. search for a "non-smoker with a university degree", but not for a "science-fiction fan with a degree in natural science". I don't remember the exact criteria they wanted (only that some of them also seemed very important to me; something like whether the person is single), but the idea was that you enter the criteria the system allows, get thousands of results, and can't refine them automatically, so you must click through them individually to read each profile; you usually don't find your answer anyway, so you end up contacting each person to ask.

So a reasonable system would have some smart way to enter data about people. Preferably any data; there should be a way to enter a (searchable) plain-text description, or a custom key-value pair if everything else fails. (Of course the site admins should have statistics about frequently used custom data in descriptions and searches, so they could add them to the system.) Some geographical data too, so that you could search for people "at most X miles from ...".
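A minimal sketch of what such a key-value profile search might look like; all field names and profiles here are hypothetical examples, not a real system's schema:

```python
# Profiles are arbitrary key-value pairs; a query is just another
# set of key-value constraints, so users can search on any field.
def matches(profile: dict, query: dict) -> bool:
    return all(profile.get(key) == value for key, value in query.items())

profiles = [
    {"name": "A", "smoker": False, "interests": "science fiction", "single": True},
    {"name": "B", "smoker": True, "interests": "hiking", "single": True},
]

results = [p["name"] for p in profiles
           if matches(p, {"smoker": False, "single": True})]
print(results)  # -> ['A']
```

The point is that arbitrary custom fields come for free: a query over a field the site makers never anticipated works the same as one over a built-in field.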

Unfortunately, there are strong perverse incentives for dating sites. Create a happy couple -- lose two customers! The most effective money-making strategy for a dating site would be to feed unrealistic expectations (so that all couples fail, but customers return to the site believing their next choice would be better) and lure people to infidelity. Actually, some dating sites promote themselves on Facebook exactly like this.

So it seems to me that a matchmaking algorithm done right could create a lot of utility, but would be very difficult to sell.

EDIT: Another problem: Imagine that there is a trait which makes many people unattractive. A dating site that allows searching by this criterion will make the searchers (who dislike this trait) happy, but people with this trait unhappy. If your goal is to make more money, to which group should you listen? Well, that depends on the size of the groups, and on their likelihood of leaving if the algorithm goes against their wishes.

Comment author: ChristianKl 19 February 2013 11:52:43AM 1 point [-]

So a reasonable system would have some smart way to enter data about people. Preferably any data; there should be a way to enter a (searchable) plain text description, or custom key-value pair if everything else fails.

OkCupid has basically custom key-value pairs with its questions. While you can't search on individual questions, you get a match rank that bundles all the information from those questions together. You can search by that match rank.

Comment author: lsparrish 16 February 2013 05:48:20PM 4 points [-]

I'm looking for information about chicken eye perfusion, as a possible low-cost cryonics research target. Anyone here doing small animal research?

Comment author: army1987 16 February 2013 01:46:16PM 4 points [-]

The quantum coin toss

A couple of guys argue that quantum fluctuations are relevant to most macroscopic randomness, including ordinary coin tosses and the weather. (I haven't read the original paper yet.)

Comment author: Nisan 17 February 2013 05:42:01PM 3 points [-]
Comment author: shminux 16 February 2013 09:51:26PM *  3 points [-]

If false, this could be easily falsified with a single counterexample, since if it were true, no coin tosser, human or robotic, should be able to do significantly better than chance when the toss is reasonably high.

EDIT: according to this

In the 31-page Dynamical Bias in the Coin Toss, Persi Diaconis, Susan Holmes, and Richard Montgomery lay out the theory and practice of coin-flipping to a degree that's just, well, downright intimidating.

Suffice to say their approach involved a lot of physics, a lot of math, motion-capture cameras, random experimentation, and an automated "coin-flipper" device capable of flipping a coin and producing Heads 100% of the time

the premise has already been falsified.

Comment author: gwern 17 February 2013 12:38:22AM 6 points [-]

The link discusses normal human flips as being quantum-influenced by cell-level events; a mechanical flipper doesn't seem relevant.

Comment author: army1987 17 February 2013 01:06:16AM 1 point [-]

Even humans can flip a coin in such a way that the same side comes up in all branches of the wave function, as described by E.T. Jaynes, but IIRC he himself refers to that as "cheating".

Comment author: gwern 17 February 2013 01:54:45AM 2 points [-]

I'm not sure that's what they mean either. I take them as saying 'humans can flip in a quantum-influenced way', not as 'all coin flips are quantum random' (as shminux assumed, hence the coin-flipping machine would be a disproof) or 'all human coin flips are quantum random' (as you assume, in which case magicians' control of coin flips would be a disproof).

Comment author: army1987 17 February 2013 09:04:09PM 1 point [-]

I'd guess something along the line of typical human coin flips being quantum-influenced.

Comment author: shminux 17 February 2013 03:12:37AM 1 point [-]

If their model makes no falsifiable predictions, it's not an interesting one.

Comment author: Yvain 27 February 2013 05:07:56AM 2 points [-]

The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.

Comment author: gwern 21 February 2013 02:00:59AM *  2 points [-]

Working on my n-back meta-analysis again, I experienced a cute example of how prior information is always worth keeping in mind.

I was trying to incorporate the Chinese thesis Zhong 2011; not speaking Chinese, I've been relying on MrEmile to translate bits (thanks!) and I discovered tonight that I had used the wrong table. I couldn't access the live thesis version because the site was erroring so I flipped to my screenshotted version... and I discovered that one line (the control group for the kids who trained 15 days) was cut off:

screenshot of the table of IQ scores

I needed the 2 numbers in the upper right hand corner (mean then standard deviation). What were they? I waited for the website to start working, but hours later I became desperate and began trying to guess the control group's values. After minute consideration of the few pixels left on the screen, I ventured that the true values were: 20.78 1.43.

I distracted myself unsplitting all the studies so I could look at single n-back versus dual n-back, and the site came back up! The true values had been: 23.78 1.48.

So I was wrong in just 2 digits. Guessing 43 vs 48 is not a big deal (the hundredth digit of the standard deviation isn't important), but I was chagrined to compare my 20 with the true 23. Why?

If you look at the image, you notice that the 3 immediately following means were 25, 24, 22; they were all means from people training 15-days as well. Knowing that, I should have inferred that the control group's mean was ~24 ((25+24+22)/3); you can tell that the bottom of the digit after 2 is rounded, so the digit must be 0, 3, 6, or 8 - but 0 and 8 are both very far from 24, and it's implausible that the control had the highest score (26), which leaves just '3' as the most likely guess.

(I probably would've omitted the 15-day groups if the website had gone down permanently, but if I had gone with my guess, 20 vs 23 would've resulted in a very large effect size estimate and resulted in a definite distortion to the overall meta-analysis.)
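(For what it's worth, both the digit-elimination argument and the resulting distortion are easy to make concrete. In the sketch below, the neighboring means 25, 24, 22 and the candidate digits come from the table as described above; the treatment mean of 24 and pooled SD of 1.45 are illustrative stand-ins, not the thesis's actual figures:)

```python
# Candidate values for the cut-off control mean: the bottom of the digit
# after '2' looks rounded, so it must be 0, 3, 6, or 8.
candidates = [20, 23, 26, 28]
neighbor_means = [25, 24, 22]  # the other 15-day group means in the table
expected = sum(neighbor_means) / len(neighbor_means)  # ~23.67
best_guess = min(candidates, key=lambda m: abs(m - expected))
print(best_guess)  # 23, the value the digit argument points to

# How much the wrong guess would have distorted a standardized effect size
# (Cohen's d), using an illustrative treatment mean of 24 and pooled SD of 1.45:
def cohens_d(treatment_mean, control_mean, pooled_sd=1.45):
    return (treatment_mean - control_mean) / pooled_sd

print(round(cohens_d(24, 20), 2))  # 2.76 - "very large" with the guessed 20
print(round(cohens_d(24, 23), 2))  # 0.69 - far more modest with the true 23
```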

Comment author: beoShaffer 20 February 2013 04:17:36AM 2 points [-]

I've seen several references to a theory that the English merchant class outbred both the peasants and the nobles, with major societal implications (causing the Industrial Revolution), but now I can't find them. Does anyone know what I'm talking about?

Comment author: Douglas_Knight 20 February 2013 07:14:39PM 3 points [-]
Comment author: beoShaffer 20 February 2013 07:25:18PM 1 point [-]

Thank you.

Comment author: Zaine 19 February 2013 06:18:40PM 2 points [-]

「META」:Up-votes represent desirable contributions, and down-votes negative contributions. Once one amasses a large corpus of comments, noticing which of one's comments have been upvoted or down-voted becomes nontrivially difficult. It seems it would be incredibly difficult to code in a feature that helped one find those comments; on the off chance it isn't, consider it a useful feature.

Comment author: wedrifid 19 February 2013 06:25:31PM 6 points [-]

「META」:Up-votes represent desirable contributions, and down-votes negative contributions. Once one amasses a large corpus of comments, noticing which of one's comments have been upvoted or down-voted becomes nontrivially difficult. It seems it would be incredibly difficult to code in a feature that helped one find those comments; on the off chance it isn't, consider it a useful feature.

Use Wei Dai's script. Use the 'sort by karma' feature.

Comment author: EvelynM 16 February 2013 09:50:02PM 2 points [-]

The date in the title is incorrect s/2003/2013/

Comment author: David_Gerard 16 February 2013 11:54:11PM *  1 point [-]

D'OH! Fixed, at the slight expense of people's RSS feeds.

Comment author: byrnema 15 February 2013 11:58:20PM 4 points [-]

Could someone write a post (or I suppose we could create a thread here) about the Chelyabinsk meteorite?

It's very relevant for a variety of reasons:

  • connection to existential risk

  • the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day

  • any observations people have (I haven't any) on global communication and global rational decision making at this time, before it was determined that the damage and integrated risk was limited

Comment author: ZankerH 16 February 2013 01:06:27AM *  11 points [-]

the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day

It came from a different region of space, on an entirely different orbit. 2012 DA14 approached Earth from the south on a northward trajectory, whereas the Chelyabinsk meteorite was on what looks like a much more in-plane, east-west orbit. As unlikely as it sounds, there is no way they could have been fragments of the same asteroid (unless they broke apart years ago and were subsequently separated further by more impacts or the chaotic gravitational influences of other objects in the solar system).

Comment author: Mitchell_Porter 16 February 2013 02:22:54AM *  -2 points [-]

I'm wondering if it was some sort of orbital projectile weapons system, being tested under cover of the asteroid's passage. But first I want to see more details of the argument that they couldn't have been part of the same cloud of rocks - e.g. could the Chelyabinsk meteor have been an outlier which fell into Earth's gravity well at a distance and arrived from a different direction?

edit: Maybe a more plausible version of the idea that the Chelyabinsk meteor was artificial, is that it was a secret satellite which was being disposed of ("de-orbited", re-directed on a collision course with Earth). Chelyabinsk area seems to be full of secret installations, so if there's debris, at least the men in black don't have far to travel.

edit #2: The general options seem to be: artificial; natural and related to the asteroid; natural and unrelated to the asteroid. Better probability estimates for each option should be forthcoming.

Comment author: CellBioGuy 16 February 2013 06:42:07AM *  14 points [-]

This thing came in at significantly greater than orbital velocity, faster than it could fall from any Earth-bound orbit, and in the reverse direction from pretty much everything launched from Earth's surface (in all the wide-view videos you can see it approaching from the direction of the rising sun, in the East, and comparison of the shape of the trail with http://www.youtube.com/watch?feature=player_embedded&v=VdoKEFsemvw confirms this). It also looks JUST like any number of other meteors that have hit, and discharged as much energy as a 300 kiloton* nuclear weapon as it disintegrated (in a long streak, not all at once) - far more kinetic energy than anything ever launched by humans has carried (a fully fueled Saturn V exploding with all of its original chemical energy would release less than five kilotons). The energy couldn't have been generated in space either: a 100-square-meter solar array would require 1400 years to gather that much. And if you were going to deorbit something, you would rather blow it to pieces over the ocean, where nobody's ever going to find all the tiny fragments scattered over hundreds of square miles of seafloor.
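(A quick sanity check of those energy figures; the solar constant of ~1360 W/m² and a ~20% panel efficiency are assumptions added for the check, not numbers stated above:)

```python
KT_TNT_JOULES = 4.184e12            # joules per kiloton of TNT equivalent
yield_joules = 300 * KT_TNT_JOULES  # ~1.26e15 J for the quoted 300 kt

# Time for a 100 m^2 solar array in space to gather that much energy,
# assuming ~1360 W/m^2 insolation and ~20% conversion efficiency.
array_power_watts = 100 * 1360 * 0.20  # ~27 kW
seconds = yield_joules / array_power_watts
years = seconds / (365.25 * 24 * 3600)
print(round(years))  # ~1460 years, consistent with the quoted ~1400-year figure
```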

Conclusion: Natural. Not bothering with probability estimate.

Not only did it come in on a completely different trajectory from the known asteroid (closer to coplanar with the ecliptic, seeing as it came from the East), but Russia was not even visible to objects on similar trajectories to the known asteroid until after it had passed the Earth. The chaotic influences of the rest of the solar system and the inhomogeneity of impacts also mean that even if you explode something into lots of fragments on completely different orbits (which does not really happen), they are NOT going to come to the same spot within 50,000 kilometers at the same time on their way back together. The 'focusing' effect of the Earth's gravity exists, but is insufficient to extend the zone that something coming from one side of the Earth can hit more than a few degrees beyond half of the planet at velocities this high.

There are plenty of these rocks throughout the solar system, and they DO hit the Earth. Something around this size happens every few (single-digit) years; it's just that most of the time they happen over the ocean, desert, or sparsely populated land. This one had the 'good' fortune to explode directly over a city of one million people well-armed with anti-police and anti-insurance-fraud dashcams.

Conclusion: Unrelated to known asteroid. Not worth probability estimate.

NOTE: There seems to be some question about the total yield, but given that the shock wave took a minute or more to reach the ground from tens of kilometers up (from what I have read), and still had enough force to shatter windows and blow in doors, I'm leaning toward the higher end of the estimates. EDITED: the USGS says 300 kilotons according to their analysis of seismographs.

EDIT: Just realized something interesting. It came from the direction of the rising sun - meaning that, even allowing for gravity bending its trajectory a bit, it would have approached from a point in the sky probably at most a few tens of degrees away from the sun. I do not know much about the estimated size of this thing, but that would indeed make it much harder to see, as that part of the sky is only visible at night for a very short time and through a large amount of atmosphere. Even though we are getting better at detecting incoming rocks a few days out (a few years ago we even caught something only about 5 meters wide a day in advance and predicted its impact site, though that was a special case, and we catch only a tiny fraction of what actually comes our way), this one would have been particularly hard to see.

Comment author: CellBioGuy 25 February 2013 08:10:17AM *  3 points [-]

I don't know if this has been brought up around here before, but the B612 Foundation is planning to launch an infrared space telescope into a Venus-like orbit around 2017. It will be able to detect nearly every Earth-crossing rock larger than 150 meters wide, and a significant fraction of those down to around 30 meters. Infrared optics looking outward make it much easier to see the warm rocks against the black of space without interference from the sun, and would quickly increase the number of known near-Earth objects by two orders of magnitude.

This is exactly the mission I've been wishing / occasionally agitating for NASA to get off their behinds and do for five years. They've got the contract with Ball Aerospace to build the spacecraft and plan to launch on a Falcon 9 rocket. And they accept donations.

Comment deleted 25 February 2013 11:47:14AM [-]
Comment author: roystgnr 25 February 2013 03:50:55PM 12 points [-]

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.", rather than (paraphrasing for humor) "That's getting erased from the internet. No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?"

Comment author: gwern 25 February 2013 04:53:06PM 19 points [-]

The real irony is that Eliezer is now a fantastic example of the commitment/sunk cost effect which he has warned against repeatedly: having made an awful decision, and followed it up with further awful decisions over years (including at least 1 Discussion post deleted today and an expansion of topics banned on LW; incidentally, Eliezer, if you're reading this, please stop marking 'minor' edits on the wiki which are obviously not minor), he is trapped into continuing his disastrous course of conduct and escalating his interventions or justifications.

And now the basilisk and the censorship are an established part of the LW or MIRI histories which no critic could possibly miss, and which pattern-matches on religion. (Stross claims that it indicates that we're "Calvinist", which is pretty hilarious for anyone who hasn't drained the term of substantive meaning and turned it into a buzzword for people they don't like.) A pity.


While we're on the topic, I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey, it would be much easier to go around to all the places discussing it and say something like 'this is solely Eliezer's problem; 98% disagree with censoring it'. But he didn't, and so just as I predicted, we have lost a powerful method of damage control.

It sucks being Cassandra.

Comment author: RichardKennaway 27 February 2013 11:33:45AM 4 points [-]

And now the basilisk and the censorship are an established part of the LW or MIRI histories which no critic could possibly miss, and which pattern-matches on religion.

That's already true without the basilisk and censorship. The similarities between transhumanism and religion have been remarked on for about as long as transhumanism has been a thing.

Comment author: gwern 27 February 2013 03:43:47PM 8 points [-]

An additional item to pattern-match onto religion, perhaps I should have said.

Comment author: Pablo_Stafforini 02 March 2013 06:51:19PM *  3 points [-]

I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey, it would be much easier to go around to all the places discussing it and say something like 'this is solely Eliezer's problem; 98% disagree with censoring it'. But he didn't.

Also, note that this wasn't an unsolicited suggestion: in the post to which gwern's comment was posted, Yvain actually said that he was "willing to include any question you want in the Super Extra Bonus Questions section [of the survey], as long as it is not offensive, super-long-and-involved, or really dumb." And those are Yvain's italics.

Comment author: Eliezer_Yudkowsky 26 February 2013 05:43:32AM 3 points [-]

Gwern, I made a major Wiki edit followed by a minor edit. I wasn't aware that the latter would mask the former.

Comment author: gwern 26 February 2013 06:55:13PM 6 points [-]

When you're looking at consolidated diffs, it does. Double-checking, your last edit was marked minor, so I guess there was nothing you could've done there.

(It is good wiki editing practice to always make the minor or uncontroversial edits first, so that way your later edits can be looked at without the additional clutter of the minor edits or they can be reverted with minimal collateral damage, but that's not especially relevant in this case.)

Comment author: Mitchell_Porter 26 February 2013 02:51:25AM 6 points [-]

It sucks being Cassandra.

Let me consult my own crystal ball... Yes, the mists of time are parting. I see... I see... I see, a few years from now, a TED panel discussion on "Applied Theology", chaired by Vernor Vinge, in which Eliezer, Roko, and Will Newsome discuss the pros and cons of life in an acausal multiverse of feuding superintelligences.

The spirits have spoken!

Comment author: army1987 26 February 2013 05:28:26PM 1 point [-]

I'm looking forward to that.

Comment author: Kevin 26 February 2013 02:16:43AM 1 point [-]

At this point it is this annoying, toxic meta discussion that is the problem.

Comment author: army1987 26 February 2013 02:08:33PM *  1 point [-]

I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey,

Then EY would have freaked the hell out, and I don't know what the consequences of that would be but I don't think they would be good. Also, I think the basilisk question would have had lots of mutual information with the troll toll question anyway:

EDIT: I guess I was wrong.


Comment author: gwern 26 February 2013 06:48:55PM *  8 points [-]

It's too late. This poll is in the wrong place (attracting only those interested in it), will get too few responses (certainly not >1000), and is now obviously in reaction to much more major coverage than before so the responses are contaminated.

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit,
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.

Comment author: army1987 27 February 2013 12:34:34PM 1 point [-]

Actually, I was hoping to find a strong correlation between support for the troll toll and support for the basilisk censorship, so that I could estimate the number of people who would have supported the censorship from the answers to the toll question in the survey. But it turns out that the fraction of censorship supporters is about 30% both among toll supporters and among toll opposers. (But the respondents to my poll are unlikely to be an unbiased sample of all LWers.)
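(This is exactly what "lots of mutual information" would have required: if the fraction of censorship supporters is the same ~30% in both groups, the two answers are statistically independent and the mutual information is zero. A quick check with a hypothetical joint distribution; the 60% toll-support figure below is made up for illustration:)

```python
from math import log2

# Hypothetical joint distribution: P(censor | toll) = P(censor | no toll) = 0.3,
# with an illustrative 60% of respondents supporting the troll toll.
p_toll = 0.6
joint = {
    ("toll", "censor"): p_toll * 0.3,
    ("toll", "no_censor"): p_toll * 0.7,
    ("no_toll", "censor"): (1 - p_toll) * 0.3,
    ("no_toll", "no_censor"): (1 - p_toll) * 0.7,
}

def mutual_information(joint):
    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    px = {x: sum(joint[(x, y)] for y in ys) for x in xs}
    py = {y: sum(joint[(x, y)] for x in xs) for y in ys}
    return sum(joint[(x, y)] * log2(joint[(x, y)] / (px[x] * py[y]))
               for x in xs for y in ys if joint[(x, y)] > 0)

print(mutual_information(joint))  # ~0: equal fractions mean independent answers
```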

Comment author: wedrifid 26 February 2013 03:02:55PM 3 points [-]

Then EY would have freaked the hell out, and I don't know what the consequences of that would be but I don't think they would be good. Also, I think the basilisk question would have had lots of mutual information with the troll toll question anyway:

The 'troll toll' question misses most of the significant issue (as far as I'm concerned). I support the troll toll but have nothing but contempt for Eliezer's behavior, comments, reasoning and signalling while implementing the troll toll. And in my judgement, most of the mutual information with the censorship of Roko's Basilisk comes from those issues (things like overconfidence, and various biases of the kind Gwern describes): from the judgement of competence based on that behavior rather than from the technical change to the lesswrong software.

Comment author: shminux 25 February 2013 05:21:38PM *  0 points [-]

Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

Stross claims that it indicates that we're "Calvinist"

I thought this is more akin to Scientology, where any mention of Xenu to the uninitiated ought to be suppressed.

It sucks being Cassandra.

Sure does. Then again, it probably sucks more being Laocoön.

Comment author: Plasmon 25 February 2013 06:12:12PM 10 points [-]

can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

The basilisk is harmless. Eliezer knows this. The Streisand effect was the intended consequence of the censorship. The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It's all a clever memetic inoculation program!

disclaimer : I don't actually believe this.

Comment author: Eugine_Nier 27 February 2013 04:29:52AM 9 points [-]

Another possibility: Eliezer doesn't object to the meme that anyone who doesn't donate to SIAI/MIRI will spend eternity in hell being spread in a deniable way.

Comment author: shminux 27 February 2013 04:36:07AM *  4 points [-]

Why stop there? In fact, Roko was one of Eliezer's many sock puppets. It's your basic Ender's Game stuff.

Comment author: Konkvistador 05 March 2013 12:40:08PM *  3 points [-]

We are actually all Eliezer's sock puppets. Most of us unfortunately are straw men.

Comment author: gwern 05 March 2013 04:20:54PM *  5 points [-]

We are the hollow men / we are the stuffed men / Leaning together / Headpiece filled with straw. Alas! / Our dried comments when / we discuss together / Are quiet and meaningless / As median-cited papers / or reports of supplements / on the Internet.

Comment author: Viliam_Bur 27 February 2013 09:15:05AM 2 points [-]

Another possibility: Eliezer does not want the meme to be associated with LW. Because, even if it was written by someone else, most people are predictably likely to read it and remember: "This is an idea I read on LW, so this must be what they believe."

Comment author: wedrifid 27 February 2013 05:30:13AM *  4 points [-]

The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It's all a clever memetic inoculation program!

It's certainly an inoculation for information hazards. Or at least against believing information hazard warnings.

Comment author: Eugine_Nier 26 February 2013 06:51:09AM *  6 points [-]

Alternatively, the people dismissing the idea out of hand are not taking it seriously and thus not triggering the information hazard.

Also the censorship of the basilisk was by no means the most troubling part of the Roko incident, and as long as people focus on that they're not focusing on the more disturbing issues.

Edit: The most troubling part were some comments, also deleted, indicating just how fanatically loyal some of Eliezer's followers are.

Comment author: Locaha 25 February 2013 06:21:05PM 0 points [-]

disclaimer : I don't actually believe this.

Really? Or do you just want us to believe that you don't believe this???

Comment author: gwern 25 February 2013 05:52:14PM 13 points [-]

Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

No. I have watched Eliezer make this unforced error now for years, sliding into an obvious and common failure mode, with mounting evidence that censorship is, was, and will be a bad idea, and I have still not seen any remotely plausible explanation for why it's worthwhile.

Just to take this most recent Stross post: he has similar traffic to me as far as I can tell, which means that since I get ~4000 unique visitors a day, he gets as many and often many more. A good chunk will be to his latest blog post, and it will go on being visited for years on end. If it hits the front page of Hacker News as more than a few of his blog posts do, it will quickly spike to 20k+ uniques in just a day or two. (In this case, it didn't.) So we are talking, over the next year, easily 100,000 people being exposed to this presentation of the basilisk (just need average 274 uniques a day). 100k people being exposed to something which will strike them as patent nonsense, from a trusted source like Stross.

So maybe there used to be some sort of justification behind the sunk costs and obstinacy and courting of the Streisand effect. Does this justification also justify trashing LW/MIRI's reputation among literally hundreds of thousands of people?

You may have a witty quote, which is swell, but I'm afraid it doesn't help me see what justification there could be.

Sure does. Then again, it probably sucks more being Laocoön.

Laocoön died quickly and relatively cleanly by serpent; Cassandra saw all her predictions (not just one) come true, was raped, abducted, kept as a concubine, and then murdered.

Comment author: Kevin 26 February 2013 02:20:35AM -3 points [-]

Can you please stop with this meta discussion?

I banned the last discussion post on the Basilisk, not Eliezer. I'll let this one stand for now as you've put some effort into this post. However, I believe that these meta discussions are as annoyingly toxic as anything at all on Less Wrong. You are not doing yourself or anyone else any favors by continuing to ride this.

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

Comment author: gwern 26 February 2013 07:01:23PM 13 points [-]

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

The basilisk is now being linked on Marginal Revolution. Estimated site traffic: >3x gwern.net; per above, that is >16k uniques daily to the site.

What site will be next?

Comment author: fubarobfusco 26 February 2013 04:32:31AM *  21 points [-]

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

There's now the impression that a community of aspiring rationalists — or, at least, its de-facto leaders — are experiencing an ongoing lack of clue on the subject of the efficacy of censorship on online PR.

The "reputational damage" is not just "Eliezer or LW have this kooky idea."

It is "... and they think there is something to be gained by shutting down discussion of this kooky idea, when others' experience (Streisand Effect, DeCSS, etc.) and their own (this very thread) are strong evidence to the contrary."

It is the apparent failure to update — or to engage with widely-recognized reality at all — that is the larger reputational damage.

It is, for that matter, the apparent failure to realize that saying "Don't talk about this because it is bad PR" is itself horrible PR.

The idea that LW or its leadership dedicate nontrivial attention to encircling and defending against this kooky idea makes it appear that the idea is central to LW. Some folks on the thread on Stross's forum seem to think that Roko discovered the hidden secret motivating MIRI! That's bogus ... but there's a whole trope of "cults" suppressing knowledge of their secret teachings; someone who's pattern-matched LW or transhumanism onto "cult" will predictably jump right there.


At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

My own take on the whole subject is that basilisk-fear is a humongous case of privileging the hypothesis coupled to an anxiety loop. But ... I'm rather prone to anxiety loops myself, albeit over matters a little more personal and less abstract. The reason not to poke people with Roko's basilisk is that doing so is a form of aggression — it makes (some) people unhappy.

But as far as I can tell, it's no worse in that regard than a typical Iain M. Banks novel, or some of Stross's own ideas for that matter ... which are considered entertainment. Which means ... humans eat "basilisks" like this for dessert. In one of Banks's novels, multiple galactic civilizations invent uploading, and use it to implement their religions' visions of Hell, to punish the dead and provide an incentive to the living to conform to moral standards.

(But then, I read Stross and Banks. I don't watch gore-filled horror movies, though, and I would consider someone forcing me to watch such a movie to be committing aggression against me. So I empathize with those who are actually distressed by the basilisk idea, or the "basilisk" idea for that matter.)


I have to say, I find myself feeling worse for Eliezer than for anyone else in this whole affair. Whatever else may be going on here, having one's work cruelly mischaracterized and held up to ridicule is a whole bunch of no fun.

Comment author: Eliezer_Yudkowsky 26 February 2013 05:47:53AM 13 points [-]

having one's work cruelly mischaracterized and held up to ridicule is a whole bunch of no fun.

Thank you for appreciating this. I expected it before I got started on my life, I'm already accustomed to it by now, I'm sure it doesn't compare to the pain of starving to death. Since I'm not in any real trouble, I don't intend to angst about it.

Comment author: wedrifid 26 February 2013 02:42:48PM *  11 points [-]

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

Answering the rhetorical question because the obvious answer is not what you imply [EDIT: I notice that J Taylor has made a far superior reply already]: Yes, it limits the ongoing reputational damage.

I'm not arguing with the moderation policy. But I will argue with bad arguments. Continue to implement the policy. You have the authority to do so, Eliezer has the power on this particular website to grant that authority, most people don't care enough to argue against that behavior (I certainly don't), and you can always delete the objections with only minimal consequences. But once you choose to make arguments that appeal to reason rather than to the preferences of the person with legal power, then you can be wrong.

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

I've had people come to me who are traumatised by basilisk considerations. From what I can tell almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (ie. direct self reports that are coherent) that a significant reason that they "take the basilisk seriously" is because Eliezer considers it a sufficiently big deal that he takes such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game theory question to which there are multiple practical answers.

So please, just go back to deleting basilisk talk. That would be way less harmful than trying to persuade people with reason.

Comment author: David_Gerard 27 February 2013 02:06:09PM *  6 points [-]

I've had people come to me who are traumatised by basilisk considerations. From what I can tell almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (ie. direct self reports that are coherent) that a significant reason that they "take the basilisk seriously" is because Eliezer considers it a sufficiently big deal that he takes such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game theory question to which there are multiple practical answers.

I get the people who've been frightened by it because EY seems to take it seriously too. (Dmytry also gets them, which is part of why he's so perpetually pissed off at LW. He does his best to help, as a decent person would.) More generally, people distressed by it feel they can't talk about it on LW, so they come to RW contributors - addressing this was why it was made a separate article. (I have no idea why Warren Ellis then Charlie Stross happened to latch onto it - I wish they hadn't, because it was totally not ready, so I had to spend the past few days desperately fixing it up, and it's still terrible.) EY not in fact thinking it's feasible or important is a point I need to address in the last section of the RW article, to calm this concern.

Comment author: jbeshir 27 February 2013 07:06:11PM *  3 points [-]

It would be nice if you'd also address the extent to which it misrepresents other LessWrong contributors as thinking it is feasible or important (sometimes to the point of mocking them based on its own misrepresentation). People around LessWrong engage in hypothetical what-if discussions a lot; it doesn't mean that they're seriously concerned.

Lines like "Though it must be noted that LessWrong does not believe in or advocate the basilisk ... just in almost all of the pieces that add up to it." are also pretty terrible, given that we know only a fairly small percentage of "LessWrong" as a whole even consider unfriendly AI to be the biggest current existential risk. Really, this kind of misrepresentation of alleged, dubiously-held extreme views as the perspective of the entire community is the bigger problem with both the LessWrong article and this one.

Comment author: David_Gerard 01 March 2013 05:25:28PM *  5 points [-]

The article is still terrible, but it's better than it was when Stross linked it. The greatest difficulty is describing the thing and the fuss accurately while explaining it to normal intelligent people without them pattern-matching it to "serve the AI God or go to Hell". This is proving the very hardest part. (Let's assume for a moment 0% of them will sit down with 500K words of sequences.) I'm trying to leave it for a bit, having other things to do.

Comment author: drethelin 26 February 2013 08:48:14AM 11 points [-]

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

As far as I can tell the entire POINT of LW is to talk about various mental pathologies and how to avoid them or understand them even if they make you very uncomfortable to deal with or acknowledge. The reasons behind talking about the basilisk or basilisks in general (apart from metashit about censorship) are just like the reasons for talking about trolley problems even if they make people angry or unhappy. What do you do when your moral intuitions seem to break down? What do you do about compartmentalization or the lack of it? Do you bite bullets? Maybe the mother should be allowed to buy acid.

To get back to meta shit: If people are complaining about the censorship and you are sick of the complaints, the simplest way to stop them is to stop the censorship. If someone tells you there's a problem, the response of "Quit your bitching, it's annoying" is rarely appropriate or even reasonable. Being annoying is the point of even lameass activism like this. I personally think any discussion of the actual basilisk has reached every conclusion it's ever really going to reach by now, pretty reasonably demonstrated by looking at the uncensored thread, and the only thing even keeping it in anyone's consciousness is the continued ballyhooing about memetic hazards.

Comment author: J_Taylor 26 February 2013 04:14:11AM 9 points [-]

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

I hate to use silly symmetrical rhetoric, however:

The secret has been leaked and the reputational damage is ongoing. Is there really anything to be gained by continuing the current moderation policy?

Comment author: drethelin 26 February 2013 08:34:59AM 8 points [-]

The meta discussions will continue until morale improves

Comment author: Locaha 25 February 2013 05:37:52PM *  -2 points [-]

for example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

What if he CAN'T conceive of a reason? Can you conceive of the possibility that it might be for some reason other than Gwern being less intelligent than EY? For example, Gwern might be more intelligent than EY.

Comment author: Eugine_Nier 02 March 2013 06:30:17AM 3 points [-]

No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?

I'm not convinced the Streisand Effect is actually real. It seems like an instance of survival bias. After all, you shouldn't expect to hear about the cases when information was successfully suppressed.

Comment author: wedrifid 02 March 2013 11:33:51AM *  1 point [-]

I'm not convinced the Streisand Effect is actually real.

This is a bizarre position to take. The effect does not constitute a claim that all else being equal attempts to suppress information are negatively successful. Instead it describes those cases where information is published more widely due to the suppression attempt. This clearly happens sometimes. The Wikipedia article gives plenty of unambiguous examples.

In April 2007, an attempt at blocking an Advanced Access Content System (AACS) key from being disseminated on Digg caused an uproar when cease-and-desist letters demanded the code be removed from several high-profile websites. This led to the key's proliferation across other sites and chat rooms in various formats, with one commentator describing it as having become "the most famous number on the internet". Within a month, the key had been reprinted on over 280,000 pages, printed on T-shirts and tattoos, and had appeared on YouTube in a song played over 45,000 times.

It would be absurd to believe that the number in question would have been made into T-shirts, tattoos and a popular YouTube song if no attempt had been made to suppress it. That doesn't mean (or require) that powerful figures aren't sometimes successful in suppressing information in other cases (particularly cases where the technological and social environment was completely different).

Comment author: wedrifid 26 February 2013 03:10:50PM 3 points [-]

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.", rather than (paraphrasing for humor) "That's getting erased from the internet. No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?"

Heck, there is little doubt that even your paraphrased humorous alternative would have been much better than what actually happened. It's not often that satirical caricatures are actually better than what they are based on!

Comment author: RichardKennaway 27 February 2013 12:21:50PM -2 points [-]

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.

That would only be the best response if the basilisk were indeed stupid, and there were indeed ten good reasons why. Presumably you do think it is stupid, and you have a list of reasons why; but you are not in charge. (I hope it is obvious why saying it is stupid if you believed it was not, and writing ten bad arguments to that effect, would be monumentally stupid.)

But Eliezer's reason for excluding such talk is precisely that (in his view, and he is in charge) it is not stupid, but a real hazard, the gravity of which goes way beyond the supposed effect on the reputation of LessWrong. I say "supposed" because as far as I can see, it's the clowns at RationalWiki who are trying to play this up for all it's worth. Reminds me of The Register yapping at the heels of Steve Jobs. The recent links from Stross and Marginal Revolution have been via RW. Did they just happen to take notice at the same time, or is RW evangelising this?

The current deletion policy calls such things "toxic mindwaste", which seems fair enough to me (and a concept that would be worth a Sequence-type posting of its own). I don't doubt that there are many other basilisks, but none of them have appeared on LW. Ce qu'on ne voit pas ("that which is not seen"), indeed.

Comment author: David_Gerard 27 February 2013 02:42:30PM 2 points [-]

RW didn't push this at all. I have no idea why Warren Ellis latched onto it, though I expect that's where Charlie Stross picked it up from.

The reason the RW article exists is because we're getting the emails from your distressed children.

Comment author: RichardKennaway 27 February 2013 03:55:05PM 1 point [-]

The reason the RW article exists is because we're getting the emails from your distressed children.

I can't parse this. Who are "we", "you", and the "distressed children"? I don't think I have any, even metaphorically.

Comment author: gwern 27 February 2013 05:35:27PM 2 points [-]

It's not that hard. DG is using 'the Rational Wiki community' for 'we', 'your' refers to 'the LessWrong community', and 'distressed children' presumably refers to Dmytry, XiXi and by now, probably some others.

Comment author: David_Gerard 27 February 2013 05:50:13PM *  7 points [-]

No, "distressed children" refers to people upset by the basilisk who feel they can't talk about it on LW so they email us, presumably as the only people on the Internet bothering to talk about LW. This was somewhat surprising.

Comment author: RichardKennaway 28 February 2013 10:54:43AM 2 points [-]

[referring to RationalWiki] as the only people on the Internet bothering to talk about LW.

Well then, that's the reputation problem solved. If it's only RationalWiki...

Comment author: [deleted] 28 February 2013 12:10:36AM 0 points [-]

so they email us, presumably as the only people on the Internet bothering to talk about LW.

Or more likely, because RW has been the only place you could actually learn about it in the first place (for the last two years at least). So, I really don't think you have any reason to complain about getting those emails.

Comment author: RichardKennaway 28 February 2013 10:51:08AM 0 points [-]

What do you tell them?

Comment author: wedrifid 28 February 2013 10:53:43AM 0 points [-]

What do you tell them?

I presume it would include things that David Gerard could not repeat here. After all that's why the folk in question contacted people from the Rational Wiki community in the first place!

Comment author: RichardKennaway 28 February 2013 11:13:46AM *  0 points [-]

Actually, I may have just answered my own question by reading the RW page on the b*s*l*sk that three prominent blogs and a discussion forum recently all linked to. Does reading that calm them down?

Comment author: paper-machine 27 February 2013 02:52:44PM 0 points [-]

RW didn't push this at all.

Yes, RW was just the forum that willingly opened their doors to various anti-LW malcontents, who are themselves pushing this for all it's worth.

Comment author: fubarobfusco 27 February 2013 11:25:08PM 4 points [-]

anti-LW malcontents

That's overly specific. Mostly they're folks who like to snicker at weird ideas — most of which I snicker at, too.

Comment author: paper-machine 28 February 2013 03:40:18AM *  1 point [-]

I didn't claim my list was exhaustive. In particular, I was thinking of Dmytry and XiXiDu, both of whom are never far away from any discussion of LW and EY that takes place off-site. The better part of the comments on the RW talk pages and Charles Stross' blog concerning the basilisk is copied and pasted from their old remarks about the subject.

Comment author: fubarobfusco 28 February 2013 05:47:18AM *  3 points [-]

OK. What I heard in your earlier comment was that a wiki community was being held at fault for "opening their doors" to someone who criticized LW. Wikis are kind of known for opening their doors, and the skeptic community for being receptive to the literary genre of debunking.

Comment author: Peterdjones 03 March 2013 02:20:50PM 1 point [-]

That was a rather mind-killed comment. Wikis are supposed to have open doors. RW is supposed to deal with pseudoscience, craziness and the pitfalls of religions. The Bsl*sk is easily all three.

Comment author: ArisKatsaris 01 March 2013 11:51:17AM 0 points [-]

The reason the RW article exists is because we're getting the emails from your distressed children.

Isn't it on RW that these people read the basilisk in the first place?

Comment author: David_Gerard 01 March 2013 11:15:54PM *  4 points [-]

(answered at greater length elsewhere, but) This is isomorphic to saying "describing what is morally reprehensible about the God of the Old Testament causes severe distress to some theists, so atheists shouldn't talk about it either". Sunlight disinfects.

Comment author: Leonhart 25 February 2013 10:30:24PM 4 points [-]

Dude. Seriously. Spoilers.

This comment is a little less sharp than it would have been had I not gone to the gym first; but unless you (and the apparent majority in this thread) actively want to signal contempt for those who disagree with you, please remember that there are some people here who do not want to read about the fucking basilisk.

Comment author: Eliezer_Yudkowsky 26 February 2013 05:39:31AM 0 points [-]

Deleted. Don't link to possible information hazards on Less Wrong without clear warning signs.

E.g. this comment for a justified user complaint. I don't care if you hold us all in contempt, please don't link to what some people think is a possible info hazard without clear warning signs that will be seen before the link is clicked. Treat it the same way you would goatse (warning: googling that will lead to an exceptionally disgusting image).

Comment author: army1987 26 February 2013 05:31:36PM 3 points [-]

Deleted.

Why delete such comments altogether, rather than edit them to rot-13 them and add a warning in the front?

Comment author: Eliezer_Yudkowsky 27 February 2013 12:16:37AM 3 points [-]

I can't edit comments.

Comment author: army1987 27 February 2013 11:10:35AM 0 points [-]

Ah.

Comment author: shminux 26 February 2013 05:46:22AM 3 points [-]

Ok, thanks for this mental image of a goatselisk, man!

Comment author: wedrifid 26 February 2013 06:31:15PM *  5 points [-]

Deleted. Don't link to possible information hazards on Less Wrong without clear warning signs.

For example, this is the link that was in the now-deleted comment. I repeat it with the clear warning signs, and observe that Charlie Stross (the linked-to author) has updated his post so that it actually gives his analysis of the forbidden topic in question.

Warning: This link contains something defined as an Information Hazard by the lesswrong administrator. Do not follow it if this concerns you: Charlie Stross discusses Roko's Basilisk. On a similar note: You just lost the game.

I wanted the link to be available if necessary just so that it makes sense to people when I say that Charlie Stross doesn't know how decision theory works and his analysis is rubbish. Don't even bother unless you are interested in categorizing various kinds of ignorant comments on the internet.

Comment author: Kawoomba 25 February 2013 05:39:50PM -2 points [-]

(Exasperated sigh) Come on.

Comment author: tgb 18 February 2013 01:11:48PM 2 points [-]

Link: Obama Seeking to Boost Study of Human Brain

It's still more-or-less rumors with little in the way of concrete plans. It would, at the least, be exciting to see funding of a US science project on the scale of the human genome project again.

Comment author: lukeprog 22 February 2013 03:48:15AM 1 point [-]

My anecdata say that comments skew negative even for highly upvoted posts of mine. So, I wasn't surprised to see this.

Comment author: Kawoomba 19 February 2013 06:21:44PM *  0 points [-]

Omega appears and makes you the arbiter over life and death. Refuse, and everybody dies.

The task is this: You are presented with n (say, 1000) individuals and have to select a certain number who are to survive.

You can query Omega for their IQ, their life story and most anything that comes to mind, you cannot meet them in person. You know none of them personally.

You cannot base your decision on their expected life span. (Omega matches them in life expectancy brackets.)

You also cannot base your decision on their expected charitable donations, or a proxy thereof.

What do?

Comment author: Elithrion 20 February 2013 03:11:36AM 3 points [-]

Find out all their credit card/online banking information (they won't need them when they're dead), find out which ones will most likely reward/worship you for sparing them, cash in, use resources for whatever you want (including, but not limited to, basking in filthy lucre). Or were you looking for an altruistic solution? (In which case, pick some arbitrary criteria for whom you like best, or who you think will most improve the world, and go with that.)

Comment author: shminux 19 February 2013 06:46:51PM 4 points [-]

Don't be a mindless pawn in Omega's cruel games! May everyone's death be on its conscience! The people will unite and rise against the O-pressor!

Comment author: drethelin 19 February 2013 08:46:23PM 3 points [-]

Kill all the dirty blues to make the world a better place for us noble greens

Comment author: Ritalin 28 February 2013 09:57:50PM *  1 point [-]

What's wrong with embracing foreign cultures, uploadings, upliftings, and so on?

Maybe I am biased by my personal history, having embraced what, as far as I can tell, is the very cutting edge of Western Culture (i.e. the less-wrong brand of secular humanism), and feeling rather impatient for my origin cultures to follow a similar path, which they are violently reticent to. Maybe I've got a huge blind spot of some other sort.

But when the Superhappies demand that we let them eradicate suffering forever, or when CelestAI offers us all our own personal paradise on the only condition that it be pony-flavoured, I don't just feel like I want to enthusiastically jump in, abandoning all caution. I feel like it's a moral imperative to take them up on their offer, and that getting in their way is a crime that is potentially on the same level as genocide or mass torture.

Yet in both stories these examples come from, and in the commentary by the authors, this is qualified as a Bad Thing... but I don't recall coming across an explanation that would satisfy me as to why.

Again, please warn me if I'm mixing things up here, as my purpose here is to correct any flaws that my stance may have, by consulting with minds that I expect will understand the problem better than I, and might see the flaws in how I frame it.

Comment author: CronoDAS 05 March 2013 08:16:19AM 1 point [-]

The thing about the Superhappies is that, well, people want to be able to be sad in certain situations. It's like Huxley's Brave New World - people are "happier" in that society, but they've sacrificed something fundamental to being human in the course of achieving that happiness. (Personally, I think that "not waiting the eight hours it would take to evacuate the system" isn't the right decision - the gap between the "compromise" position the Superhappies are offering and humans' actual values, when combined with the very real possibility that the Superhappies will indeed take more than eight hours to return in force, just doesn't seem big enough to make not waiting the right decision.)

And as for the story with CelestAI in it, as far as I can tell, what it's doing might not be perfect but it's close enough not to matter... at least, as long as we don't have to start worrying about the ethics of what it might do if it encounters aliens.

Comment author: Ritalin 05 March 2013 12:02:26PM 1 point [-]

at least, as long as we don't have to start worrying about the ethics of what it might do if it encounters aliens.

Well, that is quite horrific. Poor non-humanlike alien minds...

I don't think the SH's plan was anything like Huxley's BNW (which is about numbing people into docility). Saying pain should be maintained reminds me of that analogy Yudkowsky made about a world where people get truncheoned in the head daily, can't help it, keep making up reasons why getting truncheoned is full of benefits, but, if you ask someone outside of that culture if they want to start getting truncheoned in exchange for all those wonderful benefits...

Comment author: NancyLebovitz 24 February 2013 04:49:00PM *  1 point [-]

A rationalist, mathematical love song

I got a totally average woman stands about 5’3”
I got a totally average woman she weighs about 153
Yeah she’s a mean, mean woman by that I mean statistically mean
Y’know average

Comment author: NancyLebovitz 24 February 2013 04:42:40PM 1 point [-]

An overview of political campaigns

Once a new president is in power, he forgets that voters who preferred him to the alternative did not necessarily comprehend or support all of his intentions. He believes his victory was due to his vision and goals; he underestimates how much the loss of credibility for the previous president helped him and overestimates how much his own party supports him.

Comment author: FiftyTwo 01 March 2013 01:12:09AM 1 point [-]

My experience of dealing with members of political groups is they know exactly how mad and arbitrary the system is, but play along because they consider their goals important.

Comment author: niceguyanon 21 February 2013 03:33:35PM 1 point [-]

Is there a better way to look at someone's comment history, other than clicking next through pages of pages of recent comments? I would like to jump to someone's earliest posts.

Comment author: arundelo 21 February 2013 03:45:42PM *  4 points [-]
Comment author: Douglas_Knight 21 February 2013 09:16:41PM 2 points [-]

If you just want to jump to the beginning without loading all the comments, add ?count=100000&before=t1_1 to the overview page, like this. Comments imported from OB are out of order, in any event.

Comment author: D_Malik 19 February 2013 08:12:22AM *  1 point [-]

I've been trying to correct my posture lately. Anyone have thoughts or advice on this?

Some things:

  • Advice from reddit; if you spend lots of time hunched over books or computers, this looks useful and here are pictures of stretches.

  • You can buy posture braces for like $15-$50. I couldn't find anything useful about their efficacy in my 5 minutes of searching, other than someone credible-sounding saying that they'd weaken your posture muscles (sounds reasonable) and one should thus do stretches instead.

  • Searching a certain blog, I found this which says that sitting at a 135-degree angle is better than sitting straight, and both are better than slouching. Elsewhere on the internet, some qualified person said that standing is better than all three.

  • At the moment I'm not sure that good-looking posture is healthier, but I'd guess it's worth it anyway because of signalling benefits. My current best guess for how to improve things is to use a standing desk and to give some form of reinforcement when I notice and correct my posture. And to sit as little as possible, and not in chairs. I may incorporate stretching, but only a little and in parallel with another activity because 15 minutes a day for like 3 months is a lot of time.

I could spend more time trying to figure this out, but I suspect others here might have already done that. If so, I'd be super happy if you'd post your conclusions, even if you don't take the time to say how you came to them.

Comment author: NancyLebovitz 19 February 2013 06:14:20PM *  3 points [-]

Do not try to consciously correct your posture. You don't know enough. Some evidence-- I tried it, and just gave myself backaches. I know other people who tried to correct their posture, and the results didn't seem to be a long run improvement.

Edited to add: I didn't mean that you personally don't know enough to correct your posture consciously, I meant that no one does. Bodies' ability to organize themselves well for movement is an ancient ability which involves fast, subtle changes to a complex system. It's not the kind of thing that your conscious mind is good at-- it's an ability that your body (including your brain) shares with small children and a lot of not-particularly-bright animals.

From A Tai Chi Imagery Workbook by Mellish:

Conscious muscular effort to straighten the spine, or alter its shape in some obvious way, generally recruits the long muscles on either side of the spine (the erector spinalis group). These muscles are strong, but because they run almost the whole length of the spine, they exercise only a very coarse control over its carriage.

He goes on to explain that the muscles which are appropriate for supporting and moving the spine are the multifidi, small muscles which only span one to three vertebrae, and aren't very available for direct conscious control.

A lot of back problems are the result of weak (too much support from larger muscles) or ignored (too little movement) multifidi.

He recommends working with various images, but says that the technique is to keep images in mind without actively trying to straighten your spine.

Comment author: RichardKennaway 21 February 2013 01:08:03PM 2 points [-]

My first thought is: what tells you that your current posture is bad, and what will tell you that it has improved?

Comment author: bogdanb 21 February 2013 07:11:29AM 2 points [-]

My posture improved significantly after I started doing climbing (specifically, indoor bouldering). This is of course a single data point, but "it stands to reason" that it should work at least for those people who come to like it.

Physical activity in general should improve posture (see Nancy's post), but as far as I can tell bouldering should be very effective at doing this:

First, because it requires you to perform a lot of varied movements in unusual equilibrium positions (basically, hanging and stretching at all sorts of angles), which few sports do (perhaps some kinds of yoga would also do that). At the beginning it's mostly the fingers and forearms that will get tired, but after a few sessions (depending on your starting physical condition) you'll start feeling tired in muscles you didn't know you had.

Second (and, in my case, the most important), it's really fun. I tried all sorts of activities, from just "going to the gym" to swimming and jogging (all of which would help if done regularly), but I just couldn't stay motivated. With all of those I just get bored and my mind keeps focusing on how tired I am. Since I basically get only negative reinforcement, I stop going to those activities. Some team sports I could do, because the friendly competition and banter help me have fun, but it's pretty much impossible to get a group doing them regularly. In contrast, climbing awakens the child in me, and you can do indoor bouldering by yourself (assuming you have access to a suitable gym). I always badger friends into coming with me, since it's even more fun doing it with others (you have something to focus on while you're resting between problems), but I still have fun going by myself. (There are always much more advanced climbers around, and I find it awesome rather than discouraging to watch their moves, perhaps because it's not a competition.)

In my case, after a few weeks I simply noticed that I was standing straighter without any conscious effort to do so.


Actually, I think the main idea is not to pick a sport that's specifically better than others for posture. Just try them all until you find one you like enough to do regularly.

Comment author: JayDee 20 February 2013 11:19:38AM 2 points [-]

My own posture improved once I took up singing. My theory is that I was focused on improving my vocal technique and that changes to my posture directly impacted on this. If I stood or held myself a certain way I could sing better, and the feedback I was getting on my singing ability propagated back and resulted in improved posture. Plus, singing was a lot of fun and with this connection pointed out to me - "your entire body is the instrument when singing, look after it" - my motivation to improve my posture was higher than ever.

That is more how I got there than conclusions. Hmm. You might consider trying to find something you value for which improved posture would be a necessary component. Or something you want to do that will provide feedback about changes in your posture.

If you are like me, "I don't want to have bad posture anymore" may turn out to be insufficient motivation to get you there by itself.

Comment author: gokfar 19 February 2013 04:27:04PM *  2 points [-]

If you are looking for a simpler routine (to ease habit-formation), reddit also spawned the starting stretching guide.

I haven't done serious research and think it is not worth the time. As this HN comment points out, the science of fitness is poor. The solution is probably a combination of exercise, stretching and an ergonomic workstation, which are healthy anyway.

Comment author: Qiaochu_Yuan 20 February 2013 04:50:43AM 1 point [-]

Have you taken a look at Better Movement? I think I heard Val talk about it in positive tones.

Comment author: moridinamael 19 February 2013 04:01:28PM 1 point [-]

For a period of time I was using the iPhone app Lift for habit-formation, and one of my habits was 'Good posture.' Having this statement in a place where I looked multiple times a day maintained my awareness of this goal and I ended up sitting and walking around with much better posture.

However, I stopped using Lift and my posture seems to have reverted.

Comment author: shaih 19 February 2013 08:24:49AM 1 point [-]

I found that going to the gym for about half an hour a day improved my posture. Whether this is from stronger muscles that help with posture or simply from increased self-esteem I do not know, but it definitely helped.