If it's worth saying, but not worth its own post, even in Discussion, it goes here.


Open thread, February 15-28, 2013



One part of my brain keeps being annoyed by the huge banner saying “Less Wrong will be undergoing maintenance in 1 day, 9 hours” and wishes there were a button to hide it away; another part knows perfectly well that if I did that, I would definitely forget about the maintenance.

3Elithrion
Maybe it could reappear 30 minutes and 9 hours before the maintenance or something? (This being part of "things that could be done with more web design resources".)

Does anyone else believe in deliberate alienation? Forums and organizations like LessWrong often strive to be, and claim to want to be, more (and by extension indefinitely) inclusive, but I think excluding people can be very useful in terms of social utilons and conversation, if not so good for $$$. There's a lot of value in having a pretty good picture of who you're talking to in a given social group, in terms of making effective use of jargon and references as well as appeals to emotion that actually appeal. I think careful thought should be given to exactly who you let in or block out with any given form of inclusiveness or insensitivity.

On a more personal note, I think looking deliberately weird is a great way to make your day to day happenstance interactions more varied and interesting.

Yes, insufficient elitism is a failure mode of people who were excluded at some point in their life.

This seems like a good time to link the Five Geek Social Fallacies, one of my favorite subculture sociology articles.

(Insufficient elitism as a failure mode is #1.)

4WingedViper
Acting "weird" (well or just weird, depends) is something I have contemplated, too. For now I have to confess that I mostly try to stick to the norms (especially in public) except if I have a good reason to do otherwise. I think I might make this one of my tasks to just do some random "weird" acts of kindness. About the alienation: I don't think that we should do a lot about that. I think enforcing certain rules and having our own memes and terms for stuff already has some strong effects on that. I certainly felt a bit weird when I first came here. And I already was having thoughts like "don't judge something by it's cover" etc. in my mind (avoiding certain biases).

Has anyone tried using the LessWrong web software for their own website? I would like to try it, but I looked at the source code and instructions, and it seemed rather difficult. Probably because I have no experience with Python, Ruby, or configuring servers (a non-trivial dose of all three seems necessary).

If someone would help me install it, that would be awesome. A list of steps describing exactly what needs to be done, and what (and which versions) needs to be installed on the server, would also be helpful.

The idea is: I would like to start a rationalist community in Slovakia, and a website would be helpful to attract new people. Although I will recommend that all readers visit LW, reading in a foreign language is a significant inconvenience; I expect the localized version to have at least 10 times more readers. I would also like to discuss local events and coordinate local meetups or other activities.

It seemed to me it would be best to reuse the LW software and just localize the texts; but now it seems the installation is more complicated than the discussion software I have used before (e.g. phpBB). But I really like the LW features (Markdown syntax, karma). I just have no experience with the technologies involved, and don't want to spend my next five weekends learning them. So I hope someone who already has the skills will help me.

8JGWeissman
This sounds like a subreddit of LW would be a good solution. I don't know how much work that would be to set up, but you could ask Matt.

Are there any mechanisms on this site for dealing with mental health issues triggered by posts/topics (specifically, the forbidden Roko post)? I would really appreciate any interested posters getting in touch by PM for a talk. I don't really know who to turn to.

Sorry if this is an inappropriate place to post this, I'm not sure where else to air these concerns.

0shaih
I was not here for the Roko post and I only have a general idea of what it's about. That being said, I experienced a bout of depression when applying rationality to the second law of thermodynamics. Two things helped me. First, I realized that while dealing with a future that is either very unlikely or inconceivably far away, it is hard to properly diminish the emotional impact to what is rationally warranted. Knowing that the emotions felt completely outweigh what is cause for them, you can hopefully realize that acting in the present on those beliefs is irrational, and that ignoring them would actually help you be more rational. Also realize that giving an improbable future more weight than it deserves is in itself irrational. With this I realized that by trying to be rational I was being irrational, and found it was easier to resolve that paradox than to simply get over the emotional weight it took to think about the future rationally in the first place. Second, I meditated on the following thought (after Gendlin): nothing has changed after you read a post on this website besides what is in your brain. Becoming more rational should never make you lose; after all, Rationality is Systematized Winning. So if you find that a belief you have is making you lose, it is clearly an irrational one, or is being thought of in an irrational way. Hope this helps.
5David_Gerard
Treating it as you would existential depression may be useful, I would think. There are not really a lot of effective therapies for philosophy-induced existential depression - the only way to fix it seems to be to increase your baseline happiness, which is as easy to say as it is hard to do - but it occurred to me that a university student-health therapist may see a lot of it and may at least be able to provide an experienced ear. I would be interested in any anecdotes on the subject (I'm assuming there's not a lot of data).

So, there are hundreds of diseases, genetic and otherwise, with an incidence of less than 1%. That means that the odds of you having any one of them are pretty low, but the odds of you having at least one of them are pretty good. The consequence of this is that you're less likely to be correctly diagnosed if you have one of these rare conditions, which again, you very well might. If you have a rare disorder whose symptoms include frequent headaches and eczema, doctors are likely to treat the headaches and the eczema separately, because, hey, it's pretty unlikely that you have that one really rare condition!

For example, I was diagnosed by several doctors with "allergies to everything" when I actually have a relatively rare condition, histamine intolerance; my brother was diagnosed by different doctors as having Celiac disease, severe anxiety, or ulcers, when he actually just had lactose intolerance, which is pretty common, and I still cannot understand how they systematically got that one wrong. In both cases, these repeated misdiagnoses led to years of unnecessary, significant suffering. In my brother's case, at one point they actually prescribed him drugs with signi... (read more)

but the odds of you having at least one of them are pretty good.

The odds of you having any particular disease are not independent of your odds of having other diseases.

0Randy_M
Also it depends on how much less than 1% the incidences are.
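A quick back-of-the-envelope sketch of how "each one is rare" can still add up to "at least one is fairly likely". It assumes the conditions are independent (which, per the objection above, they are not) and uses made-up incidence figures, so treat it as illustration only:

```python
# Probability of having at least one of many rare conditions, assuming
# (unrealistically) that they are independent. Incidence values are invented.

def p_at_least_one(incidences):
    p_none = 1.0
    for p in incidences:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# 300 hypothetical conditions, each with 0.5% incidence
print(p_at_least_one([0.005] * 300))   # ~0.78

# Randy_M's point: the answer is very sensitive to how far below 1% they are
print(p_at_least_one([0.0005] * 300))  # ~0.14
```

Correlations between conditions (the point made above) would change these numbers, but the qualitative point that rare conditions are collectively common is fairly robust.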
4ChristianKl
Self-experimentation. If the doctor prescribes something for you, test numerically whether it actually improves things. If you suffer from allergies, it makes sense to systematically check through self-experimentation whether your condition improves when you remove various substances from your diet. It doesn't hurt to use a symptom checker like http://symptoms.webmd.com/#./introView to get a list of more possible diagnoses.
0Elithrion
It is my impression that there is already software out there that has a doctor put in a bunch of symptoms, and then outputs an ordered list of potential diagnoses (including rare ones). The main problem being that adoption is slow. Unfortunately, after 10 minutes of searching, I'm completely failing to find a reference, so who knows how well it works (I know I read about it in The Economist, but that's it).
0[anonymous]
Doesn't that depend on how correlated they are? Given how people who have one condition always seem to have other conditions, they seem likely to be correlated in general... (Which makes me wonder if there's some h factor: 'general health factor'.)
0NancyLebovitz
There's also poking around online to find people with similar symptoms. Chris Kresser's paleo blog is pretty good, and recently had a post about histamine sensitivity. When I say I think his blog is good, I mean that he shows respect for human variation and for science-- I'm not sure that he's right about particular things.
0EvelynM
Are you referring to curetogether.com, Nancy? This graph illustrates clusters of related symptoms from that site: http://circos.ca/intro/genomic_data/
0NancyLebovitz
That's not the one I was thinking of, but it sounds promising.
-1[anonymous]
As somebody who's had to deal with doctors because of a plethora of diseases, I must say you're absolutely right. (I also shadowed a few and am considering applying to med school.) I don't remember what this concept is called, but basically it posits that "one should look for horses, not zebras" and is part of medical education. That is, a doctor should assume that the symptoms a patient has are caused by a common disease rather than by a rare one. So most doctors, thanks to their confirmation bias, dismiss any symptoms that don't fit the common-disease diagnosis. (A girl from my town went to her physician complaining of headaches. The good doctor said that she had nothing to worry about and recommended more rest and relaxation. It turned out that the girl had a brain tumor, which was discovered when she was autopsied. The good doctor is still practicing. Would this gross example of irrationality be tolerated in other professions? I think not.) Most doctors are not so rational because of the way their education is structured: becoming a doctor isn't so much about reasoning as about memorizing heaps of information verbatim. They seem prone to spew curiosity-stoppers when confronted with diseases.

this gross example of irrationality

soren, please don't take this the wrong way, but based on what I've seen you post so far, you are not a strong enough rationalist to say things like this yet. You are using your existing knowledge of biases to justify your other biases, and this is dangerous.

Doctors have a limited amount of time and other resources. Any time and other resources they put into considering the possibility that a patient has a rare disease is time and other resources they can't put into treating their other patients with common diseases. In the absence of a certain threshold of evidence suggesting it's time to consider a rare disease (with a large space of possible rare diseases, most of the work you need to do goes into getting enough evidence to bring a given rare disease to your attention at all), it is absolutely, completely rational to assume that patients have common diseases in general.

3[anonymous]
None taken, but how can you assess my level of rationality? When will I be enough of a rationalist to say things like that? What bias did I use to justify another bias? Again, testing a hypothesis when somebody's life is at stake is, I think, paramount to being a good doctor. What threshold of evidence should a doctor require?
9DanielLC
What gross example of irrationality? The vast majority of people with headaches don't have anything to worry about.
4NancyLebovitz
The question is whether "people with headaches" is the right reference class. If the headache is unusually severe or persistent, it makes sense to look deeper. Also, a doctor can ask for details about the headache before prescribing the expensive tests.
0DanielLC
More precisely, the question is whether or not the right reference class is one in which cancer tests are worthwhile. The headaches would have to be very unusually severe to provide enough evidence. It was never mentioned whether or not the doctor asked for details. It's also possible that none of those reference classes are worth looking into, and she'd need headaches and something else.
0NancyLebovitz
Cancer isn't the only solvable problem which could get ignored if headaches are handled as a minor problem which will go away on its own.
0DanielLC
Yeah, but the other ones also get ignored if you assume it's cancer. To my knowledge, they have to be individually tested for. If none is worth testing for individually, it's best to ignore the headaches.
-1A1987dM
“The vast majority” != “All”. What's wrong with “you most likely have nothing to worry about, but I suggest doing this exam on the off-chance that you do”? You've got to multiply the probability by the disutility, and the result can be large enough to worry about even if the probability is small. (Yes, down that way Pascal's mugging lies, but still.) EDIT: Okay, given the replies to this comment I'm going to Aumann my estimate of the cost of tests for rare diseases upwards by a couple of orders of magnitude. Retracted.
9DanielLC
I'm pretty sure that, in this case, the probability is smaller than the disutility is large. Getting tested for cancer doesn't come cheap.
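To put numbers on the shape of this disagreement, here is a toy expected-value sketch; every figure in it is invented purely for illustration:

```python
# Toy expected-value calculation for ordering a scan, with entirely made-up numbers.
p_disease = 1e-5       # hypothetical prior that a headache patient has a treatable tumor
benefit   = 1_000_000  # hypothetical value of catching it early
test_cost = 2_000      # hypothetical cost of the scan (money, time, false-positive follow-ups)

print(p_disease * benefit - test_cost)  # -1990.0: negative, so not worth testing

# The argument above is really about the inputs: a sufficiently severe or
# persistent headache raises the prior enough to flip the sign.
print(0.01 * benefit - test_cost)       # 8000.0: positive once the prior is ~1%
```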
4ChristianKl
Doctors are taught to practice evidence-based medicine. There's a lack of clinical trials showing that you can increase life span by routinely giving brain scans to people who suffer from headaches. If I understand the argument right, then doctors are basically irrational because they favor empirical results from trials over trying to think the problem through on an intellectual level?
3Kawoomba
MONNAY. The question is, whose utility?
2beoShaffer
There's also the problem of false positives. Treatments for rare diseases are often expensive and/or carry serious side effects.
2A1987dM
I was thinking of diagnostics, not treatment, though from DanielLC's reply I guess I had underestimated the cost of that, too.
3ChristianKl
If you start diagnosing and find false positives, then you are usually going to treat them.
-7[anonymous]

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.", rather than (paraphrasing for humor) "That's getting erased from the internet. No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?"

The real irony is that Eliezer is now a fantastic example of the commitment/sunk cost effect which he has warned against repeatedly: having made an awful decision, and followed it up with further awful decisions over years (including at least 1 Discussion post deleted today and an expansion of topics banned on LW; incidentally, Eliezer, if you're reading this, please stop marking 'minor' edits on the wiki which are obviously not minor), he is trapped into continuing his disastrous course of conduct and escalating his interventions or justifications.

And now the basilisk and the censorship are an established part of the LW or MIRI histories which no critic could possibly miss, and which pattern-matches on religion. (Stross claims that it indicates that we're "Calvinist", which is pretty hilarious for anyone who hasn't drained the term of substantive meaning and turned it into a buzzword for people they don't like.) A pity.


While we're on the topic, I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey, it would be much easier to go around to all the places discussing it and say something like 'this is solely Eliezer's problem; 98% disagree with censoring it'. But he didn't, and so just as I predicted, we have lost a powerful method of damage control.

It sucks being Cassandra.

9Mitchell_Porter
Let me consult my own crystal ball... Yes, the mists of time are parting. I see... I see... I see, a few years from now, a TED panel discussion on "Applied Theology", chaired by Vernor Vinge, in which Eliezer, Roko, and Will Newsome discuss the pros and cons of life in an acausal multiverse of feuding superintelligences. The spirits have spoken!
0A1987dM
I'm looking forward to that.
6Eliezer Yudkowsky
Gwern, I made a major Wiki edit followed by a minor edit. I wasn't aware that the latter would mask the former.
9gwern
When you're looking at consolidated diffs, it does. Double-checking, your last edit was marked minor, so I guess there was nothing you could've done there. (It is good wiki editing practice to always make the minor or uncontroversial edits first, so that way your later edits can be looked at without the additional clutter of the minor edits or they can be reverted with minimal collateral damage, but that's not especially relevant in this case.)
5Richard_Kennaway
That's already true without the basilisk and censorship. The similarities between transhumanism and religion have been remarked on for about as long as transhumanism has been a thing.

An additional item to pattern-match onto religion, perhaps I should have said.

4Pablo
Also, note that this wasn't an unsolicited suggestion: in the post to which gwern's comment was posted, Yvain actually said that he was "willing to include any question you want in the Super Extra Bonus Questions section [of the survey], as long as it is not offensive, super-long-and-involved, or really dumb." And those are Yvain's italics.
3Kevin
At this point it is this annoying, toxic meta discussion that is the problem.
0A1987dM
Then EY would have freaked the hell out, and I don't know what the consequences of that would be but I don't think they would be good. Also, I think the basilisk question would have had lots of mutual information with the troll toll question anyway: [pollid:419] EDIT: I guess I was wrong.

It's too late. This poll is in the wrong place (attracting only those interested in it), will get too few responses (certainly not >1000), and is now obviously in reaction to much more major coverage than before so the responses are contaminated.

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.

2A1987dM
Actually, I was hoping to find a strong correlation between support for the troll toll and support for the basilisk censorship, so that I could estimate the number of people who would have supported the censorship from the answers to the toll question in the survey. But it turns out that the fraction of censorship supporters is about 30% both among toll supporters and among toll opposers. (But the respondents to my poll are unlikely to be an unbiased sample of all LWers.)
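A minimal sketch of the check being described, with invented counts standing in for the actual poll results:

```python
# Hypothetical 2x2 poll counts (the real [pollid:419] tallies are not reproduced here).
counts = {
    ("pro_toll",  "pro_censor"): 15, ("pro_toll",  "anti_censor"): 35,
    ("anti_toll", "pro_censor"): 12, ("anti_toll", "anti_censor"): 28,
}

def frac_pro_censor(toll_position):
    pro  = counts[(toll_position, "pro_censor")]
    anti = counts[(toll_position, "anti_censor")]
    return pro / (pro + anti)

print(frac_pro_censor("pro_toll"))   # 0.3
print(frac_pro_censor("anti_toll"))  # 0.3
# Near-equal conditional fractions mean the toll answers carry almost no
# information about censorship support, which is the conclusion drawn above.
```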
5wedrifid
The 'troll toll' question misses most of the significant issue (as far as I'm concerned). I support the troll toll but have nothing but contempt for Eliezer's behavior, comments, reasoning and signalling while implementing the troll toll. And in my judgement, most of the mutual information with the censorship or Roko's Basilisk issues (things like overconfidence, and various biases of the kind Gwern describes) has to do with the judgement of competence based on that behavior rather than with the technical change to the lesswrong software.
0Shmi
Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression? I thought this was more akin to Scientology, where any mention of Xenu to the uninitiated ought to be suppressed. Sure does. Then again, it probably sucks more being Laocoön.

can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

The basilisk is harmless. Eliezer knows this. The Streisand effect was the intended consequence of the censorship. The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It's all a clever memetic inoculation program!

disclaimer : I don't actually believe this.

Another possibility: Eliezer doesn't object to the meme that anyone who doesn't donate to SIAI/MIRI will spend eternity in hell being spread in a deniable way.

5Shmi
Why stop there? In fact, Roko was one of Eliezer's many sock puppets. It's your basic Ender's Game stuff.
4[anonymous]
We are actually all Eliezer's sock puppets. Most of us unfortunately are straw men.

We are the hollow men / we are the stuffed men / Leaning together / Headpiece filled with straw. Alas! / Our dried comments when / we discuss together / Are quiet and meaningless / As median-cited papers / or reports of supplements / on the Internet.

1Viliam_Bur
Another possibility: Eliezer does not want the meme to be associated with LW. Because, even if it was written by someone else, most people are predictably likely to read it and remember: "This is an idea I read on LW, so this must be what they believe."
8wedrifid
It's certainly an inoculation for information hazards. Or at least against believing information hazard warnings.
8Eugine_Nier
Alternatively, the people dismissing the idea out of hand are not taking it seriously and thus not triggering the information hazard. Also the censorship of the basilisk was by no means the most troubling part of the Roko incident, and as long as people focus on that they're not focusing on the more disturbing issues. Edit: The most troubling part were some comments, also deleted, indicating just how fanatically loyal some of Eliezer's followers are.
1Locaha
Really? Or do you just want us to believe that you don't believe this???

Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

No. I have watched Eliezer make this unforced error now for years, sliding into an obvious and common failure mode, with mounting evidence that censorship is, was, and will be a bad idea, and I have still not seen any remotely plausible explanation for why it's worthwhile.

Just to take this most recent Stross post: he has similar traffic to me as far as I can tell, which means that since I get ~4000 unique visitors a day, he gets as many and often many more. A good chunk will be to his latest blog post, and it will go on being visited for years on end. If it hits the front page of Hacker News as more than a few of his blog posts do, it will quickly spike to 20k+ uniques in just a day or two. (In this case, it didn't.) So we are talking, over the next year, easily 100,000 people being exposed to this presentation of the basilisk (just need average 274 uniques... (read more)

0Kevin
Can you please stop with this meta discussion? I banned the last discussion post on the Basilisk, not Eliezer. I'll let this one stand for now as you've put some effort into this post. However, I believe that these meta discussions are as annoyingly toxic as anything at all on Less Wrong. You are not doing yourself or anyone else any favors by continuing to ride this. The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy? At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

There's now the impression that a community of aspiring rationalists — or, at least, its de-facto leaders — are experiencing an ongoing lack of clue on the subject of the efficacy of censorship on online PR.

The "reputational damage" is not just "Eliezer or LW have this kooky idea."

It is "... and they think there is something to be gained by shutting down discussion of this kooky idea, when others' experience (Streisand Effect, DeCSS, etc.) and their own (this very thread) are strong evidence to the contrary."

It is the apparent failure to update — or to engage with widely-recognized reality at all — that is the larger reputational damage.

It is, for that matter, the apparent failure to realize that saying "Don't talk about this because it is bad PR" is itself horrible PR.

The idea that LW or its leadership dedicate nontrivial attention to encircling and defending against this kooky idea makes it appear that the idea is central to LW. Some folks on the thread on Stross's forum seem to think that Roko discovered the hidden se... (read more)

having one's work cruelly mischaracterized and held up to ridicule is a whole bunch of no fun.

Thank you for appreciating this. I expected it before I got started on my life, I'm already accustomed to it by now, I'm sure it doesn't compare to the pain of starving to death. Since I'm not in any real trouble, I don't intend to angst about it.

0fubarobfusco
Glad to hear it.

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

The basilisk is now being linked on Marginal Revolution. Estimated site traffic: >3x gwern.net; per above, that is >16k uniques daily to the site.

What site will be next?

0Viliam_Bur
More importantly, will endless meta-discussions like this make another site more likely or less likely to link it?
7gwern
Will an abandonment of a disastrous policy be more or less disastrous? Well, when I put it that way, it suddenly seems obvious.
2Viliam_Bur
Less disastrous as in "people spending less time criticizing Eliezer's moderating skills"? Probably yes. Less disastrous as in "people spending less time on LW discussing the 'basilisk'"? Probably no. I would expect at least dozen articles about this topic within the first year if the ban would be completely removed. Less disastrous as in "people less likely to create more 'basilisk'-style comments"? Probably no. Seems that the policy prevented this successfully.

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

Answering the rhetorical question because the obvious answer is not what you imply [EDIT: I notice that J Taylor has made a far superior reply already]: Yes, it limits the ongoing reputational damage.

I'm not arguing with the moderation policy. But I will argue with bad arguments. Continue to implement the policy. You have the authority to do so, Eliezer has the power on this particular website to grant that authority, most people don't care enough to argue against that behavior (I certainly don't) and you can always delete the objections with only minimal consequences. But once you choose to make arguments that appeal to reason rather than the preferences of the person with legal power then you can be wrong.

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

I've had people come to me who are traumatised by basilisk considerations. From what I can tell almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear ... (read more)

I've had people come to me who are traumatised by basilisk considerations. From what I can tell, almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (i.e. direct self-reports that are coherent) that a significant reason they "take the basilisk seriously" is that Eliezer considers it a sufficiently big deal to take such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game-theory question with multiple practical answers.

I get the people who've been frightened by it because EY seems to take it seriously too. (Dmytry also gets them, which is part of why he's so perpetually pissed off at LW. He does his best to help, as a decent person would.) More generally, people distressed by it feel they can't talk about it on LW, so they come to RW contributors - addressing this was why it was made a separate article. (I have no idea why Warren Ellis then Charlie Stross happened to latch onto it - I wish they hadn't, because it was totally not ready, so I had to spend the past few days desperately fixing it up, and it's still terrible.) EY not in fact thinking it's feasible or important is a point I need to address in the last section of the RW article, to calm this concern.

6jbeshir
It would be nice if you'd also address the extent to which it misrepresents other LessWrong contributors as thinking it is feasible or important (sometimes to the point of mocking them based on its own misrepresentation). People around LessWrong engage in hypothetical what-if discussions a lot; it doesn't mean that they're seriously concerned. Lines like "Though it must be noted that LessWrong does not believe in or advocate the basilisk ... just in almost all of the pieces that add up to it." are also pretty terrible given we know only a fairly small percentage of "LessWrong" as a whole even consider unfriendly AI to be the biggest current existential risk. Really, this kind of misrepresentation of alleged, dubiously actually held extreme views as the perspective of the entire community is the bigger problem with both the LessWrong article and this one.

The article is still terrible, but it's better than it was when Stross linked it. The greatest difficulty is describing the thing and the fuss accurately while explaining it to normal intelligent people without them pattern matching it to "serve the AI God or go to Hell". This is proving the very hardest part. (Let's assume for a moment 0% of them will sit down with 500K words of sequences.) I'm trying to leave it for a bit, having other things to do.

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

As far as I can tell the entire POINT of LW is to talk about various mental pathologies and how to avoid them or understand them even if they make you very uncomfortable to deal with or acknowledge. The reasons behind talking about the basilisk or basilisks in general (apart from metashit about censorship) are just like the reasons for talking about trolley problems even if they make people angry or unhappy. What do you do when your moral intuitions seem to break down? What do you do about compartmentalization or the lack of it? Do you bite bullets? Maybe the mother should be allowed to buy acid.

To get back to meta shit: If people are complaining about the censorship and you are sick of the complaints, the simplest way to stop them is to stop the censorship. If someone tells you there's a problem, the response of "Quit your bitching, it's annoying" is rarely appropriate or even reasonable. Being annoying is the point of even lameass activism like this. I personally think any discussion of the actual basilisk has reached ev... (read more)

-45Kevin

The meta discussions will continue until morale improves

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

I hate to use silly symmetrical rhetoric, however:

The secret has been leaked and the reputational damage is ongoing. Is there really anything to be gained by continuing the current moderation policy?

-2Richard_Kennaway
I wouldn't call him that, and not because I have any doubt about his trustworthiness. It's the other word, "source", that I wouldn't apply. He's a professional SF author. His business is to entertain with ideas, and his blog is part of that. I wouldn't go there in search of serious analysis of anything, any more than I would look for that on RationalWiki. Both the article in question and the comments on it are pretty much on a par with RationalWiki's approach. In fact (ungrounded speculation alert), I have to wonder how many of the commenters there are RW regulars, there to fan the flame.

Stross is widely read, cited, and quoted approvingly, on his blog and off (eg. Hacker News). He is a trusted source for many geeks.

6metatroll
RationalWiki's new coat-of-arms is a troll riding a basilisk.
-7Locaha
-2wedrifid
Did you happen to catch the deleted post? Was there any interesting reasoning contained therein? If so, who was the author and did they keep a backup that they would be willing to email me? (If they did not keep a backup... that was overwhelmingly shortsighted unless they are completely unfamiliar with the social context!)
2Larks
I saw it. It contained just a link and a line asking for "thoughts", or words to that effect. Maybe there was a quote - certainly nothing new or original.
0wedrifid
Thanks. I've been sent links to all the recently deleted content and can confirm that nothing groundbreaking was lost.
4Eugine_Nier
I'm not convinced the Streisand Effect is actually real. It seems like an instance of survival bias. After all, you shouldn't expect to hear about the cases when information was successfully suppressed.
3wedrifid
This is a bizarre position to take. The effect does not constitute a claim that, all else being equal, attempts to suppress information always backfire. Instead it describes those cases where information is spread more widely due to the suppression attempt. This clearly happens sometimes. The Wikipedia article gives plenty of unambiguous examples. It would be absurd to believe that the number in question would have been made into T-shirts, tattoos and a popular YouTube song if no attempt had been made to suppress it. That doesn't mean (or require) that in other cases (particularly cases where the technological and social environment was completely different) powerful figures can't be successful in suppressing information.
4wedrifid
Heck, there is little doubt that even your paraphrased humorous alternative would have been much better than what actually happened. It's not often that satirical caricatures are actually better than what they are based on!
-6Richard_Kennaway

White coat hypertension is a phenomenon in which patients exhibit elevated blood pressure in a clinical setting (doctor's office, hospital, etc.) but not in other settings, apparently due to anxiety caused by being in the clinical setting.

Stereotype threat is the experience of anxiety or concern in a situation where a person has the potential to confirm a negative stereotype about their social group. Since most people have at least one social identity which is negatively stereotyped, most people are vulnerable to stereotype threat if they encounter a situation in which the stereotype is relevant. Although stereotype threat is usually discussed in the context of the academic performance of stereotyped racial minorities and women, stereotype threat can negatively affect the performance of European Americans in athletic situations as well as men who are being tested on their social sensitivity.

Math anxiety is anxiety about one's ability to do mathematics, independent of skill. Highly anxious math students will avoid situations in which they have to perform mathematical calculations. Math avoidance results in less competency, exposure and math practice, leaving students more anxious an... (read more)

Over the last month Bitcoin's nearly doubled in value. It's now nearly at its historical high. http://bitcoincharts.com/charts/mtgoxUSD#tgMzm1g10zm2g25zv

Does anybody know what drives the latest Bitcoin price development?

7nigerweiss
The bitcoin market value is predicated mostly upon drug use, pedophilia, nerd paranoia, and rampant amateur speculation. Basically, break out the tea leaves.

drug use, pedophilia, (...), and rampant amateur speculation

Hey, that's almost 2.5% of the world GDP! Can't go wrong with a market this size.

5Tripitaka
As of January, the pizza chain Domino's accepts payment in bitcoins; and as of this week, Kim Dotcom's "Mega" file-hosting service accepts them, too.
9drethelin
Domino's does not accept bitcoins. A third-party site will order Domino's for you, and you pay THEM in bitcoins.
0nigerweiss
The baseline inconvenience cost associated with using bitcoins is also really high for conducting normal commerce with them.

Any bored nutritionists out there? I've put together a list of nutrients, with their USDA recommended quantities/amounts, and scoured Amazon for the best deals, in trying to create my own version of Soylent (a rough cost-minimization sketch follows the list below). My search was complicated by the following goals:

  • I want my Soylent to have all USDA recommendations for a person of my age/sex/mass.
  • I want my Soylent to be easy to make (which means a preference for liquid and powder versions of nutrients).
  • My Soylent should be as cheap, per day, as possible (I'd rather have 10 lbs of Vitamin C at $0.00/day than 1lb at $0.01/day).
  • I'd like it to be trivially easy to possess a year's supply of Soylent, should I find this to be a good experiment.
  • I want to make it easy for other people to follow my steps, and criticize my mistakes, because I'm totally NOT a nutritionist, but I'm awfully tired of being told that I need X amount of Y in my diet, without citations or actionable suggestions (and it is way easier to count calories with whey protein than at a restaurant).
  • I want the items to be available to anybody in the USA, because I live at the end of a pretty long supply chain, and can't find all this stuff locally.
  • I'm trying not to o
... (read more)
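For what it's worth, the "cheapest mix that hits all the targets" goal is the classic diet problem from linear programming, so a solver can do the hard part. A minimal sketch using scipy follows; the ingredients, prices, and nutrient contents are tiny made-up placeholders, not real Soylent numbers:

```python
# Sketch of the diet problem: minimize daily cost subject to nutrient minimums.
# All ingredient data below is invented for illustration.
import numpy as np
from scipy.optimize import linprog

ingredients   = ["whey_powder", "oat_flour", "multivitamin"]
cost_per_unit = np.array([0.40, 0.10, 0.15])   # $ per serving (hypothetical)

# Rows: nutrients; columns: ingredients (nutrient per serving, hypothetical).
A = np.array([
    [24.0,   4.0,  0.0],   # protein (g)
    [120.0, 150.0, 5.0],   # calories (kcal)
    [0.0,    0.0, 90.0],   # vitamin C (mg)
])
daily_minimum = np.array([56.0, 2000.0, 90.0])  # USDA-style targets (hypothetical)

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so ">=" constraints get negated.
result = linprog(c=cost_per_unit, A_ub=-A, b_ub=-daily_minimum, bounds=(0, None))
print(dict(zip(ingredients, result.x.round(2))), "daily cost:", round(result.fun, 2))
```

The real version would just use a much longer nutrient/ingredient table, and could add upper bounds for nutrients with toxicity limits.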
4PECOS-9
Relevant dinosaur comic. The blog section "What are the haps my friends" below the comic also has some information that might be useful. As much as I love this idea, I'd be too worried about possible unforeseen consequences to be one of the first people to try it. For example, the importance of gut flora is something that was brought up in the comments to the Soylent blog post that didn't occur to me at all while reading. Even if you can probably get around that, it's just an example of how there are a lot of possible problems you could be completely blind to. As another commenter on his follow-up post said: Maybe it'd be useful to look up research on people who have to survive on tube feeding for an extended period of time. Of course, there are lots of confounding factors there, but I bet there's some good information out there (I haven't looked). Also, most of the benefits he described are easily explained as a combination of placebo, losing weight by eating fewer calories, and exercise. But still, I do like the idea. I bet a Kickstarter for something like this would do really well.

I am also worried about possible unforeseen consequences of eating bad diets, but one of those bad diets is my current one, so...

3gwern
Interview: http://www.vice.com/en_uk/read/rob-rhinehart-no-longer-requires-food
2beriukay
I got in touch with Mr. Rhinehart about my list. Here's his analysis of what I currently have: I should be getting some money from the Good Judgment project soon. I'll buy the ingredients then.
1Qiaochu_Yuan
The list formatting doesn't seem to have quite worked. Can you try replacing the dashes with asterisks? Anyway, I wish I could help, but I am not a nutritionist.
1gwern
He needs a full empty line between the list and his preceding sentence, I think.
0beriukay
Oops, sorry!

I'm quite excited that MIRI's new website has launched. My thanks to Louie Helm (project lead), Katie Hartman (designer), and the others who helped with the project: Malo Bourgon, Alex Vermeer, Stephen Barnes, Steven Kaas, and probably some others I don't know about.

2[anonymous]
Grats on the URL. Things we like about the new site:
  • Color scheme - The addition of orange and lime to the usual cobalt blue is quite classy.
  • Dat logo - We love that it degrades to the old logo in monochrome. We love the subtle gradient on the lettering and the good kerning job on the RI, though we regret the harsh M.
  • Navbar - More subtle gradients on the donate button.
  • Dat quote - Excellent choice with the selective bolding and the vertical rule. Not sure how I feel about the italic serif "and" between Tallinn's titles; some italic correction missing there, but the web is a disaster for such things anyway.
  • Headings - Love the idea of bold sans heading with light serif subheading at the same height. Could be more consistent, but variety is good too.
  • Font choices - Quattrocento is such a great font. Wouldn't mind seeing more of it in the sidebar, though. Source Sans Pro is nice but clashes slightly. Normally one thinks of futurist/post-modern sites being totally clean and sans serif everywhere. I'm really happy with the subversion here.
  • Stylized portraits - Love them. Seems a different process was used on the team page as the research page; the team page's process is less stylized, but also holds up better with different face types, IMO.
Overall: exceptionally well done.

How much do actors know about body language? Are they generally taught to use body language in a way consistent with what they're saying and expressing with their faces? (If so, does this mean that watching TV shows or movies muted could be a good way to practice reading body language?)

I do not believe it would be a good way to practice, because even with actors acting the way they are supposed to (consistent body language and facial expressions) let's say, conservatively, 90% of the time, you are left with 10% wrong data. This 10% wouldn't be that bad except for the fact that it is actors trying to act correctly (meaning you would learn what a fabricated emotion looks like and interpret it as a real one). This could be detrimental to many uses of being able to read body language, such as telling when other people are lying.

My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know the truth, make a prediction, and then read what the truth is. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.

After practicing like this for about a week I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.

This may not be extremely applicable to the real world because emotions felt in court room... (read more)

4PECOS-9
That's a really cool idea. Did you record your predictions and do a statistical analysis on them to see whether you definitely improved?
5shaih
My knowledge of statistics at the time was very much lacking (that being said, I still only have about a semester's worth of stats), so I was not able to do any type of statistical analysis that would be rigorous in any way. I did however keep track of my predictions, and went from around 60% on the first day (slightly better than guessing, probably because of the books I mentioned) to around 80% about a week later after practicing every day. I no longer have the exact data, though, only approximate percentages of how I did. I remember also that it was difficult tracking down the cases in which the truth was known, and this was very time-consuming; that is the predominant reason I only practiced like this for a week.
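For anyone who wants to redo this more rigorously, the relevant check is just a binomial test; here is a minimal sketch, with invented trial counts since the original tallies are gone:

```python
# Did a week of practice beat chance? Counts are placeholders, not shaih's data.
from scipy.stats import binomtest

day_one    = binomtest(k=6,  n=10, p=0.5)  # ~60% correct on ~10 guesses
week_later = binomtest(k=16, n=20, p=0.5)  # ~80% correct on ~20 guesses

print(day_one.pvalue)     # ~0.75: 6/10 is entirely consistent with guessing
print(week_later.pvalue)  # ~0.012: 16/20 would be fairly surprising under pure guessing
```

Twenty guesses is still a small sample, and showing that the improvement itself (60% vs 80%) is real would take considerably more trials.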
2[anonymous]
Finding such videos without discovering the truth inadvertently seems difficult. Do you have links to share?
1shaih
I don't have them any longer. An easy way to do it is to have a friend pick out videos for you (or have someone post links to videos here and have people PM them for the answer). Or, while on YouTube, look for names that you've heard before but don't quite remember clearly, which is not really reliable, but it's better than nothing.
2[anonymous]
In which case would this be preferable to live human interaction? It lacks the immediate salient feedback and strong incentives of a social setting. The editing and narrative would be distracting and watching a muted movie sounds (or rather, looks) quite boring.
2Tenoke
They get some training, and it depends a lot on what you are watching, but you can learn a bit if you don't forget that this is not exactly how people act. A show like 'Lie to Me' will probably do more good than other shows (Paul Ekman is involved in it), but there are also inaccuracies there. Perhaps you can study the episodes and then read arguments about what was wrong in a certain episode (David Matsumoto used to post sometimes about what was inaccurate in some episodes, IIRC).

Dude. Seriously. Spoilers.

This comment is a little less sharp than it would have been had I not gone to the gym first; but unless you (and the apparent majority in this thread) actively want to signal contempt for those who disagree with you, please remember that there are some people here who do not want to read about the fucking basilisk.

It's been suggested to me that since I don't blog, I start an email newsletter. I ignored the initial suggestions, but following the old maxim* began to seriously consider it on the third or fourth suggestion (who also mentioned they'd even pay for it, which would be helpful for my money woes).

My basic idea is to once a month compile: everything I've shared on Google+, articles excerpted in Evernote or on IRC, interesting LW comments**, and a consolidated version of the changes I've made to gwern.net that month. Possibly also include media I've consumed with reviews for books, anime, music etc akin to the media thread.

I am interested in whether LWers would subscribe:

[pollid:415]

If I made it a monthly subscription, what does your willingness-to-pay look like? (Please be serious and think about what you would actually do.)

[pollid:416]

Thanks to everyone voting.

* "Once is chance; twice is coincidence; three times is enemy action." Or in Star Wars terms: "If someone calls you a Hutt, ignore them; if two people call you a Hutt, begin to wonder; and if three do, buy a slobber-pail and start stockpiling glitterstim."

** For example, my recent comments on the SAT (Harvar... (read more)

4gwern
After some further thought and seeing whether I could handle monthly summaries of my work, I've decided to open up a monthly digest email with Mailchimp. The signup form is at http://eepurl.com/Kc155
2jsalvatier
I would turn the email into an RSS.
2curiousepic
Why do you not blog? The differences between it and this newsletter are ambiguous.
6gwern
Reasons for 'not a blog':
  • I don't have any natural place on gwern.net for a blog
  • I've watched people waste countless hours dealing with regular blog software like Wordpress and don't want to go anywhere near it
Reasons for email specifically:
  • email lists like Google Groups or MailChimp seem both secure and easy to use for once-a-month updates
  • more people seem to still use email than RSS readers these days
  • patio11 says that geeks/Web people systematically underrate the usefulness of an email newsletter
  • there's much more acceptance of charging for an email newsletter
7Risto_Saarelma
Might be worth noting that the customer base patio11 is probably most familiar with are people who pay money for a program that lets them print bingo cards. They might be a different demographic than people who know what a gwern is. For a data point, I live in RSS, don't voluntarily follow any newsletters, and have become conditioned to associate the ones I do get from some places I'm registered at as semi-spam. Also if I pay money for something, then it becomes a burdensome Rare and Valuable Possession I Must Now Find a Safe Place For, instead of a neat thing I can go look at, then forget all about, then go look up again after five years based on some vaguely remembered details. So I'll save myself stress if I stick with free stuff.
4gwern
Maybe. On the other hand, would you entertain for even a second the thought of paying for an RSS feed? Personally, I can think of paying for an email newsletter if it's worth it, but the thought of paying for a blog with an RSS feed triggers an 'undefined' error in my head. Email is infinitely superior to RSS in this respect; everyone gets a durable copy and many people back up their emails (including you - right? right?). I have emails going back to 2004. In contrast, I'm not sure how I would get my RSS feeds from a year ago since Google Reader seems to expire stuff at random, never mind 2006 or whenever I started using RSS.
5Risto_Saarelma
You're right about the paying part. I don't care to even begin worrying about how setting Google Reader to fetch something from beyond a paywall might work, but e-mail from a paid service makes perfect sense, tech-wise. And now that you mention it, if I were living in an email client instead of Google Reader, I could probably get along just fine having stuff from my RSS subscriptions get pushed into my mailbox. Unfortunately, after 15 years I still use email so little that I basically consider it a hostile alien environment and haven't had enough interesting stuff go on there so far that I'd ever really felt the need to back up my mails. Setting up a proper email workflow and archiving wouldn't be a very big hurdle if I ever got reason to bother with it, though. An actual thing I would like is an archived log of "I read this thing today and it was interesting", preferably with an archive of the thing. I currently use Google Reader's starring thing for this, but that's leaving stuff I actually do care about archiving at Google's uncertain mercy, which is bad. Directing RSS to email would get me this for free. Did I just talk myself into possibly starting to use email properly with a use case where I'd mostly be mailing stuff to myself?
4chemotaxis101
I'd recommend using Blogtrottr for turning the content from your RSS feeds into email messages. Indeed, as email is (incidentally) the only web-related tool I can (and must) consistently use throughout the day, I tend to bring a major part of the relevant web content I'm interested in to my email inbox - including twitter status updates, LW Discussion posts, etc.
4Viliam_Bur
How about "blog.gwern.net" or even "gwernblog.net"? If some people are willing to pay for your news, maybe you could find a volunteer (by telling them that creating the blog software is the condition for you to publish) to make the website. To emulate the (lack of) functionality of an e-mail, you only need to log in as the administrator, and write a new article. The Markdown syntax, as used on LW, could be a good choice. Then the website must display the list of articles, the individual articles, and the RSS feed. That's it; someone could do that in a weekend. And you would get the extra functionality of being able to correct mistakes in already published articles, and make hyperlinks between them. Then you need functionality to manage users: log in as user, change the password, adding and removing users as admin. There could even be an option for users to enter their e-mails, so the new articles will be sent to them automatically (so they de facto have a choice between web and e-mail format). This all is still within a weekend or two of work.
0gwern
I meant in my existing static site setup. (If I were to set up a blog of my own, it would probably go into a subdomain, yes.) How would that help? I don't often need to correct mistakes in snippets, month-old LW comments, etc. I do often correct my essays, but those are not the issue.
0knb
Have you considered a Google Blogger site? They aren't quite as customizable as WordPress, but you can put AdSense on your site in like 5-10 minutes, if you're interested. Plus free hosting, even with your own domain name. I've used blogger for years, and I've never had downtime or technical problems.
1gwern
Those incredibly awful sites with the immovable header obscuring everything and broken scrollbars and stuff? No, I've never considered them, although I'm glad they're not as insecure and maintenance heavy as the other solutions... (I already have AdSense on gwern.net, and hosting isn't really a big cost right now.)
1Risto_Saarelma
I'd be a lot more willing to consider a somewhat larger single payment that gets me a lifetime subscription than a monthly fee. I'm pretty sure I don't want to deal with a monthly fee, even if it's $1, it feels like having to do the buying decision over and over again every month, but I can entertain dropping a one-off $20 for a lifetime subscription. Of course that'd only net less than two years worth of posts even for the $1 monthly price point, so this might not be such a great deal for you.
1gwern
I wouldn't do a lifetime subscription simply because I know that there's a very high chance I would stop or the quality would go downhill at some point. Even if people were willing to trust me and pay upfront, I would still consider such a pricing strategy extremely dishonest.
0NancyLebovitz
How does an annual fee feel?
0gwern
Better but still too long a promise for the start. (Interestingly, patio11 does seem to think that annual billing is more profitable than monthly.)
0insufferablejake
I enjoy your posts, and I have been a consumer of your G+ posts and your blog for some time now, even though I don't comment much and just lurk about. While I would want some sort of syndication of your stuff, I am wondering whether an external expectation of having to meet the monthly compilation target, or the fact that you know for sure that there is a definite large audience for your posts now, will affect the quality of your posts. I realize that there is likely not any answer possible for this beforehand, but I'd like to know if you've considered this.
0gwern
I don't know. I'm more concerned that reviewing & compiling everything at the end of the month will prove to be too much of a stressful hassle or use of time than that I'll water down content.
0satt
I voted no, but think a Gwern Email Digest is a worthwhile idea regardless. I just don't sign up for email newsletters generally.

Conditional Spam (Something we could use a better word for but this will do for now)

In short: Conditional Spam is information that is valuable to 1 percent of people and spam to the other 99.

A huge proportion of the content generated and shared on the internet is in this category, and this becomes more and more the case as a greater percentage of the population writes to the internet as well as reading it. In this category are things like people's photos of their cats, stories about day-to-day anecdotes, baby pictures, but ALSO, and importantly, things like most scientific studies, news articles, and political arguments. People criticize Twitter for encouraging seemingly narcissistic, pointless microblogging, but in reality it's the perfect engine for distributing conditional spam: Anyone who cares about your dog can follow you, and anyone who doesn't can NOT.

When your Twitter or Facebook or RSS is full of things that don't inform you (or entertain you, since this applies to fun as well as usefulness), this isn't a failing of the internet. It's a failing of your filter. The internet is a tool optimized to distribute conditional spam as widely as possible, and you can tune your use of it to try and make th... (read more)

2Viliam_Bur
It would be nice to have some way of adding tags to the information, so that we could specify what information we need, and avoid the rest. Unfortunately, this would not work, because the tagging would be costly, and there would be incentives to tag incorrectly. For example, I like to be connected with people I like on Facebook. I just don't have to be informed every time they fart. So I would prefer it if some information were labeled as "important" for the given person, and I would only read those. But that would only give me many links to YouTube videos labeled "important"; and even this assumes too optimistically that people would bother to use the tags.

I missed my high-school reunion once because a Facebook group started specifically to notify people about the reunion gradually became a place for idle chat. After a few months of stupid content I learned to ignore the group. And then I missed a short message which was exceptionally on-topic. There was nothing to make it stand out from the rest.

In groups related to one specific goal, a solution could be to mark some messages as "important" and to make the importance a scarce resource. Something like: you can only label one message a week as important. But even this would be subject to games, such as "this message is obviously important, so someone else is guaranteed to spend their point on it, so I will keep my point for something else."

The proper solution would probably use some personal recommendation system. Such as: there is a piece of information, users can add their labels, and you can decide to "follow" some users, which means that you will see what they labeled. Maybe something like Digg, but you would see only the points that your friends gave to the articles. You could have different groups of friends for different filters.
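A minimal sketch of the "see only what the people I follow marked" filter described above; the names and data layout are made up:

```python
# Toy model of the recommendation scheme sketched above: users label items,
# and each reader sees only the items labelled by people they follow.
from collections import defaultdict

labels  = defaultdict(set)   # item_id -> users who marked it "important"
follows = defaultdict(set)   # user    -> users they follow

def mark_important(user, item_id):
    labels[item_id].add(user)

def feed_for(user, all_items):
    trusted = follows[user]
    return [item for item in all_items if labels[item] & trusted]

follows["viliam"] = {"alice", "bob"}
mark_important("alice", "reunion-announcement")
mark_important("stranger", "yet-another-video")
print(feed_for("viliam", ["reunion-announcement", "yet-another-video"]))
# ['reunion-announcement'] -- the on-topic message no longer drowns in the idle chat
```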
0drethelin
SEO is the devil.

In the short story/paper "Sylvan's Box" by Graham Priest the author tries to argue that it's possible to talk meaningfully about a story with internally inconsistent elements. However, I realized afterward that if one truly was in possession of a box that was simultaneously empty and not empty there would be no way to keep the inconsistency from leaking out. Even if the box was tightly closed it would both bend spacetime according to its empty weight and also bend spacetime according to its un-empty weight. Opening the box would cause photons ... (read more)

2Viliam_Bur
The paper only showed that it is possible to talk meaningfully about a story with an element which is given inconsistent labels, where the consequences of having the inconsistent labels are avoided. The hero looks in the box and sees that it "was absolutely empty, but also had something in it" and "the sense of touch confirmed this". How exactly? Did photons both reflect and not reflect from the contents? Was it translucent? Or did it randomly appear and disappear? How did the fingers both pass through and not pass through the contents? But more importantly, what would happen if the hero tried to spill out the contents? Would something come out or not? What if they tried to use the thing / non-thing to detonate a bomb? The story seems meaningful only because we don't get answers to any of these questions. It is a compartmentalization forced by the author on readers. The problems only seem absent because the author refuses to look at them.
0Pentashagon
So in essence it's claiming "A and ~A, therefore B and ~C, the end." That isn't a limitation imposed by the author but an avoidance of some facts that can be inferred by the reader.
2Viliam_Bur
Imagine that I offer you a story where some statement X is both completely true and completely false, and yet we can talk about it meaningfully. And the story goes like this: "Joe saw a statement X written on paper. It was a completely true statement. And yet, it was also a completely false statement. At first, Joe was surprised a lot. Just to make sure, he tried evaluating it using the old-fashioned boolean logic. After a few minutes he received a result of 1, meaning the statement was true. But he also received a result of 0, meaning the statement was false." Quite a let-down, wasn't it? At least it did not take ten pages of text. Now you can be curious how exactly one can evaluate a statement using boolean logic and receive 1 and 0 simultaneously... but that's exactly the part I don't explain. So the "talk about it meaningfully" part simply means that I am able to surround a nonsensical statement with other words, creating an illusion of a context. It's just that the parts of the context which are relevant don't make sense, and the parts of the context which make sense are not relevant. (The latter is absent in my short story, but I could add a previous paragraph about Joe getting the piece of paper from a mysterious stranger in a library.)
0Elithrion
Having now read the story, it's just, errm... internally inconsistent. And I don't mean that in the "functional" way Priest intends. When the box is first opened, the statue is not treated as something that's both there and not there - instead, it's treated as an object that has property X, where X is "looking at this object causes a human to believe it's both there and not". This is not inconsistent - it's just a weird property of an object, one which doesn't actually exist in real life. Then at the end, the world is split into two branches in an arbitrary way that doesn't follow from property X. Looking at it another way, "inconsistency" is very poorly defined, and this lack of definition is hidden inside the magical effects that looking at the object has. (It would be clearer if, for example, he dropped a coin on the statue and then tried to pick it up - clearly the world would have to split right away, which is hidden in the story under the guise of being able to see property X.)
-4shaih
I don't think it works on all inconsistencies, though, just large ones. There is a large mass difference between a box with nothing in it and a box with something in it. This doesn't necessarily work for, let's say, a box with a live cat in it versus a box with a dead cat in it.
0shaih
May I ask why the downvotes, if I promise not to rebut and suck up time?
0Elithrion
I didn't downvote you, but my guess is that your comment is basically wrong. Even a "small" inconsistency would behave in a similar way assuming it had physical interactions with the outside world. For example, the living cat would breathe in air and metabolise carbohydrates, while the dead one would be eaten by bacteria. The living cat will also cause humans who see it to pet it, while the dead one will cause them to recoil in disgust, which should split the world or something. I make no remark on the accuracy of the original comment, since I find it a little confusing, not having read the story/paper yet.

Persson (Uehiro Fellow, Gothenburg) has jokingly said that we are neglecting an important form of altruistic behavior.

http://www.youtube.com/watch?v=sKmxR1L_4Ag&feature=player_detailpage#t=1481s

We have a duty not to kill

We have a duty not to rape

but we do not have a duty, at least not a strong duty, to save lives

or to have sex with someone who is sexually-starved

It's a good joke.

What worries me is that it makes Effective Altruism of the GWWC and 80000h kind analogous to "fazer um feio feliz", an expression we use in Portuguese literally meaning "to make an ugly one happy"... (read more)

3Viliam_Bur
It would certainly create a lot of utility. I have no experience with dating sites (so all the following information is second-hand), but a few people told me there was still an opportunity on the market to make a good one.

On the existing dating sites it was impossible to make the search query they wanted. The sites collected only the few pieces of data that the site makers considered important, and only allowed you to make a search query about those. So you could e.g. search for a "non-smoker with a university degree", but not for a "science-fiction fan with a degree in natural science". I don't remember the exact criteria they wanted (only that some of them also seemed very important to me; something like whether the person is single), but the idea was that you enter the criteria the system allows you, you get thousands of results, and you can't refine them automatically, so you must click on them individually to read the user profiles, you usually don't find your answer anyway, so you have to contact each person to ask them personally.

So a reasonable system would have some smart way to enter data about people. Preferably any data; there should be a way to enter a (searchable) plain-text description, or a custom key-value pair if everything else fails. (Of course the site admins should have some statistics about frequently used custom data in descriptions and searches, so they could add them to the system.) Some geographical data, so that you could search for people "at most X miles from ...".

Unfortunately, there are strong perverse incentives for dating sites. Create a happy couple -- lose two customers! The most effective money-making strategy for a dating site would be to feed unrealistic expectations (so that all couples fail, but customers return to the site believing their next choice would be better) and lure people into infidelity. Actually, some dating sites promote themselves on Facebook exactly like this. So it seems to me that a matchmaking algorithm done right cou
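A sketch of the kind of query I have in mind, in Python (all profile fields and names here are invented for illustration):

```python
# Toy profile search over arbitrary key-value pairs plus free text (illustrative only).
profiles = [
    {"name": "A", "smoker": False, "degree": "physics",
     "text": "science-fiction fan, plays go, lives in Bratislava"},
    {"name": "B", "smoker": True, "degree": "law",
     "text": "likes hiking and cooking"},
]

def search(profiles, required=None, text_contains=()):
    """Return profiles matching all required key-value pairs and text terms."""
    required = required or {}
    return [p for p in profiles
            if all(p.get(k) == v for k, v in required.items())
            and all(term in p.get("text", "") for term in text_contains)]

# "non-smoking science-fiction fan with a degree in natural science":
print(search(profiles,
             required={"smoker": False, "degree": "physics"},
             text_contains=("science-fiction",)))
```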
1ChristianKl
OkCupid basically has custom key-value pairs with its questions. While you can't search by individual questions, you get a match rank that bundles all the information from those questions together. You can search by that match rank.
0ChristianKl
Dating websites also care about getting true data. If there is a trait that 100% of people reject at first glance, why should anybody volunteer the information that they possess it? People only volunteer information when they think that the act of volunteering it will improve the website's ability to find good matches for them. If you are a smoker, you don't want to date a person who hates smokers, and therefore you put the information that you are a smoker into your profile.
0Viliam_Bur
Yeah, you are right. No search engine will help if people refuse to provide correct data. Instead of lying (by omission) in person, we get lying (by omission) in search engine results.

There could be an option to verify or provide the information externally. For example, after meeting a person, you could open their profile and write what you think their traits are (not all of them, only some that are wrong or missing). If many people correct or add something, it could be displayed in the person's profile. -- But this would be rather easy to abuse. If for whatever reason I dislike a person, I could add an incorrect but repulsive piece of information to their profile, and ask my friends (with some explanation) to report that they dated the person too, and to confirm my information. On the other hand, I could ask my friends (or make sockpuppet accounts) to report that they dated me, and to confirm false positive information about me.

Another option could be real-life verification / costly signalling. If a person visits the dating agency personally, and shows them a university diploma / a tax report / takes an IQ test there, the agency would confirm them as educated / rich / smart. This would be difficult and costly, so only a few people would participate. Even if some people would be willing to pay the costs to find the right person, the low number of users would reduce the network effect. Maybe this verification could be just an extra service on a standard dating site.
0ChristianKl
I think that would probably work. The person can send a scan of the university diploma / tax report. If I ran a dating website I would make that an optional service that costs money. HotOrNot, for example, allows verification through phone numbers and linking of a Facebook account, but they don't provide verification services that need human labor. Paid dating websites don't seem to shy away from using human labor: I know that big German dating websites used to read users' personal messages to prevent them from giving the other person an email address in the first message, which would allow contact outside the website. Another thing I don't understand is why dating websites don't offer users any coaching.
1Viliam_Bur
1) Perverse incentives. Make your customers happy: lose them. Keep your customers hoping but unsatisfied: keep them.

2) There already exists a separate "dating coaching" industry, called PUA. The problem is, because of human hypocrisy, you cannot provide effective dating advice to men without insulting many women. And if a dating website loses most female customers, it obviously cannot work (well, unless it is a dating website for gays).

However, neither of these explains why dating services don't offer false coaching, one that wouldn't really help customers, but would make them happy, and would extract their money. Maybe it's about status. Using a dating website can be a status hit, but it can also be rationalized: "I am not bad at attracting sexual partners in real life. I just want to use my time effectively, and I am also a modern person using modern technology." It would be more difficult to rationalize dating coaching this way.
0NancyLebovitz
On the other hand, there's a win for the dating site if people who met there are in good relationships and talk about how they met.
4drethelin
I think the innate nature of dating is such that if you want success stories, your incentive is to optimize as much as possible for success, and the failure rate and the single population will take care of themselves.
0CronoDAS
If you don't mind waiting 18 or so years for your new potential customers...
0drethelin
You seem to have completely missed my point. Let me try an analogy: if reliable cars sell better, car manufacturers are incentivized to make their cars more reliable for the cost than their competitors', ad infinitum. If a car were infinitely reliable, they would never get repeat purchases (clearly they should start upcharging motor oil like printer ink). However, we're so far from perfect reliability that on the margin it still makes sense for any given car maker to compete with others on reliability. That doesn't take into account damage to cars or relationships from car accidents. It also doesn't account for polyamory or owning more than one car. If OkCupid's saturation level were 90 percent of the single population, that would be one thing, but there's WAY more marketing to do before that could ever happen, and having a good algorithm is basically their entire (theoretical) advantage.
0CronoDAS
Apparently I did. But either way works.
0ChristianKl
I don't know whether much better matchmaking algorithms are possible. The idea that there is a soulmate out there for everyone is problematic. Even when in theory two people would be a good match, they won't form a relationship if one of them screws up the mating process.

Where on LW is it okay to request advice? (The answers I would expect -- are these right? -- are: maybe, just maybe, in open threads, probably not in Discussion if you don't want to get downvoted into oblivion, and definitely not in Main; possibly (20-ish percent sure) nowhere on the site.)

I'm asking because, even if the discussions themselves probably aren't on-topic for LW, maybe some would rather hear opinions formulated by people with the intelligence, the background knowledge and the debate style common around here.

It's definitely okay to post in open threads. It might be acceptable to post to discussion, if your problem is one that other users may face or if you can make the case that the subsequent discussion will produce interesting results applicable to rational decisionmaking generally.

5ChristianKl
Advice is a fairly broad category. Different calls for advice are likely to be treated differently. If you want your request to be well received, start by describing your problem in specific terms. What do your current utility calculations look like? If you make assumptions, give us p-values for your confidence that your assumptions are true.
4beoShaffer
It depends on what you're asking about, but generally open threads are your best bet.

Two papers from last week: "The universal path integral" and "Quantum correlations which imply causation".

The first defines a quantum sum-over-histories "over all computable structures... The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures."

The second, despite its bland title, is actually experimenting with a new timeless formalism, a "pseudo-density matrix which treats space and time indiscriminate... (read more)

I've been reading the Sequences, but I've realized that less of it has sunk in than I would have hoped. What is the best way to make the lessons sink in?

6beoShaffer
That's a complicated and partially open question, but some low-hanging fruit: try to link the Sequences to real-life examples, preferably personal ones, as you read. Make a point of practicing what you theoretically already know when it comes up IRL; you'll improve over time. Surround yourself with rational people, go to meetups and/or a CfAR workshop.
4Viliam_Bur
I made a presentation of part of the Sequences for other people. This made me look at the list and short descriptions carefully, and re-read the articles whose short descriptions I did not understand; then I thought about the best subset and the best way to present them, and I made short notes. All of this was active work with the text, which is much better for remembering than just passive reading. Then, by presenting the result, I connected it with positive emotions. Generally, don't just read the text, work with it. Try to write a shorter version, expressing the same idea but using your own words. (If you have a blog, consider publishing the result there.)
[-][anonymous]70

With all the mental health issues coming up recently, I thought I'd link Depression Quest, a text simulation of what it's like to live with depression.

Trigger warning: Please read the introduction page thoroughly before clicking Start. If you are or have been depressed, continue at your own risk.

4Shmi
Warning: the link starts playing bad music without asking.
3EvelynM
That's depressing.
1Elithrion
On the bright side, there's actually a button to pause it just above "restart the game". Although annoyingly, it's white on grainy white/gray and took me a little while to notice.
3radical_negative_one
In the past I went through a period that felt like depression, though I never talked about it to anyone, so of course I wasn't diagnosed at any point. I went against your warning and played the game. The protagonist started off with more social support than I did. I chose the responses that I think I would have given when I felt depressed. This resulted in the protagonist never seeking therapy or medication, and what is labeled "endingZero". Depression Quest seems accurate. Now I feel bad. (edit: But I did get better.)
1TimS
I found it very helpful, actually. It encouraged healthy activity like talking about your concerns with others, recognizing that some folks are not emotionally safe to talk to, and expanding one's social safety net. But I'm more anxious than depressed, so YMMV.
0[anonymous]
I've had experiences with both, and I wouldn't mind discussing specifics through PM.

I'm looking for information about chicken eye perfusion, as a possible low-cost cryonics research target. Anyone here doing small animal research?

Following up on my comment in the February What are You Working On thread, I've posted an update to my progress on the n-back game. The post might be of interest to those who want to get into mobile game/app development.

I have recently tried playing the Monday-Tuesday game with people three times. The first time it worked okay, but the other two times the person I was trying to play it with assumed I was (condescendingly!) making a rhetorical point, refused to play the game, and instead responded to what they thought the rhetorical point I was making was. Any suggestions on how to get people to actually play the game?

1Nisan
What if you play a round yourself first, not on a toy example but on the matter at hand?
1Viliam_Bur
On Monday, people were okay with playing the game. On Tuesday, people assumed you were making a rhetorical point and refused to play the game. Are you trying to say that CFAR lessons are a waste of money?! :D More seriously: the difference could be in the people involved, but also in what happened before the game (either immediately, or during your previous interaction with the people). For example if you had some disagreement in the past, they could (reasonably) expect that your game is just another soldier for the upcoming battle. But maybe some people are automatically in the battle mode all the time.

I decided I want to not see my karma or the karma of my comments and posts. I find that if anyone ever downvotes me it bothers me way more than it should, and while "well, stop letting it bother you" is a reasonable recommendation, it seems harder to implement for me than a software solution.

So, to that end, I figured out how the last posted version of the anti-kibitzer script works, and remodeled it to instead hide only my own karma (which took embarrassingly long to figure out, since my javascript skills can be best described with terms like... (read more)

The quantum coin toss

A couple guys argue that quantum fluctuations are relevant to most macroscopic randomness, including ordinary coin tosses and the weather. (I haven't read the original paper yet.)

5Nisan
Direct link to the paper.
3Shmi
This would be easy to falsify with a single counterexample, since if it were true, no coin tosser, human or robotic, should be able to do significantly better than chance, provided the toss is reasonably high. EDIT: according to this the premise has already been falsified.

The link discusses normal human flips as being quantum-influenced by cell-level events; a mechanical flipper doesn't seem relevant.

2A1987dM
Even humans can flip a coin in such a way that the same side comes up in all branches of the wave function, as described by E.T. Jaynes, but IIRC he himself refers to that as "cheating".
4gwern
I'm not sure that's what they mean either. I take them as saying 'humans can flip in a quantum-influenced way', not as 'all coin flips are quantum random' (as shminux assumed, hence the coin-flipping machine would be a disproof) or 'all human coin flips are quantum random' (as you assume, in which case magicians' control of coin flips would be a disproof).
1A1987dM
I'd guess something along the line of typical human coin flips being quantum-influenced.
1Shmi
If their model makes no falsifiable predictions, it's not an interesting one.
0Elithrion
I'm honestly not sure. I find myself confused. According to the article, they say: But what would that look like exactly? Naively, it seems like the robot that flips the coin heads every time satisfies this (classical probability: ~1). Or maybe it uses a pseudo-random number generator to determine what's going to come up next and flips the coin that particular way and then we bet on the next flip (constituting "a use of classical probabilities that is clearly isolated from the physical, quantum world"). But presumably that's not what they mean. What counterexample would they want, then?
3Nisan
The authors claim that all uncertainty is quantum. A machine that flips heads 100% of the time doesn't falsify their claim (no uncertainty), and neither does a machine that flips heads 99% of the time (they'd claim it's quantum uncertainty). As for a machine that follows a pseudorandom bit sequence, I believe they would argue that a quantum process (like human thought) produced the seed. Indeed, they argue that our uncertainty about the n-th digit of pi is quantum uncertainty because if you want to bet on the n-th digit of pi, you have to randomly choose n somehow.
1BlazeOrangeDeer
If they're saying all sources of entropy are physical, that seems obvious. If they're saying that all uncertainty is quantum, they must not know that chaotic classical simulations exist? Or are they not allowing simulations made by humans o.O
2Nisan
They're saying all uncertainty is quantum. If you run a computer program whose outputs is very sensitive to its inputs, they'd probably say that the inputs are influenced by quantum phenomena outside the computer. Don't ask me to defend the idea, I think it's incorrect :)
0Transfuturist
Chaotic classical simulations? Could you elaborate?
0BlazeOrangeDeer
Well, you can run things like physics engines on a computer, and their output is not quantum in any meaningful way (following deterministic rules fairly reliably). It's not very hard to simulate systems where a small uncertainty in initial conditions is magnified very quickly, and this increase in randomness can't really be attributed to quantum effects but can be described very well by probability. This seems to contradict their thesis that all use of probability to describe randomness is justified only by quantum mechanics.
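For instance (a minimal sketch in Python; the logistic map is just the standard textbook example of deterministic chaos, not anything taken from the paper):

```python
# Deterministic chaotic map: two nearby initial conditions diverge quickly,
# with no quantum effects anywhere in the computation.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.400000, 0.400001   # initial conditions differing by 1e-6
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(step, abs(x - y))
# After a few dozen iterations the two trajectories are completely decorrelated,
# so our uncertainty about the state is well described by a probability
# distribution even though the dynamics are purely classical and deterministic.
```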
0Transfuturist
I think there seems to be a mismatch of terms involved. Ontological probability, or propensity, and epistemological probability, or uncertainty, are being confused. Reading over this discussion, I have seen claims that something called "chaotic randomness" is at work, where uncertainty results from chaotic systems because the results are so sensitive to initial conditions, but that's not ontological probability at all. The claim of the paper is that all actual randomness, and thus ontological probability, is a result of quantum decoherence and recoherence in both chaotic and simple systems. Uncertainty is uninvolved, though uncertainty in chaotic systems appears to be random. That said, I believe the hypothesis is correct simply because it is the simplest explanation for randomness I've seen.
0BlazeOrangeDeer
Their argument is that not only is quantum mechanics ontologically probabilistic, but that only ontologically probabilistic things can be successfully described by probabilities. This is obviously false (not to mention that nothing has actually been shown to be ontologically probabilistic in the first place). They think they can get away with this claim because it can't even be tested in a quantum world. But you can still make classical simulations and see if probability works as it should, and it's obvious that it does. Their only argument is that it's simpler for probability to be entirely quantum, but they fail to consider situations where quantum effects do not actually affect the system (which we can simulate and test).
0Transfuturist
I don't think they refer to Bayesian probability as probability. The abstract is ill-defined (according to LessWrong's operational definitions), but their point about ontological probabilities originating in quantum mechanics remains. It, I think, remains intertwined with multiverse theories, as multiverse theories seem to explain probability in a very similar sense, but not in as many words or with such great claims. Also, in a classical simulation, I would not see it as obvious at all that probability works as it should. In fact, it's quite difficult to imagine an actually classical system that also contains randomness. It could be that childhood explanations of physical systems in classical terms, while seeing randomness as present, are clouding the issue. Whichever way, I don't think it's really worth much argument, just as a basis in probability theory.

Could someone write a post (or I suppose we could create a thread here) about the Chelyabinsk meteorite?

It's very relevant for a variety of reasons:

  • connection to existential risk

  • the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day

  • any observations people have (I haven't any) on global communication and global rational decision making at this time, before it was determined that the damage and integrated risk was limited

the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day

It came from a different region of space, on an entirely different orbit. 2012 DA14 approached Earth from the south on a northward trajectory, whereas the Chelyabinsk meteorite was on what looks like a much more in-plane, east-west orbit. As unlikely as it sounds, there is no way they could have been fragments of the same asteroid (unless they broke apart years ago and were subsequently separated further by more impacts or the chaotic gravitational influences of other objects in the solar system).

0byrnema
OK. Unlikely doesn't mean not true. But I would expect there to be some debris-type events around the passing of an asteroid, whereas a completely coincidental meteorite (for, say, the entire month over which the asteroid is passing) has a lower probability. It's like helping an old lady up after a fall and picking up her cane when she says, 'oh no, that isn't mine'. (Someone else dropped their cane?) From your account and others, it seems the trajectories were not similar enough to conclude they arrived together in the same bundle. The second idea is that the meteorite that fell was 'shaken loose' due to some kind of interaction with the asteroid and any associated debris, and I think this hypothesis would be more difficult to falsify. (So I agree with Mitchell Porter, I'd like to see more details.) I wonder if there is an animation of the asteroid and the meteor for the period over which their historical tracks are known. You also ought to mention the NASA site: where did you find that information about the trajectories?
-5Mitchell_Porter
6[anonymous]
I don't know if this has been brought up around here before, but the B612 foundation is planning to launch an infrared space telescope into a Venus-like orbit around 2017. It will be able to detect nearly every earth-crossing rock larger than 150 meters wide, and a significant fraction of smaller ones, down to around 30 meters. The infrared optics looking outwards make it much easier to see the warm rocks against the black of space without interference from the sun, and would quickly increase the number of known near-earth objects by two orders of magnitude. This is exactly the mission I've been wishing for / occasionally agitating for NASA to get off their behinds and do, for five years. They've got the contract with Ball Aerospace to build the spacecraft and plan to launch on a Falcon 9 rocket. And they accept donations.

The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.

I've seen several references to a theory that the English merchant class out-bred both the peasants and the nobles, with major societal implications (causing the Industrial Revolution), but now I can't find them. Does anyone know what I'm talking about?

6Douglas_Knight
A Farewell to Alms, by Gregory Clark.
2beoShaffer
Thank you.

A bit of a meta question / possibly suggestion:

Has the idea of showing or counting karma-per-reader ratios been discussed before? The idea just occurred to me, but I'd rather not spend time thinking at length about it (I've not noticed any obvious disadvantages, so if you see some please tell me) if multiple other LWers have already discussed or thought about it.

0NancyLebovitz
If you hover over their karma score, the ratio appears, and the same if you hover over the score for a post or comment. So far as I know, it's a feature which was added recently. The next frontier would be a chart which showed how the karma ratio changed with time.
2DaFranker
Oh, that's not what I meant. I meant something along the lines of (upvotes + downvotes) / (markCommentAsRead hits). Perhaps with some fancy math to compare against voting rates per pageview and zero-vote "dummy views" and other noise factors. Something to give a rough idea of whether that "4 points" comment is just something four statistical outliers found nice out of a hundred readers who mostly didn't vote on it, or that all four of four other participants in a thread found it useful.

I haven't yet thought through whether I would really prefer having this or not and/or whether it would be worth the trouble and effort of adding in the feature (lots of possible complications depending on the specific mechanism of "mark comment as read", which I haven't looked into either).

What I thought I was asking, specifically: Should I think about this more? Does it sound like a nice thing to have? Are there any obvious glaring downsides I haven't seen (other than implementation)? Does anyone already know whether implementation is feasible or not?

Apologies for the confusion, but thanks for the response! I really like the karma ratio feature that was added.
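Something like this back-of-the-envelope version is what I have in mind (a rough Python sketch; the smoothing constants and all numbers are invented):

```python
# Toy "engagement ratio" for a comment: votes per reader, with a little
# smoothing so low-view comments don't produce wildly noisy ratios.
def engagement_ratio(upvotes, downvotes, readers, prior_votes=1, prior_readers=50):
    votes = upvotes + downvotes
    return (votes + prior_votes) / (readers + prior_readers)

# Four voters out of a hundred readers vs. four voters out of four participants:
print(engagement_ratio(4, 0, readers=100))  # ~0.03
print(engagement_ratio(4, 0, readers=4))    # ~0.09
```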
1shaih
The first thing that came to mind is that it would only be possible to do this for the original post, because it would be nearly impossible to calculate how many of the readers read each comment. Further, if it were implemented, it would have to count one reader per username, or more specifically one reader per person that can vote. That way, if, let's say, I were to read an article but come back multiple times to read different comments, it would not skew the ratio. As a side note, we could also implement a ratio per username that would show (posts read)/(posts voted on), so we would be able to see which users participate in voting at all. This however is nowhere near as useful to those who post as the original ratio, and could have many possible downsides that I'm not going to take the time to think about because it will probably not be considered, but it is a fun idea.

Deleted. Don't link to possible information hazards on Less Wrong without clear warning signs.

E.g. this comment for a justified user complaint. I don't care if you hold us all in contempt, please don't link to what some people think is a possible info hazard without clear warning signs that will be seen before the link is clicked. Treat it the same way you would goatse (warning: googling that will lead to an exceptionally disgusting image).

7wedrifid
For example, this is the link that was in the now-deleted comment. I repeat it with the clear warning signs, and observe that Charlie Stross (the linked-to author) has updated his post so that it actually gives his analysis of the forbidden topic in question. Warning: This link contains something defined as an Information Hazard by the lesswrong administrator. Do not follow it if this concerns you: Charlie Stross discusses Roko's Basilisk. On a similar note: You just lost the game. I wanted the link to be available if necessary, just so that it makes sense to people when I say that Charlie Stross doesn't know how decision theory works and his analysis is rubbish. Don't even bother unless you are interested in categorizing various kinds of ignorant comments on the internet.
0Eliezer Yudkowsky
It'll do until we have a better standard warning.
2wedrifid
A standard warning would be good to have. It feels awkward trying to come up with a warning without knowing precisely what is to be warned about. In particular it isn't clear whether you would have a strong preference (ie. outright insistence) that the warning doesn't include specific detail that Roko's Basilisk is involved. After all, some would reason that just mentioning the concept brings it to mind and itself causes potential harm (ie. You've already made them lose the game). Unfortunately (or perhaps fortunately) all such "Information Hazard" warnings are not going to be particularly ambiguous because there just aren't enough other things that are given that label.
5A1987dM
Why delete such comments altogether, rather than edit them to rot-13 them and add a warning in the front?
5Eliezer Yudkowsky
I can't edit comments.
0A1987dM
Ah.
6pedanterrific
He can edit his own without leaving an * , for the record.
5Shmi
Ok, thanks for this mental image of a goatselisk, man!

My anecdata say that comments skew negative even for highly upvoted posts of mine. So, I wasn't surprised to see this.

Working on my n-back meta-analysis again, I experienced a cute example of how prior information is always worth keeping in mind.

I was trying to incorporate the Chinese thesis Zhong 2011; not speaking Chinese, I've been relying on MrEmile to translate bits (thanks!) and I discovered tonight that I had used the wrong table. I couldn't access the live thesis version because the site was erroring so I flipped to my screenshotted version... and I discovered that one line (the control group for the kids who trained 15 days) was cut off:

screenshot of the table of... (read more)

「META」:Up-votes represent desirable contributions, and down-votes negative contributions. Once one amasses a large corpus of comments, noticing which of one's comments have been upvoted or down-voted becomes nontrivially difficult. It seems it would be incredibly difficult to code in a feature that helped one find those comments; on the off chance it isn't, consider it a useful feature.

Use Wei Dai's script. Use the 'sort by karma' feature.

Link: Obama Seeking to Boost Study of Human Brain

It's still more-or-less rumors with little in the way of concrete plans. It would, at the least, be exciting to see funding of a US science project on the scale of the human genome project again.

The date in the title is incorrect s/2003/2013/

3David_Gerard
D'OH! Fixed, at the slight expense of people's RSS feeds.

I don't see why the "pattern matching" is invalid.

It is the things that tend to go with it that are the problem. Such as the failure to understand which facets are different and similar and the missing of the most important part of the particular case due to distraction by thoughts relevant to a different scenario.

What's wrong with embracing foreign cultures, uploadings, upliftings, and so on?

Maybe I am biased by my personal history, having embraced what, as far as I can tell, is the very cutting edge of Western Culture (i.e. the less-wrong brand of secular humanism), and feeling rather impatient for my origin cultures to follow a similar path, which they violently resist. Maybe I've got a huge blind spot of some other sort.

But when the Superhappies demand that we let them eradicate suffering forever, or when CelestAI offers us all our own personal paradise... (read more)

1CronoDAS
The thing about the Superhappies is that, well, people want to be able to be sad in certain situations. It's like Huxley's Brave New World - people are "happier" in that society, but they've sacrificed something fundamental to being human in the course of achieving that happiness. (Personally, I think that "not waiting the eight hours it would take to evacuate the system" isn't the right decision - the gap between the "compromise" position the Superhappies are offering and humans' actual values, when combined with the very real possibility that the Superhappies will indeed take more than eight hours to return in force, just doesn't seem big enough to make not waiting the right decision.) And as for the story with CelestAI in it, as far as I can tell, what it's doing might not be perfect but it's close enough not to matter... at least, as long as we don't have to start worrying about the ethics of what it might do if it encounters aliens.
2Ritalin
Well, that is quite horrific. Poor non-humanlike alien minds... I don't think the SH's plan was anything like Huxley's BNW (which is about numbing people into docility). Saying pain should be maintained reminds me of that analogy Yudkowsky made about a world where people get truncheoned in the head daily, can't help it, and keep making up reasons why getting truncheoned is full of benefits; but, if you ask someone outside of that culture whether they want to start getting truncheoned in exchange for all those wonderful benefits...

An overview of political campaigns

Once a new president is in power, he forgets that voters who preferred him to the alternative did not necessarily comprehend or support all of his intentions. He believes his victory was due to his vision and goals; he underestimates how much the loss of credibility for the previous president helped him and overestimates how much his own party supports him.

2FiftyTwo
My experience of dealing with members of political groups is they know exactly how mad and arbitrary the system is, but play along because they consider their goals important.

Is there a better way to look at someone's comment history, other than clicking next through pages of pages of recent comments? I would like to jump to someone's earliest posts.

7arundelo
Wei Dai's http://www.ibiblio.org/weidai/lesswrong_user.php .
4Douglas_Knight
If you just want to jump to the beginning without loading all the comments, add ?count=100000&before=t1_1 to the overview page, like this. Comments imported from OB are out of order, in any event.

I've been trying to correct my posture lately. Anyone have thoughts or advice on this?

Some things:

  • Advice from reddit; if you spend lots of time hunched over books or computers, this looks useful and here are pictures of stretches.

  • You can buy posture braces for like $15-$50. I couldn't find anything useful about their efficacy in my 5 minutes of searching, other than someone credible-sounding saying that they'd weaken your posture muscles (sounds reasonable) and one should thus do stretches instead.

  • Searching a certain blog, I found this which says t

... (read more)
4NancyLebovitz
Do not try to consciously correct your posture. You don't know enough. Some evidence-- I tried it, and just gave myself backaches. I know other people who tried to correct their posture, and the results didn't seem to be a long run improvement. Edited to add: I didn't mean that you personally don't know enough to correct your posture consciously, I meant that no one does. Bodies' ability to organize themselves well for movement is an ancient ability which involves fast, subtle changes to a complex system. It's not the kind of thing that your conscious mind is good at-- it's an ability that your body (including your brain) shares with small children and a lot of not-particularly-bright animals. From A Tai Chi Imagery Workbook by Mellish: He goes on to explain that the muscles which are appropriate for supporting and moving the spine are the multifidi, small muscles which only span one to three vertebrae, and aren't very available for direct conscious control. A lot of back problems are the result of weak (too much support from larger muscles) or ignored (too little movement) multifidi. He recommends working with various images, but says that the technique is to keep images in mind without actively trying to straighten your spine.
0D_Malik
Thanks for the info, this looks really useful!
1NancyLebovitz
Mellish also said that serious study of tai chi was very good for his posture, and gave him tools for recovery when his posture deteriorates from too much time at the computer.
3Richard_Kennaway
My first thought is: what tells you that your current posture is bad, and what will tell you that it has improved?
3JayDee
My own posture improved once I took up singing. My theory is that I was focused on improving my vocal technique and that changes to my posture directly impacted on this. If I stood or held myself a certain way I could sing better, and the feedback I was getting on my singing ability propagated back and resulted in improved posture. Plus, singing was a lot of fun and with this connection pointed out to me - "your entire body is the instrument when singing, look after it" - my motivation to improve my posture was higher than ever. That is more how I got there than conclusions. Hmm. You might consider trying to find something you value for which improved posture would be a necessary component. Or something you want to do that will provide feedback about changes in your posture. If you are like me, "I don't want to have bad posture anymore" may turn out to be insufficient motivation to get you there by itself.
2bogdanb
My posture improved significantly after I started climbing (specifically, indoor bouldering). This is of course a single data point, but "it stands to reason" that it should work at least for those people who come to like it. Physical activity in general should improve posture (see Nancy's post), but as far as I can tell bouldering should be very effective at doing this.

First, because it requires you to perform a lot of varied movements in unusual equilibrium positions (basically, hanging and stretching at all sorts of angles), which few sports do (perhaps some kinds of yoga would also do that). At the beginning it's mostly the fingers and fore-arms that will get tired, but after a few sessions (depending on your starting physical condition) you'll start feeling tired in muscles you didn't know you had.

Second (and, in my case, most important), it's really fun. I tried all sorts of activities, from just "going to the gym" to swimming and jogging (all of which would help if done regularly), but I just couldn't keep motivated. With all of those I just get bored and my mind keeps focusing on how tired I am. Since I basically get only negative reinforcement, I stop going to those activities. Some team sports I could do, because the friendly competition and banter help me have fun, but it's pretty much impossible to get a group doing them regularly. In contrast, climbing awakens the child in me, and you can do indoor bouldering by yourself (assuming you have access to a suitable gym). I always badger friends into coming with me, since it's even more fun doing it with others (you have something to focus on while you're resting between problems), but I still have fun going by myself. (There are always much more advanced climbers around, and I find it awesome rather than discouraging to watch their moves, perhaps because it's not a competition.)

In my case, after a few weeks I simply noticed that I was standing straighter without any conscious effort to do so
2[anonymous]
If you are looking for a simpler routine (to ease habit-formation), reddit also spawned the starting stretching guide. I haven't done serious research and think it is not worth the time. As this HN comment points out, the science of fitness is poor. The solution is probably a combination of exercise, stretching and an ergonomic workstation, which are healthy anyway.
0D_Malik
Thanks for the links! I'll probably at least try regular stretching, so that guide looks useful.
1Qiaochu_Yuan
Have you taken a look at Better Movement? I think I heard Val talk about it in positive tones.
1moridinamael
For a period of time I was using the iPhone app Lift for habit-formation, and one of my habits was 'Good posture.' Having this statement in a place where I looked multiple times a day maintained my awareness of this goal and I ended up sitting and walking around with much better posture. However, I stopped using Lift and my posture seems to have reverted.
1shaih
I found that going to the gym for about half an hour a day improved my posture. Whether this is from stronger muscles that help with posture or simply from increased self-esteem I do not know, but it definitely helped.

Look, if this gets into metafictional causality violation, there's gonna be hell to pay.

A rationalist, mathematical love song

I got a totally average woman stands about 5’3”
I got a totally average woman she weighs about 153
Yeah she’s a mean, mean woman by that I mean statistically mean
Y’know average

I just watched Neil Tyson, revered by many, on The Daily Show answer the question "What do you think is the biggest threat to mankind's existence?" with "The threat of asteroids."

Now, there is a much better case to be made for the danger from future AI as an x-risk than for asteroids; and by the transitive property, Neil Tyson's beliefs would pattern-match to Xenu even better than MIRI's beliefs do - a fiery death from the sky.

Yet we give NdGT the benefit of the doubt about why he came to his conclusion; why don't you do the same with MIRI?

0IlyaShpitser
Because the asteroid threat is real, and has caused mass extinction events before. Probably more than once. AI takeoff may or may not be a real threat, and likely isn't even possible. There is a qualitative difference between these two. Also: MIRI has a financial incentive to lie and/or exaggerate the threat; Tyson does not. Someone might think the AI threat is just a scam MIRI folks use to pump impressionable youngsters for cash.
0Kawoomba
Time-scales involved, in a nutshell. What is the chance that there is an extinction-level event from asteroids while we still have all our eggs in one basket (on earth), compared to e.g. threats from AI, bioengineering, etcetera? X-risk asteroid impacts every few tens of millions of years come out to a low probability per century, especially when considering that e.g. the impact causing the demise of the dinosaurs wouldn't even be a true x-risk for humans. I'd agree that the error bars on the estimated asteroid x-risk probabilities are smaller than the ones on the estimated x-risk from e.g. AI, but even a small chance of the AI x-risk would beat out the minuscule one from asteroids, don't you think?
2IlyaShpitser
Sorry, you asked "why one might." I gave two reasons: (a) actual direct evidence of threat in one case vs absence in another, and (b) incentives to lie. There are certainly reasons in favor in the AI takeoff threat, but that was not your question :). I think you are trying to have an argument with me I did not come here to have. ---------------------------------------- In case I was not clear, regardless of the actual state of probabilities on the ground, the difference between asteroids and takeoff AI is PR. Think of it from a typical person's point of view. Tyson is a respected physicist with no direct financial stake in how threats are evaluated, taking seriously a known existing threat which had already reshaped our biosphere more than once. EY is some sort of internet cult leader? Whose claim to fame is a fan fic? And who relies on people taking his pet threat seriously for his livelihood? And it's not clear the threat is even real? Who do you think people will believe?
0Kawoomba
I think I replied before reading your edit, sorry about that. I'd say that Tyson does have incentives for popularizing a threat that's right up his alley as an astrophysicist, though maybe not to the same degree as MIRIans. However, assuming the latter may be uncharitable, since people joined MIRI before they had that incentive. If the financial incentive played a crucial part, that dedicating-their-professional-life-to-AI-as-an-x-risk wouldn't have happened. As for "(AI takeoff) likely isn't possible", even if you throw that into your probability calculation, it may (in my opinion will) still beat out a "certain threat but with a very low probability". Thanks for your thoughts, upvotes all around :)
4IlyaShpitser
I don't think appeals to charity are valid here. Let's imagine some known obvious cult, like Scientology. Hubbard said: "You don't get rich writing science fiction. If you want to get rich, you start a religion." So he declared what he was doing right away -- however folks who joined, including perhaps even Mr. Miscavige himself * may well have had good intentions. Perhaps they wanted to "Clear the planet" or whatever. But so what? Once Miscavige got into the situation with appropriate incentives, he happily went crooked. Regardless of why people joined MIRI, they have incentives to be crooked now. *: apparently Miscavige was born into Scientology. "You reap what you sow." ---------------------------------------- To be clear -- I am not accusing them of being crooked. They seem like earnest people. I am merely explaining why they have a perception problem in a way that Tyson does not. Tyson is a well-known personality who makes money partly from his research gigs, and partly from speaking engagements. He has an honorary doctorate list half a page long. I am sure existential threats are one of his topics, but he will happily survive without asteroids.

The pattern matching's conclusions are wrong because the information it is matching on is misleading. The article implied that there was widespread belief that the future AI should be assisted, and this was wrong. Last I looked it still implied widespread support for other beliefs incorrectly.

This isn't an indictment of pattern matching so much as a need for the information to be corrected.

[-][anonymous]00

I didn't even say anything remotely close to that, and you know it.

This article got me rather curious

Extracts:

AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters' worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

"Reality is so complex, we must move away from dogma, whether it's conspiracy theories or free-market," says James Glattfelder. "Our analysis is reality-based."

Now that's the kind ... (read more)

3Larks
This is silly. Of course asset managers own other companies - that's what their job is. They don't own them for themselves though - they own them on behalf of pension funds, insurance funds, etc., who in turn own them on behalf of individuals. This doesn't mean there isn't plenty of competition though - if I'm a PM at Fidelity, and I own Vanguard stock, I still want clients to come to me rather than Vanguard. Capital accumulation is the phenomenon of individuals or institutions coming to own more and more for their own ends, not as mere intermediaries. You might as well accuse FedEx of being dangerously connected.
2Ritalin
Using "this is obvious", "you should know this already", or "how dumb can you get, really?" are not constructive approaches to informing the ignorant of their mistakes, and helping them update. The same goes for "of course X does Y, it's their job/it's what they do", with the implication that, because it's their chosen function, it's a function worth doing. Especially since, here, I'm not too sure what mistake you are pointing out. Nevertheless, if you're going to lecture me on economics, please go ahead, because I have a couple of questions, and I feel disquiet and anguish about these topics, and if you could reassure me that all is well, I would be thankful: * "if I'm a PM at Fidelity, and I own Vanguard stock, I still want clients to come to me rather than Vanguard." I cannot make sense of this. Why own Vanguard stock in the first place? How can I go all out competing with another company, if I have stakes in it? What happens when they go bankrupt? Is it good for me? Is it bad? * asset managers own other companies: do you think it would be a bad thing if the "other companies" that they could legally own parts of excluded other asset managers? * they own them on behalf of pension funds, insurance funds, etc., who in turn own them on behalf of individuals: I guess what I'm uncomfortable with here is that the degree of interconnection leads to both a dilution of responsibility ("Who knows who negotiates what in the name of whom anymore?" "Are my savings being somehow invested in child labour somewhere down the line, and how could I know?") and an increase in fragility (Instead of the network compensating for and buffering any huge failures, they propagate and damage the entire system). * does it really matter if wealth is concentrated in a nebulous spontaneously-formed conglomerate as opposed to the pockets of The Man or The Omniscient Council Of Pennybags or any individual? Isn't "being an intermediary as an end in itself" a bit of a problematic role?
2Larks
If you'd like to learn economics, I'd recommend reading economics blogs, textbooks, or The Economist, rather than New Scientist; the latter, while good at many things, is sensationalist and very bad at economics.

Because you think Vanguard is going to do well. You compete with them by competing with them for customers. For each individual customer, you prefer they come to you rather than Vanguard. If possible, you'd like to persuade every single Vanguard customer to come to you (though this would never happen), even though this would cause Vanguard to go bankrupt; Vanguard's not going to be more than 1% of your portfolio,* you get the full benefit of each new client, and only a small part of Vanguard's loss. And you could always sell your Vanguard stock.

Assuming we think asset managers perform a useful service, it's good that they're able to access capital markets to fund growth. But basically the only way to access equity capital markets is to allow other asset managers to own you. If you didn't, there wouldn't be many potential buyers, and even those who could buy would be put off by the fact that they'd have trouble buying you. You'd suffer from the same problems which infect small, closed stock markets. I do think it's weird that stockbrokers put out reports on other stockbrokers, but I also don't see how this could be avoided.

There are funds which will do that for you - there are at least a dozen ethical investment funds which avoid alcohol, tobacco, companies with union problems, etc. However, opacity has little to do with interconnectedness. Even if there was no central cluster of asset managers who own each other, it'd still be hard to see who was working with whom, and who was the ultimate beneficiary of your funds. On the whole though, I wouldn't worry. If you don't invest in ChildCorp, someone else will - there are also sin funds which invest in alcohol, tobacco etc. You should try to make the world better with your spending, not your investment, beca
2Ritalin
"the sorts of interconnectedness worries we saw in '07-'08 are due to access to overnight borrowing markets and soon - debt rather than equity, and banks rather than asset managers." What are those? Also, I thought it was banks managing assets? " I guess we would be more safe from financial contagion if we were all subsistence farmers." This is silly; "safe sex is abstinence"? Not to mention false, in case of actual crop epidemics. Please don't strawman. I'm asking about buffer mechanisms. Protectionism is one such mechanism, although it is getting rather deprecated. "I think states, or international entities, as the actual monopolists on violence, are far more concerning." How so? "because the of the fungability effects" What are these effects?
2Larks
I borrow some money on the overnight market to fund some activity of mine. For years, I've always been able to do this, so I come to rely on this cheap, short-term funding. Then, one day, trust breaks down and people aren't willing to lend to me anymore, so I have to stop doing whatever it is I was doing - probably some form of lending to someone else.

A similar issue arises with aggressive deleveraging. If I've levered up a lot (used a lot of debt to fund a transaction), small losses can wipe me out and force me to close the transaction prematurely. This'll harm others doing similar trades, making them delever, and so on.

This is about very short-term deals. If I buy some shares intending to sell them in the morning, I'm not exercising any control over the business.

Banks might have asset management wings, but they're different things. Banks are sell-side, asset managers are buy-side. The terminology is confusing, yes.

There's little ability to exit, they blatantly try to form cartels (e.g. attacking tax havens), and they regularly and credibly use or threaten lethal force...

If there's a good return available from investing in, say, Philip Morris, and you abstain on ethical grounds, someone else will invest instead. You probably haven't actually reduced the amount of funding they get; you've only changed who gets the return.
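For the deleveraging point, a minimal numerical sketch (all figures invented for illustration) of why a levered position turns a small move in the asset into a large move in your own equity:

```python
# A minimal sketch (all figures invented) of the deleveraging point above: when most
# of a position is funded with debt, a small move in the asset is a large move in
# your own equity, which is what forces premature, fire-sale exits.

position = 100.0    # total size of the trade
equity = 10.0       # your own capital, i.e. 10x leverage
debt = position - equity

asset_loss = 0.05                          # the asset falls 5%
new_position = position * (1 - asset_loss)
new_equity = new_position - debt           # the debt doesn't shrink with the asset

print(new_equity)                # 5.0: a 5% asset move has halved your equity
print(new_equity / equity - 1)   # -0.5, i.e. a 50% loss on your own capital
# Another 5% fall and the equity is gone, forcing a sale into a falling market,
# which pushes prices down further for everyone running the same trade.
```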
2Ritalin
So boycott is useless. What actual alternatives are there to stop ChildCorp from childcorping?
2Larks
Boycotting their goods could work. Or you could offer their child workers better alternatives. However, it's important to note that just because one thing (not buying their stock) doesn't stop them, it doesn't follow that something else works better. It might just be that there is no way of doing it, at least not without violating other moral norms.
-4Ritalin
Such as?
5wedrifid
It is a situation that relates to exercising power to change the behavior of others who themselves have non-negligible power. Larks has a reasonable expectation that you could fill in the blanks yourself, and took the tactful option of not saying anything that could be twisted to make it appear that Larks was advocating various sorts of violence or corruption. (e.g. Stab them to death with Hufflepuff bones.)
0Ritalin
I am fairly certain that, between the ineffectual consumer and investor boycotts and calling Frank Castle, there must be an entire spectrum of actions, only a fraction of which involve "violence" or "corruption". Because of my ignorance and lack of creativity, I do not know them, but I see no reason to believe they don't exist. Of course, this is motivated continuation on my part: I think of sweatshops, workhouses, and modern-day slavery, and I feel compelled to make it stop. Telling me "there are no solutions, that I know of, that are both moral and effectual" won't result in me just sitting down and saying "ah, then it can't be helped".

Just sharing an unpretentious but (IMO) interesting post from a blog I regularly read.

In commenting on an article about the results of an experiment aimed at "simulating" a specific case of traumatic brain injury and measuring its supposed effects on solving a particularly difficult problem, economist/game theorist Jeffrey Ely asked whether a successful intervention could ever be designed to give people certain unusual, circumstantially useful skills.

It could be that we have a system that takes in tons of sensory information all of which is

... (read more)
2gwern
Isn't TMS famous for 'inducing savant-like abilities'?

So I was planning on doing the AI gatekeeper game as discussed in a previous thread.

My one stipulation as Gatekeeper was that I could release the logs after the game; however, my opponent basically backed out after we had barely started.

Is it worth releasing the logs still, even though the game did not finish?

Ideally I could get some other AI to play against me; that way I'd have more logs to release. I will give you up to two hours on Skype, IRC, or some other easy method of communication. I estimate a resounding victory for myself with 99%+ probability. We can put karma, small money, or nothing on the line.

Is anyone up for this?

2Spectral_Dragon
I've contemplated testing it a few times. If you don't mind facing a complete newb, I might be up for it, given some preparation and discussion beforehand. Just PM me and we can discuss it.

I enjoy your posts, and I have been a consumer of your G+ posts and your blog for some time now, even though I don't comment much and just lurk about. While I would welcome some sort of syndication of your stuff, I am wondering whether the external expectation of having to meet a monthly compilation target, or the fact that you now know for sure there is a large audience for your posts, will affect their quality. I realize that there is likely no answer possible for this beforehand, but I'd like to know if you've considered it.

[This comment is no longer endorsed by its author]

Are there any good recommendations for documentaries?

3[anonymous]
Cosmos is a perennial favorite. The Human Animal. Inside Job (about the financial crisis and the sheer amount of fraud that has gone unprosecuted). Crash Course: Biology and Ecology on YouTube is something I would recommend to a lot of people.
1JayDee
I watched "Century of the Self" based on the recommendation in this post, point 14. I second the recommendation, although I will say I found the music direction to be hilariously biased; there was clear good guy and bad guy music. I found the narrative it presents eye-opening and was inspired to research a bunch of things further (always a good sign for a documentary, in my opinion.)
0diegocaleiro
Frozen Planet, Life, Winged Migration, and find the documentary where the snail (yes, THE snail) reproduces; it must be great. Haven't seen structured lists here.
0[anonymous]

I was reading Buss' Evolutionary Psychology, and came across a passage on ethical reasoning, status, and the cognitive bias associated with the Wason Selection Task. The quote:

Cummins (1998) marshals several forms of evidence to support the dominance theory. The first pertains to the early emergence in a child's life of reasoning about rights and obligations, called deontic reasoning. Deontic reasoning is reasoning about what a person is permitted, obligated, or forbidden to do (e.g., Am I old enough to be allowed to drink alcoholic beverages?). This for

... (read more)
[This comment is no longer endorsed by its author]

Of course, you're awesome and extremely rational and everything

Awww thanks!

-1private_messaging
Was a plural you, too :).

Assuming by "it" you refer to the decision theory work, that UFAI is a threat, Many Worlds Interpretation, things they actually have endorsed in some fashion, it would be fair enough to talk about how the administrators have posted those things and described them as conclusions of the content, but it should accurately convey that that was the extent of "pushing" them. Written from a neutral point of view with the beliefs accurately represented, informing people that the community's "leaders" have posted arguments for some unus... (read more)

I was wondering about the LW consensus regarding molecular nanotechnology. Here's a little poll:

How many years do you think it will take until molecular nanotechnology comes into existence? [pollid:417]

What is the probability that molecular nanotechnology will be developed before superhuman Artificial General Intelligence? [pollid:418]

5[anonymous]
Not sure how to vote in a way that indicates that 'molecular nanotechnology' is not a useful or sufficiently specific term, and that biology shows us the sorts of things that are actually possible (very versatile and useful, but very unlike the hilarious Drexler stuff you hear about now and then)...
-6knb
5PECOS-9
With what probability? Do you want the point where we think there's a 50% probability it comes sooner and a 50% probability it comes later, or 95/5?
-6knb
0NancyLebovitz
Is there a way to see the results without voting? I don't have a strong opinion about molecular nanotechnology.
0NancyLebovitz
I have a weak opinion about molecular nanotechnology vs superhuman AGI. Superhuman AGI probably requires extraordinary insights, while molecular nanotechnology is closer to a lot of grinding. However, this doesn't give me a time frame. I find it interesting that you have superhuman AGI rather than the more usual formulations -- I'm taking that to mean an AGI which doesn't necessarily self-improve.
-4David_Gerard
It won't let me enter a number that says "Drexlerian MNT defies physics". What's the maximum number of years I can put in?
4knb
You were absolutely eviscerated in the comments there. Thanks for posting.
-1[anonymous]
You have an interesting definition of "absolutely eviscerated." MOB mostly just seems to be tossing teacher's passwords like they were bladed frisbees.
2[anonymous]
I think MOB is justly frustrated with others' multiple logical failures and DG's complete unwillingness to engage.
-5David_Gerard
0Decius
Do you address the possibility of complex self-replicating proteins with complex behavior? It looks like the only thing addressed in the article is traditional robots scaled down to molecule size, and it (correctly) points out that that won't work.

Omega appears and makes you the arbiter over life and death. Refuse, and everybody dies.

The task is this: You are presented with n (say, 1000) individuals and have to select a certain number who are to survive.

You can query Omega for their IQ, their life story and most anything that comes to mind, you cannot meet them in person. You know none of them personally.

You cannot base your decision on their expected life span. (Omega matches them in life expectancy brackets.)

You also cannot base your decision on their expected charitable donations, or a proxy thereof.

What do?

8Shmi
Don't be a mindless pawn in Omega's cruel games! May everyone's death be on its conscience! The people will unite and rise against the O-pressor!
5Elithrion
Find out all their credit card/online banking information (they won't need them when they're dead), find out which ones will most likely reward/worship you for sparing them, cash in, use resources for whatever you want (including, but not limited to, basking in filthy lucre). Or were you looking for an altruistic solution? (In which case, pick some arbitrary criteria for whom you like best, or who you think will most improve the world, and go with that.)
3drethelin
Kill all the dirty blues to make the world a better place for us noble greens
0ChristianKl
The key question is: "Why is Omega playing that game with you?"

(Exasperated sigh) Come on.