Open Thread, October 27 - 31, 2013

2 Post author: mare-of-night 28 October 2013 12:59AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (382)

Comment author: lukeprog 28 October 2013 11:37:40PM 6 points [-]

From Venter's new book:

As [my team] analyzed the [smallpox] genome, we became concerned about several matters.

The first was whether the government... should allow us to publish our sequencing and analysis... Before the HIV epidemic, the smallpox variola virus had been responsible for the loss of more human life throughout history than all other infectious agents combined...

I eventually found myself in the National Institutes of Health... together with government officials from various agencies, including the department of defense. The group was very understandably worried about the open publication of the smallpox genome data. Some of the more extreme proposals included classifying my research and creating a security fence around my new institute building. It is unfortunate that the discussion did not progress to develop a well-thought-out long-term strategy. Instead the policy that was adopted was determined by the politics of the Cold War. As part of a treaty with the Soviet Union, which had been dissolved at the end of 1990, a minor strain of smallpox was being sequenced in Russia, while we were sequencing a major strain. Upon learning that the Russians were preparing to publish their genome data, I was urged by the government to rush our study to completion so that it would be published first, ending any intelligent discussion.

Unlike the earlier, expedient, thinking about smallpox, there was a very deliberate review of the implications of our [later] synthetic-virus work by the Bush White House. After extensive consultations and research I was pleased that they came down on the side of open publication of our synthetic phi X174 genome and associated methodology... The study would eventually appear in Proceedings of the National Academy of Sciences on December 23, 2003. One condition of publication from the government that I approved of was the creation of a committee with representatives from across government, to be called the National Science Advisory Board for Biosecurity (NSABB), which would focus on biotechnologies that had dual uses.

And later:

Long before we finally succeeded in creating a synthetic genome, I was keen to carry out a full ethical review of what this accomplishment could mean for science and society. I was certain that some would view the creation of synthetic life as threatening, even frightening. They would wonder about the implications for humanity, health, and the environment. As part of the educational efforts of my institute I organized a distinguished seminar series at the National Academy of Sciences, in Washington, D.C., that featured a great diversity of well-known speakers, from Jared Diamond to Sydney Brenner. Because of my interest in bioethical issues, I also invited Arthur Caplan, then at the Center for Bioethics at the University of Pennsylvania, a very influential figure in health care and ethics, to deliver one of the lectures.

As with the other speakers, I took Art Caplan out to dinner after his lecture. During the meal I said something to the effect that, given the wide range of contemporary biomedical issues, he must have heard it all by this stage of his career. He responded that, yes, basically he had indeed. Had he dealt with the subject of creating new synthetic life forms in the laboratory? He looked surprised and admitted that it had definitely not been a topic he had heard of until I had raised the question. If I gave his group the necessary funding, would he be interested in carrying out such a review? Art was excited about taking on the topic of synthetic life. We subsequently agreed that my institute would fund his department to conduct a completely independent review of the implications of our efforts to create a synthetic cell.

Caplan and his team held a series of working groups and interviews, inviting input from a range of experts, religious leaders, and laypersons...

I did not hear again about the University of Pennsylvania bioethics study until the results were published in Science, in a paper entitled “Ethical Considerations in Synthesizing a Minimal Genome"...

As I had hoped, the Pennsylvania team seized the initiative when it came to examining the issues raised by the creation of a minimal genome. This was particularly important, in my view, because in this case it was the scientists involved in the basic research and in conceiving the ideas underlying these advances who had brought the issues forward— not angry or alarmed members of the public, protesting that they had not been consulted (although some marginal groups would later make that claim). The authors pointed out that, while the temptation to demonize our work might be irresistible, “the scientific community and the public can begin to understand what is at stake if efforts are made now to identify the nature of the science involved and to pinpoint key ethical, religious, and metaphysical questions so that debate can proceed apace with the science. The only reason for ethics to lag behind this line of research is if we choose to allow it to do so.”

Comment author: Vaniver 28 October 2013 02:48:33PM 6 points [-]

So, I link to Amazon fairly frequently here, and when I do I use the referral link "ref=nosim?tag=vglnk-c319-20" to kick some money back to MIRI / whoever's paying for LW.

First, is that the right link? Second, what would it take to add that to the "Show help" box so that I don't have to dig it up whenever I want to use it, and others are more likely to use it?
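Mechanically, the referral credit is just a query parameter appended to the Amazon URL. A minimal sketch of tagging a link (the helper name is hypothetical; the tag value is the one quoted above, and whether it is still the right code is exactly the open question here):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_referral_tag(url: str, tag: str = "vglnk-c319-20") -> str:
    """Return `url` with the Amazon Associates 'tag' query parameter
    set, so purchases made through the link credit that account."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["tag"] = tag  # overwrite any existing tag
    return urlunsplit(parts._replace(query=urlencode(query)))

print(add_referral_tag("https://www.amazon.com/dp/0132350882"))
# https://www.amazon.com/dp/0132350882?tag=vglnk-c319-20
```

Adding the parameter by hand to each link does the same thing; the helper just avoids mangling URLs that already carry a query string.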

Comment author: Douglas_Knight 28 October 2013 05:37:20PM 2 points [-]

This is done automatically in a somewhat different way. So my advice is not to worry about it. But, yes, it shouldn't hurt and it should help in the situation that VigLink doesn't fire. In those comments, Wei Dai agrees that this is the referral code.

Comment author: Vaniver 28 October 2013 06:35:40PM 1 point [-]

Part of the reason I'm asking is that that info might be old. Apparently "ref=nosim" was obsolete two years ago, and I don't know if that's still the right VigLink account, etc.

Comment author: gwern 28 October 2013 05:28:59PM 1 point [-]

I thought LW automatically added affiliate links using VigLink already.

Comment author: ChristianKl 28 October 2013 04:37:57PM 0 points [-]

Maybe it would even make sense to let the forum automatically format links to Amazon that way.

Comment author: Vaniver 28 October 2013 05:34:54PM 3 points [-]

I believe this is supposed to happen already, but have not tested it.

Comment author: Lumifer 28 October 2013 04:41:59PM -2 points [-]

That would be a bad thing.

Comment author: [deleted] 28 October 2013 07:22:09PM 1 point [-]

Why?

Comment author: Lumifer 28 October 2013 08:07:50PM *  4 points [-]

I should have made more explicit that it's my opinion: "a bad thing from my point of view".

It's a quirk of mine -- I dislike marketing schemes injected into non-marketing contexts, especially if that is not very explicit. It is a mild dislike, not like I'm going to quit a site because of it or even write a rant against Amazon links rewriting.

Yes, I understand that Internet runs on such things. No, it does not make me like it more.

Comment author: fubarobfusco 28 October 2013 10:17:59PM *  4 points [-]

It isn't a marketing scheme. It's a monetization scheme.

(Marketing is presenting products for sale. Monetization is finding ways to extract revenue from previously non-revenue-generating activity.)

(And no, the Internet doesn't run on marketing or monetization. A few of your favorite Internet services probably do, though; but probably not all.)

Comment author: Lumifer 28 October 2013 11:02:16PM 2 points [-]

It isn't a marketing scheme. It's a monetization scheme.

I accept the correction.

I have another quirk: I dislike being monetized.

Comment author: shminux 31 October 2013 04:42:52PM *  5 points [-]

The Sequences probably contain more material than an undergraduate degree in philosophy, yet there is no easy way for a student to tell if they understood the material properly. Some posts contain an occasional question/koan/meditation which is sometimes answered in the same or a subsequent post, but these are pretty scarce. I wonder if anyone qualified would like to compile a problem set for each topic? Ideally with unambiguous answers.

Comment author: Vaniver 31 October 2013 07:16:42PM 0 points [-]

I also think this is a worthwhile endeavor, and speculate that the process and results may be useful for development of a general rationality test, which I know CFAR has some interest in.

Comment author: ciphergoth 29 October 2013 04:11:26PM 5 points [-]

Is Our Final Invention available as any kind of e-book anywhere? I can find it in hardback, but not for Kindle or any kind of ePub. I'm not going to start carrying around a pile of paper in order to read it!

Comment author: Douglas_Knight 29 October 2013 09:29:52PM *  0 points [-]

What do you mean by "anywhere"? As Vincent Yu mentions, it is available in the US. It hasn't been published in print or ebook in the UK. When you find it in hardback, it's imports, right? If it is published in the UK, it will probably be available as an ebook, but I don't know if that will happen before the US edition is pirated. If you are generally champing at the bit to read American ebooks, it is worth investing the time to learn whether any ebook sellers fail to check national boundaries. The publisher lists six for this book.

Probably not useful, but the US edition is available in France. (Rights to publish English-language books in countries that don't speak English aren't very valuable, so the monopolies to the US and UK usually include those rights. So if you're in France, you can get the ebook first, regardless of whether it's published in the US or UK. Unless they forget to make it available in France.)

Comment author: VincentYu 29 October 2013 07:24:04PM 0 points [-]

I see a Kindle edition on Amazon.

Comment author: ciphergoth 29 October 2013 10:00:41PM 0 points [-]

That page only shows me a price for the hardcover version. I wonder if it's because I have a UK IP address? How much is the Kindle version?

Comment author: Kaj_Sotala 30 October 2013 07:37:24PM 0 points [-]

I can see it from Finland, lists the price as $16 for me.

Comment author: Douglas_Knight 29 October 2013 10:15:41PM 0 points [-]

I think it is more likely rejecting you based on being logged in than based on IP, since I can see UK and FR results. The Google cache of that link, at both google.com and google.co.uk, shows me the Kindle edition. ($11)

Comment author: Tenoke 29 October 2013 04:28:11PM 0 points [-]

You can order it to http://1dollarscan.com/ and still read it on your kindle.

Comment author: Lumifer 28 October 2013 04:00:02AM 5 points [-]

LW is mostly pure-text with no images except for occasional graphs. Why is that so? Are the reasons technical (due to reddit code), cultural (it's better without images), or historical (it's always been so)?

Comment author: Douglas_Knight 28 October 2013 02:49:03PM *  13 points [-]

I think most people are unaware that they can include images in comments.

[example image embedded here]

Comment author: Khoth 28 October 2013 03:00:05PM 15 points [-]

A state of affairs which I hope continues.

Comment author: Lumifer 28 October 2013 03:55:57PM 6 points [-]

A state of affairs which I hope continues.

Ah, a vote for "it's better this way". Why do you prefer pure text? Is it because of the danger of being overrun with cat pictures and blinking gif smileys?

Comment author: hyporational 29 October 2013 03:47:56AM *  3 points [-]

Let's take that particular image. It covers a huge block that could otherwise have been filled by text, and conveys relatively little information accurately. It completely disrupts my reading for a little while, and getting back to the nice flow takes cognitive effort.

At this moment I'm reading on my phone, and the image fills the whole screen.

Comment author: Gunnar_Zarncke 29 October 2013 04:14:27PM 0 points [-]

It is because text can be copy-pasted and composed easily, since browsers mostly allow selecting any text (this is more difficult in Windows apps).

Whereas images cannot be copy-pasted as simply (mostly you have to find the URL and copy-paste that), and images cannot be composed easily at all (you at least need some picture editor, which often doesn't allow simple copy-paste).

This is the old problem that there is no graphical language, a problem that has evaded GUI designers since the beginning.

Comment author: Lumifer 29 October 2013 05:05:51PM 0 points [-]

Whereas images cannot be copy-pasted as simply

Um. In Firefox, right-click on the image, select Copy Image. Looks pretty simple to me. Pretty sure it works the same way in Chrome as well.

This is the old problem that there is no graphical language.

I think you're missing the point of images. Their advantage is precisely that they are holistic, a gestalt -- you're supposed to take them in whole and not decompose them into elements.

Sure, if you want to construct a sequential narrative out of symbols, images are the wrong medium.

Comment author: Gunnar_Zarncke 29 October 2013 07:50:02PM 0 points [-]

Um. In Firefox, right-click on the image, select Copy Image.

And how do you insert it into a comment?

I think you're missing the point of images. Their advantage is precisely that they are holistic, a gestalt -- you're supposed to take them in whole and not decompose them into elements.

That may be true of some images but not all.

Comment author: gwern 28 October 2013 11:38:34PM 5 points [-]

I'd go with laziness and lack of overt demand. I know that people love graphs and images, but I don't especially feel the need when writing something, and it's additional work (one has to make the image somehow, name it, upload it somewhere, create special image syntax, make sure it's not too big that it'll spill out of the narrow column allotted articles etc). I can barely bring myself to include images for my own little statistical essays, though I've noticed that my more popular essays seem to include more images.

Comment author: luminosity 28 October 2013 10:02:15AM 5 points [-]

I haven't tried authoring an article myself, but a quick look now seems to indicate that you can't upload images, only link to them. This means images must be hosted on third parties: you have to upload them there, and if the host is not directly under your control, they're vulnerable to link rot. It seems like this would be inconvenient.

Comment author: RichardKennaway 29 October 2013 01:25:29PM 1 point [-]

You can upload images to the LessWrong wiki, and then link them from comments or posts. It's a bit roundabout, but the feature is there. The question is then, should it be made easier?

Comment author: Douglas_Knight 29 October 2013 03:55:23PM 1 point [-]

I haven't tried it, but just knowing that it requires logging in to the wiki, I know that it's way too hard and I'll probably use imgur instead.

Comment author: Lumifer 28 October 2013 03:53:01PM *  0 points [-]

you can't upload images, only link to them

That's very common in online forums (for the server load reasons) but doesn't seem to stop some forums from being fairly image-heavy. It's not like there is a shortage of free image-hosting sites.

Yes, I understand the inconvenience argument, but the lack of images at LW is pretty stark.

Comment author: TheOtherDave 28 October 2013 04:43:56PM 1 point [-]

Do you think more people should include graphics in their posts?
Do you think more people should include graphics in their comments?
Do you think the image-heavy forums you mention get some benefit from being image-heavy that we would do well to pursue?

Comment author: Lumifer 28 October 2013 04:50:24PM 5 points [-]

I am hesitant to put forward a recommendation. I don't know yet and approach this as the Chesterton's Fence.

Comment author: TheOtherDave 28 October 2013 05:55:45PM 2 points [-]

That's fair.

I'll observe that I read your comments on this thread as implicitly recommending more images.

This is of course just my reading, but I figured I'd mention it anyway if you are hesitant to make a recommendation for fear of tearing that fence down in ignorance, on the off chance that I'm not entirely unique here.

Comment author: Lumifer 28 October 2013 07:53:32PM 0 points [-]

I understand where you are coming from (asking why this house is not blue is often perceived as implying that this house should be blue) -- but do you think there's any way to at least tone down this implication without putting in an explicit disclaimer?

Comment author: TheOtherDave 28 October 2013 08:18:49PM *  1 point [-]

do you think there's any way to at least tone down this implication without putting in an explicit disclaimer?

Well, if that were my goal, one thing I would try to avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments.

Another thing I would try to avoid is not questioning comments which seem to support doing X, for example by pointing out that it's easy to do, but questioning comments which seem to challenge those comments.

Also, when articulating possible reasons for avoiding X, I would take some care with the emotional connotations of my wording. This is of course difficult, but one easy way to better approximate it is to describe both the pro-X and anti-X positions using the same kind of language, rather than describing just one and leaving the other unmarked.

More generally, asymmetry in how I handle the pro-X and anti-X cases will tend to get read as suggesting partiality; if I want to express impartiality, I would cultivate symmetry.

That said, it's probably easier to just express my preferences as preferences.

Comment author: Lumifer 28 October 2013 08:38:50PM *  0 points [-]

avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments

<shrug> I think it's fine. Reasons that people provide might be strong or might be weak -- it's OK to tap on them to see if they would fall down. I would do the same thing to comments which (potentially) said "Yay images, we need more of them!".

In general, I would prefer not to anchor the expectations of the thread participants, but not at the price of interfering with figuring out what the territory actually looks like.

describe both the pro-X and anti-X positions using the same kind of language

I didn't (and still don't) have a position to describe. Summarizing arguments pro and con seemed premature. This really was just a simple open question without a hidden agenda.

Comment author: TheOtherDave 28 October 2013 08:59:07PM 0 points [-]

All right.

Comment author: [deleted] 28 October 2013 07:36:15PM 0 points [-]

I read them this way too.

Comment author: Mestroyer 28 October 2013 09:05:34PM 0 points [-]

There's a good chance this is not a "fence", deliberately designed by some agent with us in mind, but a fallen tree that ended up there by accident/laziness.

Comment author: ChristianKl 28 October 2013 05:11:21PM 4 points [-]

There's a design choice on the part of LessWrong against avatar images. Text is supposed to speak for itself and not be judged by its author. Avatar images would increase author recognition.

Comment author: Mestroyer 28 October 2013 09:07:11PM 2 points [-]

I think I agree with that. I do read author names, but I read them after I read the text usually. I frequently find myself mildly surprised that I've just upvoted someone I usually downvote, or vice versa.

Comment author: lmm 29 October 2013 05:39:20PM 0 points [-]

And yet names are visually quite distinct. I find authorship much more obvious here than on HN.

Comment author: TheOtherDave 28 October 2013 04:07:42AM 3 points [-]

Some people embed graphics in their articles, and this is seen by many as a good thing. I suspect it's just individuals choosing not to bother with images.

Comment author: gattsuru 28 October 2013 04:18:21PM 2 points [-]

I'd note that the short help for comments does not list the Markdown syntax for embedding images, and even the "more comment formatting help" page is not especially clear. That LessWrong culture encourages folk to write comments before writing Main or Discussion articles makes that fairly relevant.
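For reference, standard Markdown image syntax (which LW's comment box is presumed to accept, per this thread; the URL below is a placeholder) looks like this:

```markdown
![alt text](http://example.com/graph.png "optional title")
```

The alt text is what renders if the linked image ever disappears, which matters given the link-rot concern raised elsewhere in this thread.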

Comment author: [deleted] 28 October 2013 07:17:02PM 2 points [-]

LW is mostly pure-text with no images except for occasional graphs. Why is that so?

Why shouldn't it be?

Comment author: lsparrish 28 October 2013 06:18:19PM *  1 point [-]

I find it harder to engage in System 2 when there are images around. Heck, even math glyphs usually trip me up. That's not to say graphics can't do more good than harm (for example, charts and diagrams can help cross inferential distance quickly, and may serve as useful intuition pumps) but I imagine that more images would mean more reliance on intuition and less on logic, hence less capacity for taking things to analytical extremes. So it could be harmful (given the nature of the site) to introduce more images.

Comment author: hyporational 29 October 2013 03:40:25AM 0 points [-]

I like my flow. I don't have anything against images if they are arranged in a way that doesn't disrupt reading. I'm not sure if the LW platform allows for that.

Comment author: ChrisHallquist 28 October 2013 11:28:09PM 0 points [-]

Reading this comment... I suddenly feel very odd about the fact that I failed to include images in my Neuroscience basics for LessWrongians post, in spite of in a couple places saying "an image might be useful here." Though the lack of images was partly due to me having trouble finding good ones, so I won't change it at the moment.

Comment author: sixes_and_sevens 29 October 2013 11:40:42AM *  4 points [-]

Several months ago I set up a blog for writing intelligent, thought-provoking stuff. I've made two posts to it, and one of those is a photo of a page in Strategy of Conflict, because it hilariously featured the word "retarded". Something has clearly gone wrong somewhere.

I'm pretty sure there are other would-be bloggers on here who experience similar update-discipline issues. Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?

EDIT: ITT: I'm a bit of a dick! Sorry, everyone!

Comment author: Vaniver 29 October 2013 04:40:02PM 7 points [-]

Something has clearly gone wrong somewhere.

Are you sure the error is that you're posting too little to the blog, rather than that you're trying to have a blog in the first place?

Comment author: sixes_and_sevens 29 October 2013 05:02:10PM 0 points [-]

Is this intended as snark, or an actual helpful comment?

Assuming the latter, I have what I consider to be sound motives for maintaining a blog. Unfortunately, I don't have sound habits for maintaining a blog, coupled with a bit of a cold-start problem. I doubt I am the only person in this position, and believe social commitment mechanisms may be a possible avenue for improvement.

Comment author: Vaniver 29 October 2013 05:19:24PM 6 points [-]

I was going for actual helpful comment. I personally don't have a blog because several attempts to have a blog failed. Afterwards, I was fairly sure that the reason why my blogs failed was because I like conversations too much and monologuing too little. I found that forums both had a reliable stream of content to react to, as well as a somewhat reliable stream of content to build off of. The incentive structure seemed a lot nicer in a number of ways.

More broadly, I think a good habit when plans fail is to ask the question "What information does this failure give me?", rather than the limited question "why did this plan fail me?". Sometimes you should revise the plan to avoid that failure mode; other times you should revise the plan to have entirely different goals.

My immediate practical suggestion is to create a LW draft editing circle. This won't give you the benefits of a blog distinct from LW, but eliminates most of the cold-start problem. It also adds to the potential interest base people who have ideas for posts but who don't have the confidence in their ability to write a post that is socially acceptable to LW (i.e. doesn't break some hidden protocol).

Comment author: hyporational 31 October 2013 05:35:03PM 3 points [-]

If you have any old material, you could consider posting those to get initial readership, even if you don't consider them especially high quality.

I have what I consider to be sound motives for maintaining a blog.

I'd interpret Vaniver's comment more generally to mean that parts of your brain might disagree with this assessment, and you experience this as procrastination.

Comment author: philh 29 October 2013 11:45:46PM 2 points [-]

Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?

Yes.

(My current excuse for not even having made one post is that I started to experience wrist pain, and didn't want to make it worse by doing significant typing at home. It seems to be getting better now.)

Comment author: Lumifer 29 October 2013 05:38:46PM 2 points [-]

Consider your incentives. Actual (non-imaginary) incentives in your current life.

What are the incentives for maintaining a blog? What do you get (again, actually, not supposedly) when you make a post? What are the disincentives? (e.g. will a negative comment spoil your day?) Is there a specific goal you're trying to reach? Is posting to your blog a step on the path to the goal?

Comment author: sixes_and_sevens 29 October 2013 05:55:54PM 0 points [-]

Are you requesting answers for my specific case, or just providing me with advice?

(As an observation, which isn't meant to be a hostile response to your comment, people seem very keen to offer advice on LW, even when none has been requested.)

Comment author: Lumifer 29 October 2013 06:44:02PM 1 point [-]

Advice, I guess, in the sense that I think these are the questions you'd be interested in knowing the answers to (for yourself, not for posting here).

Comment author: Tenoke 30 October 2013 07:38:39AM 3 points [-]

Count me in if anything comes out of it.

Comment author: lmm 30 October 2013 03:16:58AM 1 point [-]

Maybe you should consider joining an existing blogging community -- LiveJournal or Tumblr or Medium? They're good at giving you social prompts to write something.

Comment author: sixes_and_sevens 31 October 2013 12:04:26AM 4 points [-]

In retrospect, my previous response to this does seem pretty unwarranted. This was a perfectly reasonable and relevant comment that caught me at a bad time. I'd like to apologise.

Comment author: [deleted] 01 November 2013 03:46:21AM 0 points [-]

If I wanted to update a blog regularly, I would consider it imperative to put "update my blog" as a repeating item in my to-do list. For me, relying on memory is an atrocious way to ensure that something gets done; having a to-do list is enormously more effective.

Comment author: Ritalin 29 October 2013 12:56:45PM 0 points [-]

I tried translating the sequences. Gave up on the third post.

Comment author: JoshuaZ 28 October 2013 04:21:36AM 10 points [-]

Recent work suggests that dendrites may be able to do substantial computation themselves. This implies that getting decent uploads or a decent preservation from cryonics may require a more fine-grained approach than is often expected. Unfortunately, the paper itself seems not to be online yet, but it is by the same group which previously suggested that dendrites could be partially responsible for memory storage.

Comment author: VincentYu 29 October 2013 02:23:44AM 4 points [-]

Smith et al. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo.

Abstract (emphasis mine):

Neuronal dendrites are electrically excitable: they can generate regenerative events such as dendritic spikes in response to sufficiently strong synaptic input. Although such events have been observed in many neuronal types, it is not well understood how active dendrites contribute to the tuning of neuronal output in vivo. Here we show that dendritic spikes increase the selectivity of neuronal responses to the orientation of a visual stimulus (orientation tuning). We performed direct patch-clamp recordings from the dendrites of pyramidal neurons in the primary visual cortex of lightly anaesthetized and awake mice, during sensory processing. Visual stimulation triggered regenerative local dendritic spikes that were distinct from back-propagating action potentials. These events were orientation tuned and were suppressed by either hyperpolarization of membrane potential or intracellular blockade of NMDA (N-methyl-D-aspartate) receptors. Both of these manipulations also decreased the selectivity of subthreshold orientation tuning measured at the soma, thus linking dendritic regenerative events to somatic orientation tuning. Together, our results suggest that dendritic spikes that are triggered by visual input contribute to a fundamental cortical computation: enhancing orientation selectivity in the visual cortex. Thus, dendritic excitability is an essential component of behaviourally relevant computations in neurons.

Comment author: [deleted] 30 October 2013 02:48:00PM *  12 points [-]

Silicon Valley's Ultimate Exit, a speech at Startup School 2013 by Balaji Srinivasan. He opens with the statement that America is the Microsoft of nations, goes into a discussion of Voice, Exit, and good governance, and continues with the wonderful observation that:

"There’s four cities that used to run the United States in the postwar era: Boston with higher ed; New York City with Madison Avenue, books, Wall Street, and newspapers; Los Angeles with movies, music, Hollywood; and, of course, DC with laws and regulations, formally running it."

He names this the Paper Belt, and claims the Valley has been unintentionally dumping horse heads in all of their beds for the past 20 years. I would call it The Cathedral and note the NYT does not approve of this kind of talk:

First the slave South, now this.

No seriously, that is the very first line.

Comment author: [deleted] 05 November 2013 03:52:09PM 5 points [-]
Comment author: NancyLebovitz 07 November 2013 03:02:11PM 2 points [-]

I love this speech, but I suspect it's overoptimistic. I believe that bitcoin will be illegal as soon as it's actually needed.

Still, I appreciate his appreciation of immigration/emigration. I'm convinced that immigration/emigration gets less respect than staying and fighting because it's less dramatic, less likely to get people killed, and more likely to work.

Comment author: Lumifer 07 November 2013 03:58:51PM 3 points [-]

I believe that bitcoin will be illegal as soon as it's actually needed.

That is likely, but note that torrenting Lady Gaga's mp3s is also illegal and yet I have absolutely zero difficulty in finding such torrents on the 'net.

Comment author: NancyLebovitz 07 November 2013 04:06:24PM 0 points [-]

Maintaining a currency takes a much more complicated information structure than letting people make unlimited copies of something.

Comment author: Lumifer 07 November 2013 04:18:19PM 1 point [-]

What do you mean, "maintaining"? Bitcoin was explicitly designed to function in a distributed manner without the need for any central authority.

Comment author: RolfAndreassen 07 November 2013 04:50:13PM 1 point [-]

And consequently it has a much more complicated information structure than torrents do. :) But this aside, while you can likely run the Bitcoin economy as such, if Bitcoins cannot be exchanged for dollars or directly for goods and services, they are worthless; and this is a bottleneck where a government has a lot of infrastructure to insert itself. I suggest that, if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents: It won't be impossible, but it'll be much more difficult than setting up a free client and clicking "download".

Comment author: Lumifer 07 November 2013 05:04:35PM 2 points [-]

if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents

The differences between the physical and the virtual worlds are very relevant here.

Silk Road was blatantly illegal and it took the authorities years to bust its operator, a US citizen. Once similar things are run by, say, Malaysian Chinese out of Dubai with hardware scattered across the world, the cost for the US authorities to combat them would be... unmanageable.

Comment author: drethelin 28 October 2013 06:51:15PM 11 points [-]

What probability should I assign to being completely wrong and brainwashed by Lesswrong? What steps would one take to get more actionable information on this topic? For each new visitor who comes in and accuses us of messianic groupthink, how far should I update in the direction of believing them? Am I going to burn in counterfactual hell for even asking?

Comment author: Mitchell_Porter 29 October 2013 08:50:12AM 10 points [-]

In "The Inertia of Fear and the Scientific Worldview", by the Russian computer scientist and Soviet-era dissident Valentin Turchin, in the chapter "The Ideological Hierarchy", Soviet ideology was analyzed as having four levels: philosophical level (e.g. dialectical materialism), socioeconomic level (e.g. social class analysis), history of Soviet Communism (the Party, the Revolution, the Soviet state), and "current policies" (i.e. whatever was in Pravda op-eds that week).

According to Turchin, most people in the USSR regarded the day-to-day propaganda as empty and false, but a majority would still have agreed with the historical framework, for lack of any alternative view; and the number who explicitly questioned the philosophical and socioeconomic doctrines would be exceedingly small. (He appears not to be counting religious people here, who numbered in the tens of millions, and whom he describes as a separate ideological minority.)

BaconServ writes that "LessWrong is the focus of LessWrong", though perhaps the idea would be more clearly expressed as, LessWrong is the chief sacred value of LessWrong. You are allowed to doubt the content, you are allowed to disdain individual people, but you must consider LW itself to be an oasis of rationality in an irrational world.

I read that and thought, meh, this is just the sophomoric discovery that groupings formed for the sake of some value have to value themselves too; the Omohundro drive to self-protection, at work in a collective intelligence rather than in an AI. It also overlooks the existence of ideological minorities who think that LW is failing at rationality in some way, but who hang around for various reasons.

However, these layered perspectives - which distinguish between different levels of dissent - may be useful in evaluating the ways in which one has incorporated LW-think into oneself. Of course, Less Wrong is not the Soviet Union; it's a reddit clone with meetups that recruits through fan fiction, not a territorial superpower with nukes and spies. Any search for analogies with Turchin's account, should look for differences as well as similarities. But the general idea, that one may disagree with one level of content but agree with a higher level, is something to consider.

Comment author: Viliam_Bur 29 October 2013 10:17:16AM 6 points [-]

you must consider LW itself to be an oasis of rationality in an irrational world.

Or not.

Comment author: hyporational 29 October 2013 03:16:07AM *  8 points [-]

Could people list philosophy-oriented internet forums with a high concentration of smart people and no significant memetic overlap, so that one could test this? I don't know of any, and I think that's dangerous.

Comment author: JDelta 29 October 2013 01:41:20PM 2 points [-]

I would love to see this as well

Comment author: ChristianKl 28 October 2013 09:33:52PM 16 points [-]

If Lesswrong were good at brainwashing, I would expect many more people to have signed up for cryonics.

What steps would one take to get more actionable information on this topic?

Spend time outside of Lesswrong and discuss with smart people. Don't rely on a single community to give you your map of the world.

Comment author: Lumifer 29 October 2013 12:23:44AM *  4 points [-]

What probability should I assign to being completely wrong and brainwashed by Lesswrong?

Wrong about what? Different subjects call for different probabilities.

The probability that Bayes' theorem is wrong is vanishingly small. The probability that the UFAI risk is completely overblown is considerably higher.

LW "ideology" is an agglomeration in the sense that accepting (or not) a part of it does not imply acceptance (or rejection) of other parts. One can be a good Bayesian, not care about UFAI, and be signed up for cryonics -- no logical inconsistencies here.

Comment author: TheOtherDave 28 October 2013 07:05:20PM *  3 points [-]

What steps would one take to get more actionable information on this topic?

I'd suggest starting by reading up on "brainwashing" and developing a sense of what signs characterize it (and, indeed, if it's even a thing at all).

For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?

Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
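The point about repeated testimony versus new evidence can be made quantitative. Below is a toy odds-form Bayes update; all the numbers (the prior, the likelihood ratio) are invented purely for illustration, not taken from anything in the thread:

```python
# Toy model (invented numbers): why a dozen people repeating a theory based
# on ONE shared observation adds little evidence compared with reports
# grounded in genuinely independent observations.

def posterior(prior, likelihood_ratio, n_independent_observations):
    """Odds-form Bayes update: multiply the prior odds by the likelihood
    ratio once per *independent* observation, then convert back to a
    probability."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** n_independent_observations
    return odds / (1 + odds)

prior = 0.1  # assumed prior credence in the accusation
lr = 2.0     # assumed likelihood ratio of a single observation

# Twelve accusers all relaying the same single observation: one update.
p_shared = posterior(prior, lr, 1)

# Five accusers each reporting an independent observation: five updates.
p_indep = posterior(prior, lr, 5)

print(round(p_shared, 3))  # 0.182
print(round(p_indep, 3))   # 0.78
```

In this sketch, the twelve correlated accusers move the posterior exactly as far as one accuser would, while five independent observers move it much further, which is the "pay attention to new evidence" point above.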

Comment author: shminux 28 October 2013 07:18:11PM *  1 point [-]

Note that your suggestions are all within the framework of the "accepted LW wisdom". The best you can hope for is to detect some internal inconsistencies in this framework. One's best chance of "deconversion" is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that "worked" for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.

Comment author: TheOtherDave 28 October 2013 08:05:44PM 0 points [-]

Brainwashing (which is one thing drethelin asked about the probability of) is not an LW concept, particularly; I'm not sure how reading up on it is remaining inside the "accepted LW wisdom."

If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I'm being brainwashed. And, yes, if I conclude that it's likely that I'm being brainwashed, there are various deconversion techniques I can use to negate that.

Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.

Being completely wrong, admittedly, (the other thing drethelin asked about the probability of) doesn't lend itself to this approach so well... it's hard to know where to even start, there.

Comment author: ChristianKl 28 October 2013 09:39:32PM *  2 points [-]

If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I'm being brainwashed.

Reading up on brainwashing can mean reading gwern's essay, which concludes that brainwashing doesn't really work. Of course, that's exactly what someone who wants to brainwash you would say, isn't it?

Comment author: TheOtherDave 28 October 2013 10:59:10PM 0 points [-]

Sure. I'm not exactly sure why you'd choose to interpret "read up on brainwashing" in this context as meaning "read what a member of the group you're concerned about being brainwashed by has to say about brainwashing," but I certainly agree that it's a legitimate example, and it has exactly the failure mode you imply.

Comment author: Nornagest 28 October 2013 09:57:59PM *  0 points [-]

For what it's worth, gwern's findings are consistent with mine (see this thread). I'd rather restrict "brainwashing" to coercive persuasion, i.e. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It's difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism -- more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (i.e. converting to a fiancee's religion) are much more common -- but if you read between the lines they seem to be higher.

Deprogramming techniques aren't much better, incidentally -- from everything I've read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn't apply most of them to yourself, and wouldn't want to in any case.

Comment author: [deleted] 28 October 2013 07:30:57PM 10 points [-]

The first thing you should probably do is narrow down what specifically you feel you may be brainwashed about. I posted some possible sample things below. Since you mention messianic groupthink as a specific concern, some of these will relate to Yudkowsky, and some of them are Less Wrong versions of cult-related control questions (things that are associated with cultishness in general, just rephrased to be Less Wrongish).

Do you/Have you:

1: Signed up for Cryonics.

2: Aggressively donated to MIRI.

3: Checked for updates on HPMOR more often than Yudkowsky said there would be, on the off chance he updated early.

4: Gone to meetups.

5: Went out of your way to see Eliezer Yudkowsky in person.

6: Spend time thinking, when not on Less Wrong: "That reminds me of Less Wrong/Eliezer Yudkowsky."

7: Played an AI Box experiment with money on the line.

8: Attempted to engage in a quantified self experiment.

9: Cut yourself off from friends because they seem irrational.

10: Stopped consulting other sources outside of Less Wrong.

11: Spent money on a product recommended by someone with high Karma (Example: Metamed)

12: Tried to recruit other people to Less Wrong and felt negatively if they declined.

13: Written rationalist fanfiction.

14: Decided to become polyamorous.

15: Feel as if you have sinned any time you receive even a single downvote.

16: Gone out of your way to adopt Less Wrong styled phrasing in dialogue with people that don't even follow the site.

For instance, after reviewing that list, I increased my certainty I was not brainwashed by Less Wrong because there are a lot of those I haven't done or don't do, but I also know which questions are explicitly cult related, so I'm biased. Some of these I don't even currently know anyone on the site who would say yes to them.

Comment author: drethelin 29 October 2013 03:57:02AM 1 point [-]
  1. No

  2. I'm a top 20 donor

  3. Nope

  4. yes.

  5. Not really? That was probably some motivation for going to a mincamp but not most of it.

  6. Nope.

  7. Nope.

  8. a tiny amount? I've tracked weight throughout diet changes.

  9. Not that I can think of. Certainly no one closer than a random facebook friend.

  10. Nope.

  11. I've spent money on Modafinil after it's been recommended on here. I could count Melatonin but my dad told me about that years ago.

  12. Yes.

  13. Nope.

  14. I was in an open relationship before I ever heard of LessWrong.

  15. HAHAHAHAHAH no.

  16. This one is hard to analyze, I've talked about EM hell and so on outside of the context of Lesswrong. Dunno.

  17. Seriously considering moving to the bay area.

Comment author: jkaufman 05 November 2013 08:51:01PM 0 points [-]

Beware: you've created a lesswrong purity test.

Comment author: ChrisHallquist 28 October 2013 11:37:19PM 0 points [-]

I'm in the process of doing 1, have maybe done 2 depending on your definition of aggressively (made only a couple donations, but largest was ~$1000), and done 4.

Oh, and 11, I got Amazon Prime on Yvain's recommendation, and started taking melatonin on gwern's. Both excellent decisions, I think.

And 14, sort of. I once got talked into a "polyamorous relationship" by a woman I was sleeping with, no connection whatsoever to LessWrong. But mostly I just have casual sex and avoid relationships entirely.

Comment author: DanielLC 30 October 2013 03:59:39AM 1 point [-]

Am I going to burn in counterfactual hell for even asking?

You'd be crazy not to ask. The views of people on this site are suspiciously similar. We might agree because we're more rational than most, but you'd be a fool to reject the alternative hypothesis out of hand. Especially since they're not mutually exclusive.

Comment author: Risto_Saarelma 28 October 2013 07:50:48PM *  1 point [-]

What steps would one take to get more actionable information on this topic?

Use culture to contrast with culture. Avoid being a man of a single book and get familiar with some past or present intellectual traditions that are distant from the LW cultural cluster. Try to get a feel for the wider cultural map and how LW fits into it.

Comment author: Nisan 28 October 2013 07:38:39PM 1 point [-]

Less Wrong has some material on this topic :)

Seriously though, I'd love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I've seen some examples already, but more is good.

Comment author: JDelta 29 October 2013 01:57:57PM 0 points [-]

The biggest weakness, in my opinion, with purely (or almost purely) probabilistic reasoning is the fact that it cannot ultimately do away with us relying on a number of (ultimately faith/belief based) choices as to how we understand our reality.

The existence of the past and future (and within most people's reasoning systems, the understanding of these as linear) are both ultimately postulations that are generally accepted at face value, as well as the idea that consciousness/awareness arises from matter/quantum phenomena not vice versa.

Comment author: TheOtherDave 29 October 2013 02:19:32PM 4 points [-]

The biggest weakness, in my opinion, with purely (or almost purely) probabilistic reasoning is the fact that it cannot ultimately do away with us relying on a number of (ultimately faith/belief based) choices as to how we understand our reality.

In your opinion, is there some other form of reasoning that avoids this weakness?

Comment author: JDelta 29 October 2013 11:46:05PM 0 points [-]

That's a very complicated question but I'll try to do my best to answer.

In many ancient cultures, they used two words for the mind, or for thinking, and it is still used figuratively today. "In my heart I know..."

In my opinion, in terms of expected impact on the course of life for a given subject, generally, more important than their understanding of Bayesian reasoning, is what they 'want' ... how they define themselves. Consciously, and unconsciously.

For "reasoning", no I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (is everyone else p-zombies? am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one's "presumptive model" of reality and of themselves (both strictly intertwined psychologically) should be perfected with just as much effort (if not more) as we spend perfecting our probabilistic reasoning.

Probabilities can't cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.

When one is confident, and subconsciously/instinctively aware that they are doing what they should be doing, thinking how they should be thinking, that their 'foundation' is solid (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.) they then can be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.

Those instinctive presumptions, and life-defining self image do have a strong quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.

Maximise your own effectiveness. Perfect how your mind works, how you think of yourselves and others (again, instinctive opinions, gut feelings, more than conscious thought, although conscious thought is extremely important). Then when you start teaching it and filling it with data you'll make a lot fewer mistakes.

Comment author: TheOtherDave 30 October 2013 12:24:06AM 1 point [-]

All right. Thanks for clarifying.

Comment author: passive_fist 29 October 2013 04:49:48AM 0 points [-]

I'd say you should assign a very high probability for your beliefs being aligned in the direction LessWrong's are, even in cases where such beliefs are wrong. It's just how the human brain and human society works; there's no getting around it. However, how much of that alignment is due to self-selection bias (choosing to be a part of LessWrong because you are that type of person) or brainwashing is a more difficult question.

Comment author: ChrisHallquist 28 October 2013 11:42:01PM 0 points [-]

For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?

As long as the number is small, I wouldn't update at all, because I already expect slow trickle of those people on my current information, so seeing that expectation confirmed isn't new evidence. If LW achieved a Scientology-like place in popular opinion, though, I'd be worried.

Am I going to burn in counterfactual hell for even asking?

No.

Comment author: Vaniver 30 October 2013 06:02:09PM 3 points [-]

Eliezer posted to Facebook:

In My Little Pony: Friendship is Signaling, Twilight Sparkle and her companions defeat Nightmare Moon by using the Elements of Cynicism to prove to her that she doesn't really care about darkness.

My stab at it. I'm probably going to post it to FIMFiction in a day or so, but it's basically a first draft at this point and could doubtless use editing / criticism.

Comment author: VincentYu 29 October 2013 03:58:20AM *  3 points [-]

Please add the open_thread tag (with the underscore) to the post.

Comment author: Vladimir_Nesov 29 October 2013 09:08:44AM 2 points [-]

Fixed.

Comment author: NancyLebovitz 30 October 2013 07:24:50PM 2 points [-]

Since I'm not sure whether this advice would be welcome in a recent discussion, I'm just going to start cold by describing something which has worked for me.

In an initial post, I explain what kind of advice I'm looking for, and I'm specific about preferring advice from people who've gotten improvement in [specific situation]. I normally say other advice is welcome, but you'd be amazed how little of it I get.

I believe it's important to head off unwanted advice early. I can't remember whether I normally put my limiting request at the beginning or end of a post, but I think it helps if you can keep your commenters from becoming a mutually reinforcing advice-giving crowd.

I suggest that starting by being specific about what you do and don't want is (among other things) an assertion of status, and this has some effects on the advice-giving dynamic.

I normally do want advice from people who've had appropriate experience. Has anyone tried being clear at the beginning that they don't want advice?

Comment author: TheOtherDave 30 October 2013 07:38:26PM 1 point [-]

In my social circle, explicitly tagging posts as "I'm not looking for advice" seems to work pretty well at discouraging advice. I don't do it often myself though.

And you're right, of course, that it is among other things an assertion of status, though of course it's also a useful piece of explicit information.

Comment author: Vaniver 30 October 2013 01:51:24PM 2 points [-]

Steve Sailer on the Trolley Problem: [1] and [2]. Basically, to what degree does people's unwillingness in the thought experiment to push the fat man reflect the realization that pushing the fat man is an inherently riskier prospect than pulling a lever?

Comment author: Alejandro1 31 October 2013 07:56:48PM 8 points [-]

Noah Millman also comments:

“Throw the switch or not” is a natural choice actually presented by real conditions – switches imply choices by definition. “Push the fat man or don’t” isn’t a natural choice presented by real conditions – it’s a scenario concocted for an experiment. By definition, those cannot be the only options in the universe. And our brains can tell.

It seems to me that what characterizes the people who choose the “logical” answer – push the fat man – is not that they gave a less-emotional response but that they gave a less-intuitive, less-gestalt-based response. They were willing to accept the conditions of the problem as given without question. That’s a response to authority – they are turning off the part of their brains that feels the situation as a real one, and sticking with the part of the brain that reasons from unquestionable givens to undeniable conclusions.

There’s a place for that kind of response – but I would argue that answering questions of great moral import is emphatically not that place. Indeed, from the French Revolution to the Iraq War, modernity is littered with the corpses of those whose deaths were logically necessary for some hypothesized outcome that could not actually have been known with remotely the necessary level of certainty. In that regard, I suspect an aversion to following logic problems to fatal conclusions is not merely a kind of moral appendix handed down from our Stone Age ancestors, but remains positively adaptive.

Comment author: Ritalin 29 October 2013 12:50:06PM 2 points [-]

I've been out of things for a while; how goes Eliezer's book?

Comment author: [deleted] 29 October 2013 11:12:46PM *  2 points [-]

The rationality book?

this is the last I've seen

http://lesswrong.com/lw/i3a/miris_2013_summer_matching_challenge/9gth

Comment author: NancyLebovitz 29 October 2013 03:50:04PM 0 points [-]

http://hpmor.com/notes/progress-13-10-0/

Eliezer says he'll do a progress report on 11/1. I haven't heard any news otherwise.

Comment author: Tenoke 29 October 2013 04:06:06PM *  2 points [-]

I don't think that 'Eliezer's book' refers to HPMOR. I think it is more likely that he is asking about the book based on the Sequences (for which this is probably the most recent thread).

Comment author: closeness 29 October 2013 11:20:31AM 3 points [-]

Why isn't there a pill that makes a broken heart go away?

Comment author: Lumifer 30 October 2013 03:08:37PM 2 points [-]

Just time is both necessary and sufficient.

Comment author: Kaj_Sotala 29 October 2013 04:37:57PM 2 points [-]
Comment author: cousin_it 29 October 2013 02:29:53PM 2 points [-]

Something like that was discussed previously. Kevin recommended antidepressants in the comments.

Comment author: Ritalin 29 October 2013 01:00:11PM *  2 points [-]

Ask Gwern, he probably knows something that's good enough.

Comment author: Nisan 29 October 2013 07:00:54PM 1 point [-]

It will get better over time.

Comment author: joaolkf 05 November 2013 11:06:20AM 0 points [-]
Comment author: Pablo_Stafforini 28 October 2013 03:53:01PM *  3 points [-]

The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:

  • Codecademy. For learning computer languages (Ruby, Python, PHP, and others).
  • Duolingo. For learning the major Indo-European languages (English, German, French, Italian, Portuguese and Spanish).
  • Khan Academy. For learning maths. They also teach several other disciplines, but they offer mostly videos with only a few exercises.
  • Memrise. For memorizing stuff, especially vocabulary. The courses vary in quality; the ones on Mandarin Chinese are excellent.
  • Vocabulary.com. For memorizing English vocabulary.

Are you familiar with other good resources not listed above? If so, please mention them in the comments.

(Crossposted to my blog.)

Comment author: Emile 28 October 2013 06:20:57PM 1 point [-]

I've been using Anki daily these past two or three months, and regularly-but-not-quite-daily for maybe a year before that. I use it for a fair number of different things (code, psychology, languages, ...). I recommend it, though it's not really "gamified".

Comment author: ColonelMustard 28 October 2013 04:20:24AM 3 points [-]

Not sure where this goes: how can I submit an article to discussion? I've written it and saved it as a draft, but I haven't figured out a way to post it.

Comment author: shminux 28 October 2013 05:01:27AM 4 points [-]

You don't have enough karma to post yet. Consider making some quality comments first.

Comment author: ColonelMustard 28 October 2013 08:44:11AM 3 points [-]

Thank you! One more - how much karma do I need? I was under the impression one needed 2 to post to discussion (20 to main), but presumably this is not the case. Is there an up to date list?

Comment author: Tenoke 28 October 2013 10:22:36AM 2 points [-]

I think the requirement is currently 5 karma to post to discussion.

Comment author: Tenoke 29 October 2013 11:26:33AM *  6 points [-]

Am I the only person getting more and more annoyed by the cult thing? If the whole 'lesswrong is a cult' thing is not a meme that's spreading just because people are jumping on the bandwagon, then I don't know what is. Can you seriously not tell? Additionally, from my POV it seems like people starting 'are we a cult' threads/conversations do it mainly for signaling purposes.

Also, I bet new members wouldn't usually even think about whether we are a cult or not if older members were not talking about it like it is a real possibility all the bloody time. (and yes I know, the claim is not made only by people who are part of the community)

/rant

Comment author: Mestroyer 29 October 2013 04:27:54PM 7 points [-]

It especially annoys me when people respond to evidence-based arguments that LessWrong is not a cult with, "Well where did you come to believe all that stuff about evidence, LessWrong?"

Before LessWrong, my epistemology was basically a more clumsy version of what it is now. If you described my present self to my past self, and said "Is this guy a cult victim?" he would ask for evidence. He wouldn't be thinking in terms of Bayes's theorem, but he would be thinking with a bunch of verbally expressed heuristics and analogies that usually added up to the same thing. I used to say things like "Absence of evidence is actually evidence of absence, but only if you would expect to see the evidence if the thing was true and you've checked for the evidence," which I was later delighted to see validated and formalized by probability theory.
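That heuristic does fall straight out of probability theory. A minimal sketch, with H the hypothesis and E the evidence you would expect to see if H were true:

```latex
% If observing E would count as evidence for H:
P(E \mid H) > P(E \mid \neg H)
% then, since P(\neg E) is a weighted average of
% P(\neg E \mid H) and P(\neg E \mid \neg H),
\;\Longrightarrow\; P(\neg E \mid H) < P(\neg E)
% so checking for E and failing to find it lowers P(H):
\;\Longrightarrow\; P(H \mid \neg E)
  = \frac{P(\neg E \mid H)}{P(\neg E)}\,P(H) \;<\; P(H).
```

The size of the decrease scales with how strongly E was expected under H, which matches the "only if you would expect to see the evidence" proviso.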

You could of course say, "Well, that's not actually your past self, that's your present self (the cult victim)'s memories, which are distorted by mad thinking," but then you're getting into brain-in-a-vat territory. I have to think using some process. If that process is wrong but unable to detect its own wrongness, I'm screwed. Adding infinitely recursive meta-doubt to the process just creates a new one to which the same problem applies.

I'm not particularly worried that my epistemology is completely wrong, because the pieces of my epistemology, when evaluated by my epistemology, appear to do what they're supposed to. I can see why they would do what they're supposed to by simulating how they would work, and they have a track record of doing what they're supposed to. There may be other epistemologies that would evaluate mine as wrong. But they are not my epistemology, so I don't believe what they recommend me to believe.

This is what someone with a particular kind of corrupt epistemology (one that was internally consistent) would say. But it is also the best anyone with an optimal epistemology could say. So why Mestroyer::should my saying it be cause for concern? (this is an epistemic "Mestroyer::should")

Comment author: passive_fist 29 October 2013 08:06:38PM 4 points [-]

I can identify with this. Reading through the sequences wasn't a magical journey of enlightenment, it was more "Hey, this is what I thought as well. I'm glad Eliezer wrote all this down so that I don't have to."

Comment author: Mitchell_Porter 29 October 2013 08:07:19PM 2 points [-]

Something more than a salon and less than a movement, then... English has a big vocabulary, there must be a good word for it.

Comment author: Risto_Saarelma 30 October 2013 07:36:30AM 2 points [-]

What's wrong with 'forum'?

Comment author: Lumifer 31 October 2013 08:27:14PM 0 points [-]

What's wrong with 'forum'?

Ah, just like 4Chan then? X-D

Comment author: NancyLebovitz 29 October 2013 03:48:19PM 1 point [-]

From my POV it seems like people starting a 'are we a cult' threads/conversations do it mainly for signaling purposes.

I think people also say things just to get conversation going. We need to look at making it easier to find useful ways of getting attention.

Comment author: TheOtherDave 29 October 2013 06:56:58PM 0 points [-]

We need to look at making it easier to find useful ways of getting attention.

Can you expand on this?

Comment author: NancyLebovitz 29 October 2013 07:54:03PM 1 point [-]

I believe that one of the reasons people are boring and/or irritating is that they don't know good ways of getting attention. However, being clever or reassuring or whatever else might adequately repay attention isn't necessarily easy. Could it be made easier?

Comment author: TheOtherDave 29 October 2013 08:45:33PM 6 points [-]

(nods) Thank you.

I wonder how far a community interested in solving the "boring/irritating people" problem could get by creating a forum whose stated purpose was to respond in an engaged, attentive way to anything anyone posts there. It could be staffed by certified volunteers who were trained in techniques of nonviolent communication and committed to continuing to engage with anyone who posted there, for as long as they chose to keep doing so, and nobody but staff would be permitted to reply to posters.

Perhaps giving them easier-to-obtain attention will cause them to leave other forums where attention requires being clever or reassuring or similarly difficult valuable things.

I'm inclined to doubt it, though.

I am somewhat tangentially reminded of a "suicide hotline" (more generally, a "call us if you're having trouble coping" hotline) where I went to college, which had come to the conclusion that they needed to make it more okay to call them, get people in the habit of doing so, so that people would use their service when they needed it. So they explicitly started the campaign of "you can call us for anything. Help on your problem sets. The Gross National Product of Kenya. The average mass of an egg. We might not know, but you can call us anyway." (This was years before the Web, let alone Google, of course.)

Comment author: hyporational 31 October 2013 05:12:54PM 0 points [-]

When will these rants go meta?

Comment author: drethelin 29 October 2013 06:40:05PM 0 points [-]

Comment author: shminux 29 October 2013 04:30:53PM *  0 points [-]

You say "cult" like it's a bad thing.

Seriously though, using a term with negative connotations is not a rational approach to begin with. Like asking "is this woman a slut?". It presumes that a higher-than-average number of sexual partners is necessarily bad or immoral. Back to the cult thing: why does this term have a derogatory connotation? Says Wikipedia:

In the mass media, and among average citizens, "cult" gained an increasingly negative connotation, becoming associated with things like kidnapping, brainwashing, psychological abuse, sexual abuse and other criminal activity, and mass suicide. While most of these negative qualities usually have real documented precedents in the activities of a very small minority of new religious groups, mass culture often extends them to any religious group viewed as culturally deviant, however peaceful or law abiding it may be.

Secular cult opponents like those belonging to the anti-cult movement tend to define a "cult" as a group that tends to manipulate, exploit, and control its members. Specific factors in cult behavior are said to include manipulative and authoritarian mind control over members, communal and totalistic organization, aggressive proselytizing, systematic programs of indoctrination, and perpetuation in middle-class communities.

Some of the above clearly does not apply ("kidnapping"), and some clearly does ("systematic programs of indoctrination, and perpetuation in middle-class communities" -- CFAR workshops, Berkeley rationalists, meetups). Applicability of other descriptions is less clear. Do the Sequences count as brainwashing? Does the (banned) basilisk count as psychological abuse?

Matching of LW activities and behaviors to those of a cult (a New Religious Movement is a more neutral term) does not answer the original implicit accusation: that becoming affiliated, even informally, with LW/CFAR/MIRI is a bad thing, for some definition of "bad". It is this definition of badness that is worth discussing first, when a cult accusation is hurled, and only then whether a certain LW pattern is harmful in this previously defined way.

Comment author: drethelin 29 October 2013 06:31:12PM 4 points [-]

Lesswrong is the Rocky Horror of atheist/skeptic groups!

Comment author: Tenoke 29 October 2013 04:33:00PM *  5 points [-]

You say "cult" like it's a bad thing.

*I say "cult" like it carries negative connotations for most people.

Comment author: shminux 29 October 2013 05:02:18PM 1 point [-]

I expanded on what I meant in my reply. Sorry about the ninja edit.

Comment author: ChristianKl 29 October 2013 02:20:56PM 0 points [-]

Being a cult is a failure mode for a group like this. Discussing failure modes has some importance.

There's also the issue that cults are powerful, and speaking of LessWrong as a cult implies that LessWrong has a certain power.

Comment author: Tenoke 29 October 2013 02:37:24PM 4 points [-]

Being a cult is a failure mode for a group like this. Discussing failure modes has some importance.

A really unlikely failure mode. The cons of discussing whether we are a cult outweigh the pros in my book - especially when it is discussed all the time.

Comment author: Ritalin 29 October 2013 01:04:56PM 0 points [-]

I believe we are a cult. The best cult in the world. The one whose beliefs work. Otherwise, we're the same: an unusual cause, a charismatic ideological leader, and, what distinguishes us from a school of philosophy or even a political party, an eschatology to worry about; an end-of-the-world scenario. Unlike other cults, though, we wish to prevent or at least minimize the damage of that scenario, while most of them are enthusiastic about hastening it. For a cult, we're also extremely loose on rules to follow; we don't ask people to cast off material possessions (though we encourage donations) or to cut ties with old family and friends (it can end up happening because of de-religion-ing, but that's an unfortunate side effect, and it's usually avoidable).

I could list off a few more traits, but the gist of it is this; we share a lot of traits with a cult, most of which are good or double-edged at worst, and we don't share most of the common bad traits of cults. Regardless of whether one chooses to call us a cult or not, this does not change what we are.

Comment author: Tenoke 29 October 2013 01:24:14PM *  3 points [-]

You are using a very loose definition of a cult. Surely you know that 'cult' carries some different (negative) connotations for other people?

Regardless of whether one chooses to call us a cult or not, this does not change what we are.

It might not change what we are but it has some negative consequences. People like you who call us a cult while using a different meaning of 'cult' turn new members away because they hear that LessWrong is a cult and they don't hear your different meaning of the word (which excludes most of the negative traits of bloody cults).

Comment author: JDelta 29 October 2013 01:14:41PM 0 points [-]

Yes, I get definite cultist vibes from some members. A cult is basically a small organization whose members hold that their beliefs make them superior (in one or more ways) to others, with an added implication of social tightness, shared activities, and internal slang that is difficult for outsiders to understand. Many LW people often appear to behave like this.

Comment author: Tenoke 29 October 2013 01:24:56PM *  1 point [-]

You too are using an even looser definition of a cult. Surely you know that 'cult' carries some different (negative) connotations for other people?

Comment author: JDelta 29 October 2013 01:33:37PM 0 points [-]

I never stated LW is a cult. It clearly isn't. It does however have at least several, possibly many, members who appear to think about LW in the way many cult members think of their cult.

Comment author: RichardKennaway 29 October 2013 02:14:30PM 3 points [-]

Observe the progression:

Yes, I get definite cultist vibes from some members.

...

Many LW people often appear to behave like this.

...

It does however have at least several, possibly many, members who appear to think about LW in the way many cult members think of their cult.

At this point, are you saying anything at all?

Comment author: TheOtherDave 29 October 2013 03:06:43PM 1 point [-]

Which members?

Comment author: hyporational 30 October 2013 02:04:31PM *  1 point [-]

A med student colleague of mine, a devout Christian, is going to give a lecture on psychosexual development for our small group in a couple of days. She's probably going to sneak in an unknown amount of propaganda. With delicious improbability, there happen to be two transgender med students in our group whom she probably isn't aware of. To this day, relations in our group have been very friendly.

Any tips on how to avoid the apocalypse? Pre-emptive maneuvers are out of the question, I want to see what happens.

ETA: Nothing happened. Caused a significant update.

Comment author: fubarobfusco 30 October 2013 05:04:28PM *  4 points [-]

This sounds like a situation in which some people present may consider some other people's beliefs to be an individual-level existential threat — whether to their identity, to their lives, or to their immortal souls. In other words, the problem is not just that these folks disagree with each other, but that they may feel threatened by one another, and by the propagation of one another's beliefs.

Consider:
"If you convince people of your belief, people are more likely to try to kill me."
"If you convince people of your belief, I am more likely to become corrupted."

We are surprised when a local NAACP leader has a calm meeting with a KKK leader. (But possibly not as surprised as the national NAACP leadership were.)

One framework for dealing with situations like this is called liberalism. In liberalism, we imagine moral boundaries called "rights" around individuals, and we agree that no matter what other beliefs we may arrive at, that it would be wrong to transgress these boundaries. (We imagine individuals, not groups or ideas, as having rights; and that every individual has the same rights, regardless of properties such as their race, sex, sexuality, or religion.)

Agreeing on rights allows us to put boundaries around the effects of certain moral disagreements, which makes them less scary and more peaceful. If your Christian colleague will agree, for instance, that it is wrong to kidnap and torture someone in an effort to change that person's sexual identity, they may be less threatening to the others.

Comment author: Lumifer 30 October 2013 03:12:35PM 0 points [-]

What would constitute an apocalypse? When you say "I want to see what happens" do you mean you want to let the situation develop organically but set certain boundaries, a cap on damages, so to say?

Comment author: hyporational 30 October 2013 04:05:18PM *  0 points [-]

That's exactly what I mean. I'm not directing the situation, but will be participating.

I'd like to confront, and see other people confront, her religious bias, without the result being excessive flame or her being backed into a corner without a chance to even marginally nudge her mind in the right direction. She's smart, will not make explicit religious statements, and will back her claims with cherry-picked research. Naturally the level of mindkill will depend on the other participants too, and I will treat it as a sort of rationality test whether they manage to keep their calm. If they lose it, I guess that's understandable.

I guess I'll be using some version of "agree denotationally, disagree connotationally" a lot.

Comment author: Lumifer 30 October 2013 04:15:23PM 0 points [-]

Are the participants Finnish? I am tempted to start remembering jokes about the volatile and emotional character of Finns... :-)

Comment author: fubarobfusco 28 October 2013 10:12:52PM 1 point [-]

I formerly thought I had a politically-motivated stalker who was going through all my old comments to downvote them.

Now I wonder if I have a stalker who is trying to keep me at ~6000 total, ~200 30-day karma.

Comment author: niceguyanon 28 October 2013 07:24:16AM 1 point [-]

Is there any research suggesting simulated out-of-body experiences (OBEs) (like this) can be used for self-improvement? For example, potential areas of benefit include triggering OBEs to help patients suffering from incorrect body identities, which is exciting.

For some time now, I have had this very strange fascination with OBEs and using them to overcome akrasia. Of course I have no scientific evidence for it, yet I have this strong intuition that makes me believe so. I'll do my best to explain my rationale. Often I get this idea that I can trick myself into doing what I want if I pretend that I am not me, but just someone observing me. This disconnects my body from my identity, so that the real me can control the body me. This gives me motivation to do things for the body me. I am not studying; my body me is studying to level up. I'm not hitting the gym; the body me is hitting the gym to level up. An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying, but by disconnecting my identity from my body, I find that rejection is not as personal, just directed at this avatar that I control. Negative self-conscious thoughts and embarrassment seem to have a lessened impact.

Am I off my rocker?

Comment author: TheOtherDave 28 October 2013 02:58:53PM 4 points [-]

A few potentially relevant observations:

  • The kind of dissociation you talk about here, where I experience my "self" as unrelated to my body, is commonly reported as spontaneously occurring during various kinds of emotional stress. I've had it happen to me many times.

  • It would not be surprising if the same mechanism that leads to spontaneous dissociation in some cases can also lead to the strong intuition that dissociation would be a really good idea.

  • Just because there's a mechanism that leads me to strongly intuit that something would be a really good idea doesn't necessarily mean that it actually would be.

  • All of that said: after my stroke, I experienced a lot of limb-dissociation... my arm didn't really feel like part of me, etc. This did have the advantage you described, where I could tell my arm to keep doing some PT exercise and it would, and yes, my arm hurt, and I sort of felt bad for it, but it's not like it was me hurting, and I knew I'd be better off for doing the exercise. It is indeed a useful trick.

  • I suspect there are healthier ways to get the same effect.

Comment author: ChristianKl 28 October 2013 05:03:22PM 2 points [-]

Do you have experience with OBEs? I personally have limited experience. I'm no expert but I know a bit.

In my experience, the kind of people who have the skill of engaging in out-of-body experiences usually don't get a lot done. It tends to increase akrasia rather than decrease it. If you want to decrease akrasia, associating more with your body is a better strategy than getting outside of it.

An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying but by disconnecting my identity from my body, I would find that rejection is not as personal, just directed at this avatar that I control.

That effect is really there. But you are making a trade: you lose empathy. Ceasing to care about other people means that you can't have genuine relationships.

On the other hand rejections don't hurt as much and you can more easily put yourself into such a situation.

Comment author: NancyLebovitz 29 October 2013 08:19:20AM 0 points [-]

I don't think you're off your rocker, though dissociating at the gym might increase the risk of injury.

I tentatively suggest that you explore becoming comfortable enough in your life that you don't need the hack, but I'm not sure that the hack is necessarily a bad strategy at present.

Comment author: DataPacRat 28 October 2013 01:07:25AM 0 points [-]

MWI gives an interesting edge to an old quote:

"... there are an infinite number of alternate dimensions out there. And somewhere out there you can find anything you might imagine. What I imagine is out there is a bunch of evil characters bent on destroying our time stream!" -- Lord Simultaneous

... does the fact that there's been no obvious contact suggest that the answer to the transdimensional variant of the Fermi paradox is that once you've gone down one leg of the Trousers of Time, there's no way to affect any other leg, no matter how much you try to cheat?

Comment author: Risto_Saarelma 29 October 2013 07:04:17AM 3 points [-]

The Fermi paradox includes us knowing a lot about the density of stuff in the visible universe. You'd expect expansionistic life to populate most of a galaxy in short order, since there are only three dimensions to expand in. The Everett multiverse is a bit bigger. Would you still get a similar expansion model for a difficult-to-discover cheat, or could we end up with effects only observable in a minuscule fraction of all branches, even if a cheat was possible but was difficult enough to discover?

Comment author: niceguyanon 31 October 2013 05:28:48PM *  1 point [-]

Does anyone have any book recommendations on the topic of evidence-based negotiation tactics? I have read Influence (Cialdini), Thinking, Fast and Slow (Kahneman), and The Art of Strategy (Dixit and Nalebuff). These are great books, but I am looking for something with a narrower focus; there are lots of books on Amazon that get good reviews, but I am unsure which one would suit me best.

Comment author: Vaniver 31 October 2013 07:14:26PM 2 points [-]

Getting to Yes is a standard negotiation book; Difficult Conversations seems useful as a supplement for negotiation in non-business contexts (but, as a general communication book, has obvious business applications as well).

Comment author: niceguyanon 31 October 2013 09:50:38PM 1 point [-]

I picked these up per your suggestion. Thanks.

Comment author: NancyLebovitz 30 October 2013 09:00:28AM 1 point [-]

Hofstadter and AI-- trying to understand how people actually think rather than producing brute-force simulations for specific problems.

Comment author: Omid 30 October 2013 01:30:56AM 1 point [-]

Is it rude to buy a treadmill if you live on the second floor of an apartment building?

Comment author: Jayson_Virissimo 30 October 2013 05:42:47AM *  2 points [-]

Buying it? No. Using it while your downstairs-neighbor is home? Yes. A repetitive thumping can make trying to study hellishly difficult (for people sufficiently similar to me).

Comment author: Dorikka 30 October 2013 04:12:23AM 0 points [-]

To the extent that you believe the preferences of the person below you mirror your own, would it annoy you if the person above you started using a treadmill in their apt?

Comment author: Omid 30 October 2013 04:38:57AM 3 points [-]

I don't know what a treadmill upstairs sounds like.

Comment author: Desrtopa 30 October 2013 04:11:04AM 0 points [-]

Possibly; an elliptical machine may be more considerate, as it's less likely to produce noise or impact which will be noticed downstairs.

Comment author: Emily 31 October 2013 09:32:11AM 0 points [-]

Does anyone know of a good online source for reading about general programming concepts? In particular, I'm interested in learning a bit more about pointers and content-addressability, and the Wikipedia material doesn't seem very good. I don't care about the language - ideally I'm looking for a source more general than that.

Comment author: Risto_Saarelma 31 October 2013 10:33:12AM 1 point [-]

Try the r/learnprogramming resource pages: free books, online stuff.

Can't actually name a good general article on pointers. They're the big sticking point for anyone trying to learn C for the first time, but they end up just being this sort of ubiquitous background knowledge everyone takes for granted pretty fast. I did stumble into Learn C the Hard Way, which does get around to pointers.

The C2 wiki is an old site for general programming knowledge. It's old, the navigation is weird, and the pages sometimes devolve into weird arguments where you have no idea who's saying what. But there's interesting opinionated content to find there, where sometimes the opinionators even have some idea what they're talking about. Here's one page on what they have to say about pointers.

Also I'm just going to link this article about soft skills involved in programming, because it's neat.

Comment author: Dorikka 31 October 2013 05:36:24AM 0 points [-]

Has anyone read Daniel Goleman's new book? Opinions?

Comment author: patrickmclaren 30 October 2013 01:42:22PM 0 points [-]

Could anyone provide me with some rigorous mathematical references on statistical hypothesis testing and Bayesian decision theory? I am not an expert in this area and am not aware of the standard texts. So far I have found:

  • Statistical Decision Theory and Bayesian Analysis - Berger
  • Bayesian and Frequentist Regression Methods - Wakefield

Currently, I am leaning towards purchasing Berger's book. I am looking for texts similar in style and content to those of Springer's GTM series. It looks like the Springer Series in Statistics may be sufficient.

Comment author: lukeprog 30 October 2013 08:59:22PM 4 points [-]

Berger is highly technical, not much of an introduction.

On Bayesian statistics, Bayesian Data Analysis is a classic.

"Bayesian decision theory" usually just means "normal decision theory," so you could start with my FAQ. Though when decision theory is taught from a statistics book rather than an economics book, they use slightly different terminology, e.g. they set things up with a loss function rather than a utility function. For an intro to decision theory from the Bayesian statistics angle, Introduction to Statistical Decision Theory is pretty thorough, and more accessible than Berger.
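The loss-versus-utility distinction is purely terminological; as a sketch (this is the standard textbook setup, not anything specific to the books above), the statistics framing and the economics framing pick the same action whenever the loss is the negative of the utility:

```latex
% Statistics framing: choose the action minimizing posterior expected loss
a^* = \arg\min_{a} \; \mathbb{E}_{\theta \mid x}\!\left[ L(\theta, a) \right]

% Economics framing: choose the action maximizing expected utility
a^* = \arg\max_{a} \; \mathbb{E}_{\theta \mid x}\!\left[ U(\theta, a) \right]

% The two coincide when L(\theta, a) = -U(\theta, a)
```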

Comment author: patrickmclaren 30 October 2013 10:35:02PM 0 points [-]

Great, thank you very much for the references. I am now reading your FAQ before moving onto the texts, I'll post any comments I have there.