As [my team] analyzed the [smallpox] genome, we became concerned about several matters.
The first was whether the government... should allow us to publish our sequencing and analysis... Before the HIV epidemic, the smallpox variola virus had been responsible for the loss of more human life throughout history than all other infectious agents combined...
I eventually found myself in the National Institutes of Health... together with government officials from various agencies, including the Department of Defense. The group was very understandably worried about the open publication of the smallpox genome data. Some of the more extreme proposals included classifying my research and creating a security fence around my new institute building. It is unfortunate that the discussion did not progress to develop a well-thought-out long-term strategy. Instead the policy that was adopted was determined by the politics of the Cold War. As part of a treaty with the Soviet Union, which had been dissolved at the end of 1991, a minor strain of smallpox was being sequenced in Russia, while we were sequencing a major strain. Upon learning that the Russians were preparing to publish their genome data, I was urged by the government to rush our study to completion so that it would be published first, ending any intelligent discussion.
Unlike the earlier, expedient thinking about smallpox, there was a very deliberate review of the implications of our [later] synthetic-virus work by the Bush White House. After extensive consultations and research I was pleased that they came down on the side of open publication of our synthetic phi X174 genome and associated methodology... The study would eventually appear in Proceedings of the National Academy of Sciences on December 23, 2003. One condition of publication from the government that I approved of was the creation of a committee with representatives from across government to be called the National Science Advisory Board for Biosecurity (NSABB), which would focus on biotechnologies that had dual uses.
And later:
Long before we finally succeeded in creating a synthetic genome, I was keen to carry out a full ethical review of what this accomplishment could mean for science and society. I was certain that some would view the creation of synthetic life as threatening, even frightening. They would wonder about the implications for humanity, health, and the environment. As part of the educational efforts of my institute I organized a distinguished seminar series at the National Academy of Sciences, in Washington, D.C., that featured a great diversity of well-known speakers, from Jared Diamond to Sydney Brenner. Because of my interest in bioethical issues, I also invited Arthur Caplan, then at the Center for Bioethics at the University of Pennsylvania, a very influential figure in health care and ethics, to deliver one of the lectures.
As with the other speakers, I took Art Caplan out to dinner after his lecture. During the meal I said something to the effect that, given the wide range of contemporary biomedical issues, he must have heard it all by this stage of his career. He responded that, yes, basically he had indeed. Had he dealt with the subject of creating new synthetic life forms in the laboratory? He looked surprised and admitted that it had definitely not been a topic he had heard of until I had raised the question. If I gave his group the necessary funding, would he be interested in carrying out such a review? Art was excited about taking on the topic of synthetic life. We subsequently agreed that my institute would fund his department to conduct a completely independent review of the implications of our efforts to create a synthetic cell.
Caplan and his team held a series of working groups and interviews, inviting input from a range of experts, religious leaders, and laypersons...
As I had hoped, the Pennsylvania team seized the initiative when it came to examining the issues raised by the creation of a minimal genome. This was particularly important, in my view, because in this case it was the scientists involved in the basic research and in conceiving the ideas underlying these advances who had brought the issues forward— not angry or alarmed members of the public, protesting that they had not been consulted (although some marginal groups would later make that claim). The authors pointed out that, while the temptation to demonize our work might be irresistible, “the scientific community and the public can begin to understand what is at stake if efforts are made now to identify the nature of the science involved and to pinpoint key ethical, religious, and metaphysical questions so that debate can proceed apace with the science. The only reason for ethics to lag behind this line of research is if we choose to allow it to do so.”
Comment author:Vaniver
28 October 2013 02:48:33PM
6 points
[-]
So, I link to Amazon fairly frequently here, and when I do I use the referral link "ref=nosim?tag=vglnk-c319-20" to kick some money back to MIRI / whoever's paying for LW.
First, is that the right link? Second, what would it take to add that to the "Show help" box so that I don't have to dig it up whenever I want to use it, and others are more likely to use it?
This is done automatically in a somewhat different way. So my advice is not to worry about it. But, yes, it shouldn't hurt and it should help in the situation that viglink doesn't fire. In those comments, Wei Dai agrees that this is the referral code.
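For concreteness, the tag mentioned above attaches to an Amazon product URL as a query parameter. A hypothetical sketch of what such a referral link looks like in Markdown (the ASIN and title here are made up; only the `tag=vglnk-c319-20` part comes from the thread):

```markdown
[Example Book](http://www.amazon.com/dp/B000EXAMPLE/ref=nosim?tag=vglnk-c319-20)
```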
Comment author:Vaniver
28 October 2013 06:35:40PM
1 point
[-]
Part of the reason why I'm asking is because that info might be old. Apparently "ref=nosim" was obsolete two years ago, and I don't know if that's still the right VigLink account, etc.
Comment author:Lumifer
28 October 2013 08:07:50PM
*
4 points
[-]
I should have made more explicit that it's my opinion: "a bad thing from my point of view".
It's a quirk of mine -- I dislike marketing schemes injected into non-marketing contexts, especially if that is not very explicit. It is a mild dislike, not like I'm going to quit a site because of it or even write a rant against Amazon links rewriting.
Yes, I understand that Internet runs on such things. No, it does not make me like it more.
Comment author:shminux
31 October 2013 04:42:52PM
*
5 points
[-]
The Sequences probably contain more material than an undergraduate degree in philosophy, yet there is no easy way for a student to tell if they understood the material properly. Some posts contain an occasional question/koan/meditation which is sometimes answered in the same or a subsequent post, but these are pretty scarce. I wonder if anyone qualified would like to compile a problem set for each topic? Ideally with unambiguous answers.
Comment author:Vaniver
31 October 2013 07:16:42PM
0 points
[-]
I also think this is a worthwhile endeavor, and speculate that the process and results may be useful for development of a general rationality test, which I know CFAR has some interest in.
Comment author:ciphergoth
29 October 2013 04:11:26PM
5 points
[-]
Is Our Final Invention available as any kind of e-book anywhere? I can find it in hardback, but not for Kindle or any kind of ePub. I'm not going to start carrying around a pile of paper in order to read it!
What do you mean by "anywhere"? As Vincent Yu mentions, it is available in the US. It hasn't been published in print or ebook in the UK. When you find it in hardback, those are imports, right? If it is published in the UK, it will probably be available as an ebook, but I don't know if that will happen before the US edition is pirated. If you are generally champing at the bit to read American ebooks, it is worth investing the time to learn whether any ebook sellers fail to check national boundaries. The publisher lists six for this book.
Probably not useful, but the US edition is available in France. (Rights to publish English-language books in countries that don't speak English aren't very valuable, so the monopolies to the US and UK usually include those rights. So if you're in France, you can get the ebook first, regardless of whether it's published in the US or UK. Unless they forget to make it available in France.)
I think it is more likely rejecting you based on being logged in than based on IP, since I can see UK and FR results. Google cache of that link, both at google.com and google.co.uk show me the kindle edition. ($11)
Comment author:Lumifer
28 October 2013 04:00:02AM
5 points
[-]
LW is mostly pure-text with no images except for occasional graphs. Why is that so? Are the reasons technical (due to reddit code), cultural (it's better without images), or historical (it's always been so)?
Comment author:Lumifer
28 October 2013 03:55:57PM
6 points
[-]
A state of affairs which I hope continues.
Ah, a vote for "it's better this way". Why do you prefer pure text? Is it because of the danger of being overrun with cat pictures and blinking gif smileys?
Comment author:hyporational
29 October 2013 03:47:56AM
*
3 points
[-]
Let's take that particular image. It covers a huge block that could have been filled by text otherwise and conveys relatively little information accurately. It disrupts my reading completely for a little while and getting back to the nice flow takes cognitive effort.
This moment I'm reading on my phone and the image fills the whole screen.
It is because text can be copy-pasted and composed easily, since browsers mostly allow selecting any text (this is more difficult in Windows apps).
Whereas images cannot be copy-pasted as simply (mostly you have to find the URL and copy-paste that), and images cannot be composed easily at all (you at least need some picture editor, which often doesn't allow simple copy-paste).
This is the old problem that there is no graphical language. A problem that has evaded GUI designers since the beginning.
Comment author:Lumifer
29 October 2013 05:05:51PM
0 points
[-]
Whereas images cannot be copy-pasted as simply
Um. In Firefox, right-click on the image, select Copy Image. Looks pretty simple to me. Pretty sure it works the same way in Chrome as well.
This is the old problem that there is no graphical language.
I think you're missing the point of images. Their advantage is precisely that they are holistic, a gestalt -- you're supposed to take them in whole and not decompose them into elements.
Sure, if you want to construct a sequential narrative out of symbols, images are the wrong medium.
Um. In Firefox, right-click on the image, select Copy Image.
And how do you insert it into a comment?
Comment author:gwern
28 October 2013 11:38:34PM
5 points
[-]
I'd go with laziness and lack of overt demand. I know that people love graphs and images, but I don't especially feel the need when writing something, and it's additional work (one has to make the image somehow, name it, upload it somewhere, create special image syntax, make sure it's not so big that it spills out of the narrow column allotted to articles, etc.). I can barely bring myself to include images for my own little statistical essays, though I've noticed that my more popular essays seem to include more images.
Comment author:luminosity
28 October 2013 10:02:15AM
5 points
[-]
I haven't tried authoring an article myself, but a quick look now seems to indicate that you can't upload images, only link to them. This means images must be hosted on third parties: you have to upload them there, and if the host is not directly under your control, the images are vulnerable to link rot. It seems like this would be inconvenient.
You can upload images to the LessWrong wiki, and then link them from comments or posts. It's a bit roundabout, but the feature is there. The question is then, should it be made easier?
Comment author:Lumifer
28 October 2013 03:53:01PM
*
0 points
[-]
you can't upload images, only link to them
That's very common in online forums (for the server load reasons) but doesn't seem to stop some forums from being fairly image-heavy. It's not like there is a shortage of free image-hosting sites.
Yes, I understand the inconvenience argument, but the lack of images at LW is pretty stark.
Do you think more people should include graphics in their posts?
Do you think more people should include graphics in their comments?
Do you think the image-heavy forums you mention get some benefit from being image-heavy that we would do well to pursue?
I'll observe that I read your comments on this thread as implicitly recommending more images.
This is of course just my reading, but I figured I'd mention it anyway, in case you are hesitant to make a recommendation for fear of tearing that fence down in ignorance, on the off chance that I'm not entirely unique here.
Comment author:Lumifer
28 October 2013 07:53:32PM
0 points
[-]
I understand where you are coming from (asking why this house is not blue is often perceived as implying that this house should be blue) -- but do you think there's any way to at least tone down this implication without putting in an explicit disclaimer?
Comment author:TheOtherDave
28 October 2013 08:18:49PM
*
1 point
[-]
do you think there's any way to at least tone down this implication without putting in an explicit disclaimer?
Well, if that were my goal, one thing I would try to avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments.
Also, when articulating possible reasons for avoiding X, I would take some care with the emotional connotations of my wording. This is of course difficult, but one easy way to better approximate it is to describe both the pro-X and anti-X positions using the same kind of language, rather than describing just one and leaving the other unmarked.
More generally, asymmetry in how I handle the pro-X and anti-X cases will tend to get read as suggesting partiality; if I want to express impartiality, I would cultivate symmetry.
That said, it's probably easier to just express my preferences as preferences.
Comment author:Lumifer
28 October 2013 08:38:50PM
*
0 points
[-]
avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments
<shrug> I think it's fine. Reasons that people provide might be strong or might be weak -- it's OK to tap on them to see if they would fall down. I would do the same thing to comments which (potentially) said "Yay images, we need more of them!".
In general, I would prefer not to anchor the expectations of the thread participants, but not at the price of interfering with figuring out what the territory actually looks like.
describe both the pro-X and anti-X positions using the same kind of language
I didn't (and still don't) have a position to describe. Summarizing arguments pro and con seemed premature. This really was just a simple open question without a hidden agenda.
Comment author:Mestroyer
28 October 2013 09:05:34PM
0 points
[-]
There's a good chance this is not a "fence", deliberately designed by some agent with us in mind, but a fallen tree that ended up there by accident/laziness.
Comment author:ChristianKl
28 October 2013 05:11:21PM
4 points
[-]
There's a design choice on the part of LessWrong against avatar images. Text is supposed to speak for itself and not be judged by its author. Avatar images would increase author recognition.
Comment author:Mestroyer
28 October 2013 09:07:11PM
2 points
[-]
I think I agree with that. I do read author names, but I read them after I read the text usually. I frequently find myself mildly surprised that I've just upvoted someone I usually downvote, or vice versa.
Some people embed graphics in their articles, and this is seen by many as a good thing. I suspect it's just individuals choosing not to bother with images.
Comment author:gattsuru
28 October 2013 04:18:21PM
2 points
[-]
I'd note that the short help for comments does not list the Markdown syntax for embedding images in comments, and even the "more comment formatting help" page is not especially clear. That LessWrong culture encourages folks to write comments before writing Main or Discussion articles makes that fairly relevant.
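For reference, standard Markdown image syntax looks like the following (a sketch with a made-up URL; whether a given comment field actually renders it depends on the site's Markdown settings):

```markdown
![alt text](http://example.com/figure.png "optional title")
```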
Comment author:lsparrish
28 October 2013 06:18:19PM
*
1 point
[-]
I find it harder to engage in System 2 when there are images around. Heck, even math glyphs usually trip me up. That's not to say graphics can't do more good than harm (for example, charts and diagrams can help cross inferential distance quickly, and may serve as useful intuition pumps) but I imagine that more images would mean more reliance on intuition and less on logic, hence less capacity for taking things to analytical extremes. So it could be harmful (given the nature of the site) to introduce more images.
I like my flow. I don't have anything against images if they are arranged in a way that doesn't disrupt reading. I'm not sure if the LW platform allows for that.
Reading this comment... I suddenly feel very odd about the fact that I failed to include images in my Neuroscience basics for LessWrongians post, in spite of in a couple places saying "an image might be useful here." Though the lack of images was partly due to me having trouble finding good ones, so I won't change it at the moment.
Several months ago I set up a blog for writing intelligent, thought-provoking stuff. I've made two posts to it, and one of those is a photo of a page in Strategy of Conflict, because it hilariously featured the word "retarded". Something has clearly gone wrong somewhere.
I'm pretty sure there are other would-be bloggers on here who experience similar update-discipline issues. Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?
Is this intended as snark, or an actual helpful comment?
Assuming the latter, I have what I consider to be sound motives for maintaining a blog. Unfortunately, I don't have sound habits for maintaining a blog, coupled with a bit of a cold-start problem. I doubt I am the only person in this position, and believe social commitment mechanisms may be a possible avenue for improvement.
Comment author:Vaniver
29 October 2013 05:19:24PM
6 points
[-]
I was going for actual helpful comment. I personally don't have a blog because several attempts to have a blog failed. Afterwards, I was fairly sure that the reason why my blogs failed was because I like conversations too much and monologuing too little. I found that forums both had a reliable stream of content to react to, as well as a somewhat reliable stream of content to build off of. The incentive structure seemed a lot nicer in a number of ways.
More broadly, I think a good habit when plans fail is to ask the question "What information does this failure give me?", rather than the limited question "why did this plan fail me?". Sometimes you should revise the plan to avoid that failure mode; other times you should revise the plan to have entirely different goals.
My immediate practical suggestion is to create a LW draft editing circle. This won't give you the benefits of a blog distinct from LW, but eliminates most of the cold-start problem. It also adds to the potential interest base people who have ideas for posts but who don't have the confidence in their ability to write a post that is socially acceptable to LW (i.e. doesn't break some hidden protocol).
If you have any old material, you could consider posting those to get initial readership, even if you don't consider them especially high quality.
I have what I consider to be sound motives for maintaining a blog.
I'd interpret Vaniver's comment more generally to mean that parts of your brain might disagree with this assessment, and you experience this as procrastination.
Comment author:philh
29 October 2013 11:45:46PM
2 points
[-]
Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?
Yes.
(My current excuse for not even having made one post is that I started to experience wrist pain, and didn't want to make it worse by doing significant typing at home. It seems to be getting better now.)
Comment author:Lumifer
29 October 2013 05:38:46PM
2 points
[-]
Consider your incentives. Actual (non-imaginary) incentives in your current life.
What are the incentives for maintaining a blog? What do you get (again, actually, not supposedly) when you make a post? What are the disincentives? (e.g. will a negative comment spoil your day?) Is there a specific goal you're trying to reach? Is posting to your blog a step on the path to the goal?
Are you requesting answers for my specific case, or just providing me with advice?
(As an observation, which isn't meant to be a hostile response to your comment, people seem very keen to offer advice on LW, even when none has been requested.)
Comment author:lmm
30 October 2013 03:16:58AM
1 point
[-]
Maybe you should consider joining an existing blogging community - livejournal or tumblr or medium? They're good at giving you social prompts to write something.
In retrospect, my previous response to this does seem pretty unwarranted. This was a perfectly reasonable and relevant comment that caught me at a bad time. I'd like to apologise.
OK, I'm not trying to be antagonistic, but I really want to understand where the communication process goes wrong here. What was it about my original comment that seemed like a request for advice?
That's also my suspicion in this case, but does it really seem plausible that I completely abandoned my analysis of the situation at that point? Especially since I go on to explicitly identify it as an update-discipline issue, and make a specific request to address it?
I've been cheerfully posting to LW with moderate frequency for four years now, but over the past few months I've noticed an increased tendency in respondents to offer largely unsolicited advice. I'm fairly sure this is an actual shift in how people respond. It seems unlikely that my style of inquiry has changed, and I don't think I've simply become more sensitive to an existing phenomenon.
Comment author:Emily
30 October 2013 01:13:36PM
3 points
[-]
Maybe the "advice" (or instrumental rationality?) style of post has become more common and this approach to discussion has bled over into the comments? I don't know, I find lmm's comment to read as a perfectly natural response to yours, so perhaps I'm not best placed to analyse the trend you seem to be experiencing.
Comment author:Tenoke
30 October 2013 01:40:09PM
*
0 points
[-]
One possible explanation is that you are just getting more responses (and thus more advice-based responses) because the Open Threads (and maybe Discussion in general) have more active users. Or maybe the users are more keen to participate in discussions and giving advice is the easiest way to do so.
It might help if you start.... (just kidding, I'm making a mental note not to give you advice unless you specifically ask for it from now on)
Retracted because you haven't asked for an opinion on the reason as to why you are getting advice either.
In this context, the discussion is about receiving unnecessary advice, so I think speculating on why this is happening is entirely reasonable.
To illustrate why it's annoying, it may help to provide the most extreme example to date. A couple of months ago I made a post on the open thread about how having esoteric study pursuits can be quite isolating, and how maintaining hobbies and interests that are more accessible to other people can help offset this. I asked for other people's experience with this. Other people's experiences were specifically what I asked for.
Several people read this as "I'm an emotionally-stunted hermit! Please help me!" and proceeded to offer incredibly banal advice on how I, specifically, should try to form connections with other people. When I pointed out that I wasn't looking for advice, one respondent saw fit to tell me that my social retardation was clearly so bad that I didn't realise I needed the advice.
To my mind, asking for advice has a recognisable format in which the asker provides details for the situation they want advice on. If you have to infer those details, the advice you give is probably going to be generic and of limited use. What I find staggering is why so many people skip the process of thinking "well, I can't offer you any good advice unless you give us more deta-...oh, wait, you weren't asking for advice", and just go ahead and offer it up anyway.
Comment author:Moss_Piglet
30 October 2013 03:38:04PM
10 points
[-]
People will leap at any opportunity to give advice, because giving advice a) is extraordinarily cheap b) feels like charity and most importantly c) places the adviser above the advised. It's the same impulse which drives us to pity; we can feel superior in both moral and absolute terms by patronizing others, and unlike charity there is only a negligible cost involved.
I, for example, have just erased a sentence giving you useless advice on how not to get useless advice in a comment to a post talking about how annoying unsolicited useless advice is. That is the level of mind-bending stupidity we're dealing with here.
Can you please use actual words to explain the underlying salience of this video? I see what you're getting at, but I'm pretty sure if you said it explicitly, it would be kind of obnoxious. I would rather you said the obnoxious thing, which I could respond to, than passively post a video with snarky implicit undertones, which I can't.
Comment author:philh
30 October 2013 06:59:59PM
1 point
[-]
I think this isn't entirely fair. You asked what people do to keep themselves relatable to other people. That's not the same as asking for help relating to other people, but it is closer to that than you implied.
Not to say that I think the responses you got were justified, but I don't find them surprising.
What I find staggering is why so many people skip the process of thinking "well, I can't offer you any good advice unless you give us more deta-...oh, wait, you weren't asking for advice", and just go ahead and offer it up anyway.
When you say you find this staggering, do you mean you don't understand why many people do this?
I can speculate as to why people do this, but given my inability to escape the behaviour, I clearly don't understand it very well.
To a certain extent, I'm also surprised that it happens on Less Wrong, which I would credit with above-average reading comprehension skills. Answering the question you want to answer, rather than the question that was asked, is something I'd expect less of here.
Comment author:philh
30 October 2013 02:27:38PM
*
4 points
[-]
There's a pattern of "I have a problem with X, the solution seems to be Y, I need help implementing Y".
Sometimes people ask this without considering other solutions; then it can be helpful to point out other solutions. Sometimes people ask this after considering and rejecting lots of other solutions; then it can be annoying to point out other solutions. Unfortunately it's not always easy for someone answering to tell which is which.
Edit because concrete examples are good: I just came across this SO post, which doesn't answer the question asked or the question I searched for, but it was my preferred solution to the problem I actually had.
Maybe that's a description of the other responses, but lmm is not suggesting an alternative to Y, but an alternate path to Y. I think sixes and sevens's response is ridiculous.
Comment author:[deleted]
01 November 2013 03:46:21AM
0 points
[-]
If I wanted to update a blog regularly, I would consider it imperative to put "update my blog" as a repeating item in my to-do list. For me, relying on memory is an atrocious way to ensure that something gets done; having a to-do list is enormously more effective.
Comment author:VincentYu
29 October 2013 02:23:44AM
4 points
[-]
Smith et al. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo.
Abstract (emphasis mine):
Neuronal dendrites are electrically excitable: they can generate regenerative events such as dendritic spikes in response to sufficiently strong synaptic input. Although such events have been observed in many neuronal types, it is not well understood how active dendrites contribute to the tuning of neuronal output in vivo. Here we show that dendritic spikes increase the selectivity of neuronal responses to the orientation of a visual stimulus (orientation tuning). We performed direct patch-clamp recordings from the dendrites of pyramidal neurons in the primary visual cortex of lightly anaesthetized and awake mice, during sensory processing. Visual stimulation triggered regenerative local dendritic spikes that were distinct from back-propagating action potentials. These events were orientation tuned and were suppressed by either hyperpolarization of membrane potential or intracellular blockade of NMDA (N-methyl-D-aspartate) receptors. Both of these manipulations also decreased the selectivity of subthreshold orientation tuning measured at the soma, thus linking dendritic regenerative events to somatic orientation tuning. Together, our results suggest that dendritic spikes that are triggered by visual input contribute to a fundamental cortical computation: enhancing orientation selectivity in the visual cortex. Thus, dendritic excitability is an essential component of behaviourally relevant computations in neurons.
Comment author:[deleted]
30 October 2013 02:48:00PM
*
12 points
[-]
Silicon Valley's Ultimate Exit, a speech at Startup School 2013 by Balaji Srinivasan. He opens with the statement that America is the Microsoft of nations, goes into a discussion on Voice, Exit, and good governance and continues with the wonderful observation that:
"There’s four cities that used to run the United States in the postwar era: Boston with higher ed; New York City with Madison Avenue, books, Wall Street, and newspapers; Los Angeles with movies, music, Hollywood; and, of course, DC with laws and regulations, formally running it."
He names this the Paper Belt, and claims the Valley has been unintentionally dumping horse heads in all of their beds for the past 20 years. I would call it The Cathedral and note the NYT does not approve of this kind of talk.
I love this speech, but I suspect it's overoptimistic. I believe that bitcoin will be illegal as soon as it's actually needed.
Still, I appreciate his appreciation of immigration/emigration. I'm convinced that immigration/emigration gets less respect than staying and fighting because it's less dramatic, less likely to get people killed, and more likely to work.
Comment author:Lumifer
07 November 2013 03:58:51PM
3 points
[-]
I believe that bitcoin will be illegal as soon as it's actually needed.
That is likely, but note that torrenting Lady Gaga's mp3s is also illegal and yet I have absolutely zero difficulty in finding such torrents on the 'net.
And consequently it has a much more complicated information structure than torrents do. :) But this aside, while you can likely run the Bitcoin economy as such, if Bitcoins cannot be exchanged for dollars or directly for goods and services, they are worthless; and this is a bottleneck where a government has a lot of infrastructure to insert itself. I suggest that, if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents: It won't be impossible, but it'll be much more difficult than setting up a free client and clicking "download".
Comment author:Lumifer
07 November 2013 05:04:35PM
2 points
[-]
if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents
The differences between the physical and the virtual worlds are very relevant here.
Silk Road was blatantly illegal and it took the authorities years to bust its operator, a US citizen. Once similar things are run by, say, Malaysian Chinese out of Dubai with hardware scattered across the world, the cost for the US authorities to combat them would be... unmanageable.
Comment author:drethelin
28 October 2013 06:51:15PM
11 points
What probability should I assign to being completely wrong and brainwashed by Lesswrong? What steps would one take to get more actionable information on this topic? For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them? Am I going to burn in counter factual hell for even asking?
In "The Inertia of Fear and the Scientific Worldview", by the Russian computer scientist and Soviet-era dissident Valentin Turchin, in the chapter "The Ideological Hierarchy", Soviet ideology was analyzed as having four levels: philosophical level (e.g. dialectical materialism), socioeconomic level (e.g. social class analysis), history of Soviet Communism (the Party, the Revolution, the Soviet state), and "current policies" (i.e. whatever was in Pravda op-eds that week).
According to Turchin, most people in the USSR regarded the day-to-day propaganda as empty and false, but a majority would still have agreed with the historical framework, for lack of any alternative view; and the number who explicitly questioned the philosophical and socioeconomic doctrines would be exceedingly small. (He appears to not be counting religious people here, who numbered in the tens of millions, and who he describes as a separate ideological minority.)
BaconServ writes that "LessWrong is the focus of LessWrong", though perhaps the idea would be more clearly expressed as, LessWrong is the chief sacred value of LessWrong. You are allowed to doubt the content, you are allowed to disdain individual people, but you must consider LW itself to be an oasis of rationality in an irrational world.
I read that and thought, meh, this is just the sophomoric discovery that groupings formed for the sake of some value have to value themselves too; the Omohundro drive to self-protection, at work in a collective intelligence rather than in an AI. It also overlooks the existence of ideological minorities who think that LW is failing at rationality in some way, but who hang around for various reasons.
However, these layered perspectives - which distinguish between different levels of dissent - may be useful in evaluating the ways in which one has incorporated LW-think into oneself. Of course, Less Wrong is not the Soviet Union; it's a reddit clone with meetups that recruits through fan fiction, not a territorial superpower with nukes and spies. Any search for analogies with Turchin's account should look for differences as well as similarities. But the general idea, that one may disagree with one level of content but agree with a higher level, is something to consider.
Comment author:hyporational
29 October 2013 03:16:07AM
8 points
Could people list philosophy-oriented internet forums with a high concentration of smart people and no significant memetic overlap, so that one could test this? I don't know any, and I think it's dangerous.
Comment author:Lumifer
29 October 2013 12:23:44AM
4 points
What probability should I assign to being completely wrong and brainwashed by Lesswrong?
Wrong about what? Different subjects call for different probabilities.
The probability that Bayes' theorem is wrong is vanishingly small. The probability that the UFAI risk is completely overblown is considerably higher.
LW "ideology" is an agglomeration in the sense that accepting (or not) a part of it does not imply acceptance (or rejection) of other parts. One can be a good Bayesian, not care about UFAI, and be signed up for cryonics -- no logical inconsistencies here.
Comment author:TheOtherDave
28 October 2013 07:05:20PM
3 points
What steps would one take to get more actionable information on this topic?
I'd suggest starting by reading up on "brainwashing" and developing a sense of what signs characterize it (and, indeed, if it's even a thing at all).
For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?
Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
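The updating point above can be sketched numerically. All of the priors and likelihoods here are invented purely for illustration; only the shape of the calculation matters:

```python
# Toy Bayesian update -- every number below is made up for illustration.
def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

prior = 0.05  # assumed prior that the groupthink accusation is correct

# One visitor's accusation, treated as weak evidence: somewhat more
# likely if the accusation is true (0.5) than if it's false (0.3).
one_report = update(prior, 0.5, 0.3)  # nudges the posterior above 0.05

# A dozen visitors repeating the same theory from the same observations
# contribute no new evidence, so there is no further update:
same_evidence_repeated = one_report

print(one_report)              # slightly above the prior
print(same_evidence_repeated)  # unchanged -- repetition isn't evidence
```

The point of the sketch: the posterior moves when genuinely independent evidence arrives, not when the same observation is echoed.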
Comment author:shminux
28 October 2013 07:18:11PM
1 point
Note that your suggestions are all within the framework of the "accepted LW wisdom". The best you can hope for is to detect some internal inconsistencies in this framework. One's best chance of "deconversion" is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that "worked" for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.
Brainwashing (which is one thing drethelin asked about the probability of) is not an LW concept, particularly; I'm not sure how reading up on it is remaining inside the "accepted LW wisdom."
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I'm being brainwashed. And, yes, if I conclude that it's likely that I'm being brainwashed, there are various deconversion techniques I can use to negate that.
Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.
Being completely wrong, admittedly (the other thing drethelin asked about the probability of), doesn't lend itself to this approach so well... it's hard to know where to even start there.
Comment author:ChristianKl
28 October 2013 09:39:32PM
2 points
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I'm being brainwashed.
Reading up on brainwashing can mean reading gwern's essay, which concludes that brainwashing doesn't really work. Of course, that's exactly what someone who wants to brainwash you would tell you, wouldn't they?
Sure. I'm not exactly sure why you'd choose to interpret "read up on brainwashing" in this context as meaning "read what a member of the group you're concerned about being brainwashed by has to say about brainwashing," but I certainly agree that it's a legitimate example, and it has exactly the failure mode you imply.
Comment author:Nornagest
28 October 2013 09:57:59PM
0 points
For what it's worth, gwern's findings are consistent with mine (see this thread). I'd rather restrict "brainwashing" to coercive persuasion, i.e. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It's difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism -- more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (i.e. converting to a fiancee's religion) are much more common -- but if you read between the lines they seem to be higher.
Deprogramming techniques aren't much better, incidentally -- from everything I've read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn't apply most of them to yourself, and wouldn't want to in any case.
Comment author:[deleted]
28 October 2013 07:30:57PM
10 points
The first thing you should probably do is narrow down what specifically you feel like you may be brainwashed about. I posted some possible sample things below. Since you mention Messianic groupthink as a specific concern, some of these will relate to Yudkowsky, and some of them are Less Wrong versions of cult related control questions. (Things that are associated with cultishness in general, just rephrased to be Less Wrongish)
Do you/Have you:
1: Signed up for Cryonics.
2: Aggressively donated to MIRI.
3: Check for updates on HPMOR more often than Yudkowsky said there would be, on the off chance he updated early.
4: Gone to meetups.
5: Gone out of your way to see Eliezer Yudkowsky in person.
6: Spend time thinking, when not on Less Wrong: "That reminds me of Less Wrong/Eliezer Yudkowsky."
7: Played an AI Box experiment with money on the line.
8: Attempted to engage in a quantified self experiment.
9: Cut yourself off from friends because they seem irrational.
10: Stopped consulting other sources outside of Less Wrong.
11: Spent money on a product recommended by someone with high Karma (Example: Metamed)
12: Tried to recruit other people to Less Wrong and felt negatively if they declined.
13: Written rationalist fanfiction.
14: Decided to become polyamorous.
15: Feel as if you have sinned any time you receive even a single downvote.
16: Gone out of your way to adopt Less Wrong styled phrasing in dialogue with people that don't even follow the site.
For instance, after reviewing that list, I increased my certainty I was not brainwashed by Less Wrong because there are a lot of those I haven't done or don't do, but I also know which questions are explicitly cult related, so I'm biased. Some of these I don't even currently know anyone on the site who would say yes to them.
I'm in the process of doing 1, have maybe done 2 depending on your definition of aggressively (made only a couple donations, but largest was ~$1000), and done 4.
Oh, and 11, I got Amazon Prime on Yvain's recommendation, and started taking melatonin on gwern's. Both excellent decisions, I think.
And 14, sort of. I once got talked into a "polyamorous relationship" by a woman I was sleeping with, no connection whatsoever to LessWrong. But mostly I just have casual sex and avoid relationships entirely.
Comment author:DanielLC
30 October 2013 03:59:39AM
1 point
Am I going to burn in counter factual hell for even asking?
You'd be crazy not to ask. The views of people on this site are suspiciously similar. We might agree because we're more rational than most, but you'd be a fool to reject the alternative hypothesis out of hand. Especially since they're not mutually exclusive.
What steps would one take to get more actionable information on this topic?
Use culture to contrast with culture. Avoid being a man of a single book and get familiar with some past or present intellectual traditions that are distant from the LW cultural cluster. Try to get a feel for the wider cultural map and how LW fits into it.
Seriously though, I'd love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I've seen some examples already, but more is good.
Comment author:JDelta
29 October 2013 01:57:57PM
0 points
The biggest weakness, in my opinion, with purely (or almost purely) probabilistic reasoning is the fact that it cannot ultimately do away with us relying on a number of (ultimately faith/belief based) choices as to how we understand our reality.
The existence of the past and future (and within most people's reasoning systems, the understanding of these as linear) are both ultimately postulations that are generally accepted at face value, as well as the idea that consciousness/awareness arises from matter/quantum phenomena not vice versa.
The biggest weakness, in my opinion, with purely (or almost purely) probabilistic reasoning is the fact that it cannot ultimately do away with us relying on a number of (ultimately faith/belief based) choices as to how we understand our reality.
In your opinion, is there some other form of reasoning that avoids this weakness?
Comment author:JDelta
29 October 2013 11:46:05PM
0 points
That's a very complicated question but I'll try to do my best to answer.
In many ancient cultures, they used two words for the mind, or for thinking, and it is still used figuratively today. "In my heart I know..."
In my opinion, in terms of expected impact on the course of life for a given subject, generally, more important than their understanding of Bayesian reasoning, is what they 'want' ... how they define themselves. Consciously, and unconsciously.
For "reasoning", no I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (is everyone else p-zombies? am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one's "presumptive model" of reality and of themselves (both strictly intertwined psychologically) should be perfected with just as much effort (if not more) as we spend perfecting our probabilistic reasoning.
Probabilities can't cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.
When one is confident, and subconsciously/instinctively aware that they are doing what they should be doing, thinking how they should be thinking, that their 'foundation' is solid (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.) they then can be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.
Those instinctive presumptions, and life-defining self image do have a strong quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.
Maximise your own effectiveness. Perfect how your mind works, how you think of yourselves and others (again, instinctive opinions, gut feelings, more than conscious thought, although conscious thought is extremely important). Then when you start teaching it and filling it with data you'll make a lot fewer mistakes.
I'd say you should assign a very high probability for your beliefs being aligned in the direction LessWrong's are, even in cases where such beliefs are wrong. It's just how the human brain and human society works; there's no getting around it. However, how much of that alignment is due to self-selection bias (choosing to be a part of LessWrong because you are that type of person) or brainwashing is a more difficult question.
For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?
As long as the number is small, I wouldn't update at all, because I already expect slow trickle of those people on my current information, so seeing that expectation confirmed isn't new evidence. If LW achieved a Scientology-like place in popular opinion, though, I'd be worried.
Am I going to burn in counter factual hell for even asking?
Comment author:Vaniver
30 October 2013 06:02:09PM
3 points
Eliezer posted to Facebook:
In My Little Pony: Friendship is Signaling, Twilight Sparkle and her companions defeat Nightmare Moon by using the Elements of Cynicism to prove to her that she doesn't really care about darkness.
My stab at it. I'm probably going to post it to FIMFiction in a day or so, but it's basically a first draft at this point and could doubtless use editing / criticism.
Since I'm not sure whether this advice would be welcome in a recent discussion, I'm just going to start cold by describing something which has worked for me.
In an initial post, I explain what kind of advice I'm looking for, and I'm specific about preferring advice from people who've gotten improvement in [specific situation]. I normally say other advice is welcome, but you'd be amazed how little of it I get.
I believe it's important to head off unwanted advice early. I can't remember whether I normally put my limiting request at the beginning or end of a post, but I think it helps if you can keep your commenters from becoming a mutually reinforcing advice-giving crowd.
I suggest that starting by being specific about what you do and don't want is (among other things) an assertion of status, and this has some effects on the advice-giving dynamic.
I normally do want advice from people who've had appropriate experience. Has anyone tried being clear at the beginning that they don't want advice?
In my social circle, explicitly tagging posts as "I'm not looking for advice" seems to work pretty well at discouraging advice. I don't do it often myself though.
And you're right, of course, that it is among other things an assertion of status, though of course it's also a useful piece of explicit information.
Comment author:Vaniver
30 October 2013 01:51:24PM
2 points
Steve Sailer on the Trolley Problem: [1] and [2]. Basically, to what degree does the unwillingness of people in the thought experiment to push the fat man reflect the realization that pushing the fat man is an inherently riskier prospect than pulling a lever?
“Throw the switch or not” is a natural choice actually presented by real conditions – switches imply choices by definition. “Push the fat man or don’t” isn’t a natural choice presented by real conditions – it’s a scenario concocted for an experiment. By definition, those cannot be the only options in the universe. And our brains can tell.
It seems to me that what characterizes the people who choose the “logical” answer – push the fat man – is not that they gave a less-emotional response but that they gave a less-intuitive, less-gestalt-based response. They were willing to accept the conditions of the problem as given without question. That’s a response to authority – they are turning off the part of their brains that feels the situation as a real one, and sticking with the part of the brain that reasons from unquestionable givens to undeniable conclusions.
There’s a place for that kind of response – but I would argue that answering questions of great moral import is emphatically not that place. Indeed, from the French Revolution to the Iraq War, modernity is littered with the corpses of those whose deaths were logically necessary for some hypothesized outcome that could not actually have been known with remotely the necessary level of certainty. In that regard, I suspect an aversion to following logic problems to fatal conclusions is not merely a kind of moral appendix handed down from our Stone Age ancestors, but remains positively adaptive.
Comment author:Tenoke
29 October 2013 04:06:06PM
2 points
I don't think that 'Eliezer's book' refers to HPMOR. I think it is more likely that he is asking about the book based on the Sequences (for which this is probably the most recent thread).
Comment author:JDelta
29 October 2013 11:14:38PM
0 points
Complete is a strong word that I should have qualified. Mastery is a better word. Control over it. Where your emotions bend to the will of your rational mind, not vice versa.
Don't limit yourself without reason. As humans we are agents of change in an incredibly complex, chaotic system (society). Mastering emotional control allows us to become much more effective agents. Someone half as smart but with twice the self control in every area can easily beat the more intelligent opponent. Not every time, but it is a massive advantage. http://scienceblogs.com/cognitivedaily/2005/12/14/high-iq-not-as-good-for-you-as/
I didn't say that it's predictable, or that it's super easy, but it's not particularly difficult; it only takes a few months of committing a few hours a week to bring a lifetime of reward.
I'm surprised that as a "rationalist" you suggest mastery of the emotions may not be desirable. Awareness of one's emotions, sure. But letting them dictate your actions in any way, why? Be rational.
And without mastery of one's emotional state (that is, the experiential drag of depression, the impulsive actions of rage, the hurtful actions of uncontrolled lust, etc.), one is at a disadvantage in almost any situation.
Comment author:JDelta
20 December 2013 01:59:25PM
0 points
Ultimately it's just a matter of choosing to feel a certain way. Find things you really like (for me, sex and cigarettes), then just do EVERYTHING to get that in every situation you're in. Most people chase money. Break the mould. Chase something else. Realize that you can get everything you want if you know how to play the situation right. If you'd like to PM me we can discuss a 'training' programme that matches your lifestyle perfectly. (Context: I used to be a NLP trainer/dating coach)
That's an example of one method of switching your mindset completely. Ultimately, many mindsets can be imagined and then enjoyed by the individual, if so chosen. It's primarily a matter of self-will.
The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:
Codecademy. For learning computer languages (Ruby, Python, PHP, and others).
Duolingo. For learning the major Indo-European languages (English, German, French, Italian, Portuguese and Spanish).
Khan Academy. For learning maths. They also teach several other disciplines, but they offer mostly videos with only a few exercises.
Memrise. For memorizing stuff, especially vocabulary. The courses vary in quality; the ones on Mandarin Chinese are excellent.
Comment author:Emile
28 October 2013 06:20:57PM
1 point
I've been using Anki daily these past two or three months, and regularly-but-not-quite-daily maybe a year before that. I use it for a fair amount of different things (code, psychology, languages, ...). I recommend it, though it's not really "gamified".
Not sure where this goes: how can I submit an article to discussion? I've written it and saved it as a draft, but I haven't figured out a way to post it.
Thank you! One more - how much karma do I need? I was under the impression one needed 2 to post to discussion (20 to main), but presumably this is not the case. Is there an up to date list?
Comment author:Tenoke
29 October 2013 11:26:33AM
6 points
Am I the only person getting more and more annoyed by the cult thing? If the whole 'LessWrong is a cult' thing is not a meme that's spreading just because people are jumping on the bandwagon, then I don't know what is. Can you seriously not tell? Additionally, from my POV it seems like people starting 'are we a cult' threads/conversations do it mainly for signaling purposes.
Also, I bet new members wouldn't usually even think about whether we are a cult or not if older members were not talking about it like it is a real possibility all the bloody time. (and yes I know, the claim is not made only by people who are part of the community)
Comment author:Mestroyer
29 October 2013 04:27:54PM
7 points
It especially annoys me when people respond to evidence-based arguments that LessWrong is not a cult with, "Well where did you come to believe all that stuff about evidence, LessWrong?"
Before LessWrong, my epistemology was basically a more clumsy version of what it is now. If you described my present self to my past self, and said "Is this guy a cult victim?" he would ask for evidence. He wouldn't be thinking in terms of Bayes's theorem, but he would be thinking with a bunch of verbally expressed heuristics and analogies that usually added up to the same thing. I used to say things like "Absence of evidence is actually evidence of absence, but only if you would expect to see the evidence if the thing was true and you've checked for the evidence," which I was later delighted to see validated and formalized by probability theory.
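That heuristic is easy to check with toy numbers (all invented here for illustration): if you'd expect to see the evidence when the hypothesis is true, then failing to see it after looking must count against the hypothesis.

```python
# "Absence of evidence is evidence of absence" -- a minimal check,
# with every probability below invented for illustration.
def posterior(prior, p_e_given_h, p_e_given_not_h, evidence_seen):
    """P(H | observation) via Bayes' theorem; the observation is either
    seeing the evidence E, or checking and not seeing it."""
    if evidence_seen:
        l_h, l_not = p_e_given_h, p_e_given_not_h
    else:  # likelihood of the *absence* of E under each hypothesis
        l_h, l_not = 1 - p_e_given_h, 1 - p_e_given_not_h
    return (prior * l_h) / (prior * l_h + (1 - prior) * l_not)

prior = 0.5
# Suppose we'd expect to see the evidence 80% of the time if H were
# true, but only 20% of the time if H were false. Then looking and
# finding nothing pushes the posterior below the prior:
after_absence = posterior(prior, 0.8, 0.2, evidence_seen=False)
print(after_absence)  # below 0.5: absence counted against H
```

Note the qualifier from the quote does real work here: if the evidence were equally likely either way, its absence would move the posterior not at all.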
You could of course say, "Well, that's not actually your past self, that's your present self (the cult victim)'s memories, which are distorted by mad thinking," but then you're getting into brain-in-a-vat territory. I have to think using some process. If that process is wrong but unable to detect its own wrongness, I'm screwed. Adding infinitely recursive meta-doubt to the process just creates a new one to which the same problem applies.
I'm not particularly worried that my epistemology is completely wrong, because the pieces of my epistemology, when evaluated by my epistemology, appear to do what they're supposed to. I can see why they would do what they're supposed to by simulating how they would work, and they have a track record of doing what they're supposed to. There may be other epistemologies that would evaluate mine as wrong. But they are not my epistemology, so I don't believe what they recommend me to believe.
This is what someone with a particular kind of corrupt epistemology (one that was internally consistent) would say. But it is also the best anyone with an optimal epistemology could say. So why Mestroyer::should my saying it be cause for concern? (this is an epistemic "Mestroyer::should")
I can identify with this. Reading through the sequences wasn't a magical journey of enlightenment, it was more "Hey, this is what I thought as well. I'm glad Eliezer wrote all this down so that I don't have to."
I believe that one of the reasons people are boring and/or irritating is that they don't know good ways of getting attention. However, being clever or reassuring or whatever might adequately repay attention isn't necessarily easy. Could it be made easier?
I wonder how far a community interested in solving the "boring/irritating people" problem could get by creating a forum whose stated purpose was to respond in an engaged, attentive way to anything anyone posts there. It could be staffed by certified volunteers who were trained in techniques of nonviolent communication and committed to continuing to engage with anyone who posted there, for as long as they chose to keep doing so, and nobody but staff would be permitted to reply to posters.
Perhaps giving them easier-to-obtain attention will cause them to leave other forums where attention requires being clever or reassuring or similarly difficult valuable things.
I'm inclined to doubt it, though.
I am somewhat tangentially reminded of a "suicide hotline" (more generally, a "call us if you're having trouble coping" hotline) where I went to college, which had come to the conclusion that they needed to make it more okay to call them, get people in the habit of doing so, so that people would use their service when they needed it. So they explicitly started the campaign of "you can call us for anything. Help on your problem sets. The Gross National Product of Kenya. The average mass of an egg. We might not know, but you can call us anyway." (This was years before the Web, let alone Google, of course.)
Comment author:shminux
29 October 2013 04:30:53PM
0 points
You say "cult" like it's a bad thing.
Seriously though, using a term with negative connotations is not a rational approach to begin with. Like asking "is this woman a slut?". It presumes that a higher-than-average number of sexual partners is necessarily bad or immoral. Back to the cult thing: why does this term have a derogatory connotation? Says wikipedia:
In the mass media, and among average citizens, "cult" gained an increasingly negative connotation, becoming associated with things like kidnapping, brainwashing, psychological abuse, sexual abuse and other criminal activity, and mass suicide. While most of these negative qualities usually have real documented precedents in the activities of a very small minority of new religious groups, mass culture often extends them to any religious group viewed as culturally deviant, however peaceful or law abiding it may be.
Secular cult opponents like those belonging to the anti-cult movement tend to define a "cult" as a group that tends to manipulate, exploit, and control its members. Specific factors in cult behavior are said to include manipulative and authoritarian mind control over members, communal and totalistic organization, aggressive proselytizing, systematic programs of indoctrination, and perpetuation in middle-class communities.
Some of the above clearly does not apply ("kidnapping"), and some clearly does ("systematic programs of indoctrination, and perpetuation in middle-class communities" -- CFAR workshops, Berkeley rationalists, meetups). Applicability of other descriptions is less clear. Do the Sequences count as brainwashing? Does the (banned) basilisk count as psychological abuse?
Matching of LW activities and behaviors to those of a cult (a New Religious Movement is a more neutral term) does not answer the original implicit accusation: that becoming affiliated, even informally, with LW/CFAR/MIRI is a bad thing, for some definition of "bad". It is this definition of badness that is worth discussing first, when a cult accusation is hurled, and only then whether a certain LW pattern is harmful in this previously defined way.
Comment author:Tenoke
29 October 2013 02:37:24PM
4 points
Being a cult is a failure mode for a group like this. Discussing failure modes has some importance.
A really unlikely failure mode. The cons of discussing whether we are a cult outweigh the pros in my book - especially when it is discussed all the time.
Comment author:Ritalin
29 October 2013 01:04:56PM
0 points
I believe we are a cult. The best cult in the world. The one whose beliefs work. Otherwise, we're the same; an unusual cause, a charismatic ideological leader, and, what distinguishes us from a school of philosophy or even a political party, is that we have an eschatology to worry about; an end-of-the-world scenario. Unlike other cults, though, we wish to prevent or at least minimize the damage of that scenario, while most of them are enthusiastic about hastening it. For a cult, we're also extremely loose on rules to follow, we don't ask people to cast off material possessions (though we encourage donations) or to cut ties with old family and friends (it can end up happening because of de-religion-ing, but that's an unfortunate side-effect, and it's usually avoidable).
I could list off a few more traits, but the gist of it is this; we share a lot of traits with a cult, most of which are good or double-edged at worst, and we don't share most of the common bad traits of cults. Regardless of whether one chooses to call us a cult or not, this does not change what we are.
Comment author:Tenoke
29 October 2013 01:24:14PM
3 points
You are using a very loose definition of a cult. Surely you know that 'cult' carries some different (negative) connotations for other people?
Regardless of whether one chooses to call us a cult or not, this does not change what we are.
It might not change what we are but it has some negative consequences. People like you who call us a cult while using a different meaning of 'cult' turn new members away because they hear that LessWrong is a cult and they don't hear your different meaning of the word (which excludes most of the negative traits of bloody cults).
Comment author:JDelta
29 October 2013 01:14:41PM
0 points
[-]
Yes, I get definite cultist vibes from some members. A cult is basically an organization of a small number of members who hold that their beliefs make them superior (in one or more ways) to others, with an added implication of social tightness, shared activities, and internal slang that is difficult for outsiders to understand. Many LW people often appear to behave like this.
Comment author:JDelta
29 October 2013 01:33:37PM
0 points
I never stated LW is a cult. It clearly isn't. It does however have at least several, possibly many, members who appear to think about LW in the way many cult members think of their cult.
Comment author:hyporational
30 October 2013 02:04:31PM
1 point
A med student colleague of mine, a devout Christian, is going to give a lecture on psychosexual development for our small group in a couple of days. She's probably going to sneak in an unknown amount of propaganda. With delicious improbability, there happen to be two transgender med students in our group whom she probably isn't aware of. To this day, relations in our group have been very friendly.
Any tips on how to avoid the apocalypse? Pre-emptive maneuvers are out of the question, I want to see what happens.
ETA: Nothing happened. Caused a significant update.
Comment author:fubarobfusco
30 October 2013 05:04:28PM
4 points
This sounds like a situation in which some people present may consider some other people's beliefs to be an individual-level existential threat — whether to their identity, to their lives, or to their immortal souls. In other words, the problem is not just that these folks disagree with each other, but that they may feel threatened by one another, and by the propagation of one another's beliefs.
Consider:
"If you convince people of your belief, people are more likely to try to kill me."
"If you convince people of your belief, I am more likely to become corrupted."
One framework for dealing with situations like this is called liberalism. In liberalism, we imagine moral boundaries called "rights" around individuals, and we agree that no matter what other beliefs we may arrive at, that it would be wrong to transgress these boundaries. (We imagine individuals, not groups or ideas, as having rights; and that every individual has the same rights, regardless of properties such as their race, sex, sexuality, or religion.)
Agreeing on rights allows us to put boundaries around the effects of certain moral disagreements, which makes them less scary and more peaceful. If your Christian colleague will agree, for instance, that it is wrong to kidnap and torture someone in an effort to change that person's sexual identity, they may be less threatening to the others.
Comment author:Lumifer
30 October 2013 03:12:35PM
0 points
What would constitute an apocalypse? When you say "I want to see what happens" do you mean you want to let the situation develop organically but set certain boundaries, a cap on damages, so to say?
Comment author:hyporational
30 October 2013 04:05:18PM
0 points
That's exactly what I mean. I'm not directing the situation, but will be participating.
I'd like to confront, and see other people confront, her religious bias, without the result being excessive flame or her being backed into a corner without a chance to even marginally nudge her mind in the right direction. She's smart, will not make explicit religious statements, and will back her claims with cherry-picked research. Naturally the level of mindkill will depend on the other participants too, and I will treat it as some sort of rationality test whether they manage to keep their calm. If they lose it, I guess it's understandable.
I guess I'll be using lots of some version of "agree denotationally, disagree connotationally".
Comment author:niceguyanon
28 October 2013 07:24:16AM
1 point
Is there any research suggesting that simulated out-of-body experiences (OBEs) (like this) can be used for self-improvement? For example, potential areas of benefit include triggering OBEs to help patients suffering from incorrect body identities, which is exciting.
For some time now, I have had this very strange fascination with OBEs and with using them to overcome akrasia. Of course I have no scientific evidence for it, yet I have a strong intuition that makes me believe so. I'll do my best to explain my rationale. Often I get the idea that I can trick myself into doing what I want if I pretend that I am not me but just someone observing me. This disconnects my body from my identity, so that the real me can control the body me. This gives me motivation to do things for the body me. I am not studying; my body me is studying to level up. I'm not hitting the gym; the body me is hitting the gym to level up. An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying, but by disconnecting my identity from my body, I find that rejection is not as personal, just directed at this avatar that I control. Negative self-conscious thoughts and embarrassment seem to have a lessened impact.
The kind of dissociation you talk about here, where I experience my "self" as unrelated to my body, is commonly reported as spontaneously occurring during various kinds of emotional stress. I've had it happen to me many times.
It would not be surprising if the same mechanism that leads to spontaneous dissociation in some cases can also lead to the strong intuition that dissociation would be a really good idea.
Just because there's a mechanism that leads me to strongly intuit that something would be a really good idea doesn't necessarily mean that it actually would be.
All of that said: after my stroke, I experienced a lot of limb-dissociation... my arm didn't really feel like part of me, etc. This did have the advantage you described, where I could tell my arm to keep doing some PT exercise and it would, and yes, my arm hurt, and I sort of felt bad for it, but it's not like it was me hurting, and I knew I'd be better off for doing the exercise. It is indeed a useful trick.
I suspect there are healthier ways to get the same effect.
Comment author:ChristianKl
28 October 2013 05:03:22PM
2 points
Do you have experience with OBEs? I personally have limited experience. I'm no expert but I know a bit.
In my experience, the kind of people who have the skills for engaging in out-of-body experiences usually don't get a lot done. It tends to increase akrasia rather than decrease it. If you want to decrease akrasia, associating more with your body is a better strategy than getting outside of it.
An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying but by disconnecting my identity from my body, I would find that rejection is not as personal, just directed at this avatar that I control.
That effect is really there. But you are making a trade: you lose empathy. Ceasing to care about other people means that you can't have genuine relationships.
On the other hand rejections don't hurt as much and you can more easily put yourself into such a situation.
I don't think you're off your rocker, though dissociating at the gym might increase the risk of injury.
I tentatively suggest that you explore becoming comfortable enough in your life that you don't need the hack, but I'm not sure that the hack is necessarily a bad strategy at present.
Comment author:DataPacRat
28 October 2013 01:07:25AM
0 points
MWI gives an interesting edge to an old quote:
"... there are an infinite number of alternate dimensions out there. And somewhere out there you can find anything you might imagine. What I imagine is out there is a bunch of evil characters bent on destroying our time stream!" -- Lord Simultaneous
... does the fact that there's been no obvious contact suggest that the answer to the transdimensional variant of the Fermi paradox is that once you've gone down one leg of the Trousers of Time, there's no way to affect any other leg, no matter how much you try to cheat?
The Fermi paradox includes us knowing a lot about the density of stuff in the visible universe. You'd expect expansionistic life to populate most of a galaxy in short order since there are only the three dimensions to expand in. The Everett multiverse is a bit bigger. Would you still get a similar expansion model for a difficult to discover cheat, or could we end up with effects only observable in a minuscule fraction of all branches even if a cheat was possible, but was difficult enough to discover?
Comment author:niceguyanon
31 October 2013 05:28:48PM
1 point
Does anyone have any book recommendations on the topic of evidence-based negotiation tactics? I have read Influence (Cialdini), Thinking, Fast and Slow (Kahneman), and The Art of Strategy (Dixit and Nalebuff). These are great books, but I am looking for something with a narrower focus; there are lots of books on Amazon that get good reviews, but I am unsure which one would suit me best.
Comment author:Vaniver
31 October 2013 07:14:26PM
2 points
Getting to Yes is a standard negotiation book; Difficult Conversations seems useful as a supplement for negotiation in non-business contexts (but, as a general communication book, has obvious business applications as well).
Buying it? No. Using it while your downstairs-neighbor is home? Yes. A repetitive thumping can make trying to study hellishly difficult (for people sufficiently similar to me).
Comment author:Dorikka
30 October 2013 04:12:23AM
0 points
To the extent that you believe the preferences of the person below you mirror your own, would it annoy you if the person above you started using a treadmill in their apt?
Comment author:Emily
31 October 2013 09:32:11AM
0 points
Does anyone know of a good online source for reading about general programming concepts? In particular, I'm interested in learning a bit more about pointers and content-addressability, and the Wikipedia material doesn't seem very good. I don't care about the language - ideally I'm looking for a source more general than that.
Can't actually name a good general article on pointers. They're the big sticking point for anyone trying to learn C for the first time, but they end up just being this sort of ubiquitous background knowledge everyone takes for granted pretty fast. I did stumble into Learn C the Hard Way, which does get around to pointers.
The C2 wiki is an old site for general programming knowledge. It's old, the navigation is weird, and the pages sometimes devolve into weird arguments where you have no idea who's saying what. But there's interesting opinionated content to find there, where sometimes the opinionators even have some idea what they're talking about. Here's one page on what they have to say about pointers.
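Since the linked resources are long reads, here's a minimal sketch of the two operations that trip people up: taking an address with `&` and dereferencing with `*`. (The function names `set_to_42` and `nth` are made up for illustration; they aren't from any of the linked sources.)

```c
#include <assert.h>

/* A pointer is just a variable that stores the address of another
   object; dereferencing it (*p) reads or writes that object. */

/* Writes 42 through the pointer, modifying the caller's variable. */
void set_to_42(int *p) {
    *p = 42;
}

/* Returns the i-th element via pointer arithmetic: an array name
   "decays" to a pointer to its first element when passed around. */
int nth(const int *a, int i) {
    return *(a + i);   /* equivalent to a[i] */
}
```

So after `int x = 7; set_to_42(&x);` the variable `x` holds 42, because the function wrote through the address it was given rather than working on a copy. That "writing through an address" step is the thing the C2 arguments are mostly about.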
Also I'm just going to link this article about soft skills involved in programming, because it's neat.
Could anyone provide me with some rigorous mathematical references on Statistical Hypothesis Testing and Bayesian Decision Theory? I am not an expert in this area and am not aware of the standard texts. So far I have found
Statistical Decision Theory and Bayesian Analysis - Berger
Bayesian and Frequentist Regression Methods - Wakefield
Currently, I am leaning towards purchasing Berger's book. I am looking for texts similar in style and content to those of Springer's GTM series. It looks like the Springer Series in Statistics may be sufficient.
"Bayesian decision theory" usually just means "normal decision theory," so you could start with my FAQ. Though when decision theory is taught from a statistics book rather than an economics book, they use slightly different terminology, e.g. they set things up with a loss function rather than a utility function. For an intro to decision theory from the Bayesian statistics angle, Introduction to Statistical Decision Theory is pretty thorough, and more accessible than Berger.
From Venter's new book:
And later:
So, I link to Amazon fairly frequently here, and when I do I use the referral link "ref=nosim?tag=vglnk-c319-20" to kick some money back to MIRI / whoever's paying for LW.
First, is that the right link? Second, what would it take to add that to the "Show help" box so that I don't have to dig it up whenever I want to use it, and others are more likely to use it?
This is done automatically in a somewhat different way, so my advice is not to worry about it. But, yes, it shouldn't hurt, and it should help in the situation where VigLink doesn't fire. In those comments, Wei Dai agrees that this is the referral code.
Part of the reason why I'm asking is because that info might be old. Apparently "ref=nosim" was obsolete two years ago, and I don't know if that's still the right VigLink account, etc.
I thought LW automatically added affiliate links using VigLink already.
Maybe it would even make sense to let the forum automatically format links to Amazon that way.
I believe this is supposed to happen already, but have not tested it.
That would be a bad thing.
Why?
I should have made more explicit that it's my opinion: "a bad thing from my point of view".
It's a quirk of mine -- I dislike marketing schemes injected into non-marketing contexts, especially if that is not very explicit. It is a mild dislike, not like I'm going to quit a site because of it or even write a rant against Amazon links rewriting.
Yes, I understand that Internet runs on such things. No, it does not make me like it more.
It isn't a marketing scheme. It's a monetization scheme.
(Marketing is presenting products for sale. Monetization is finding ways to extract revenue from previously non-revenue-generating activity.)
(And no, the Internet doesn't run on marketing or monetization. A few of your favorite Internet services probably do, though; but probably not all.)
I accept the correction.
I have another quirk: I dislike being monetized.
The Sequences probably contain more material than an undergraduate degree in philosophy, yet there is no easy way for a student to tell if they understood the material properly. Some posts contain an occasional question/koan/meditation, which is sometimes answered in the same or a subsequent post, but these are pretty scarce. I wonder if anyone qualified would like to compile a problem set for each topic? Ideally with unambiguous answers.
I also think this is a worthwhile endeavor, and speculate that the process and results may be useful for development of a general rationality test, which I know CFAR has some interest in.
Is Our Final Invention available as any kind of e-book anywhere? I can find it in hardback, but not for Kindle or any kind of ePub. I'm not going to start carrying around a pile of paper in order to read it!
What do you mean by "anywhere"? As Vincent Yu mentions, it is available in the US. It hasn't been published in print or ebook in the UK. When you find it in hardback, it's imports, right? If it is published in the UK, it will probably be available as an ebook, but I don't know if that will happen before the US edition is pirated. If you are generally chomping at the bit to read American ebooks, it is worth investing the time to learn whether any ebook sellers fail to check national boundaries. The publisher lists six for this book.
Probably not useful, but the US edition is available in France. (Rights to publish English-language books in countries that don't speak English aren't very valuable, so the monopolies for the US and UK usually include those rights. So if you're in France, you can get the ebook first, regardless of whether it's published in the US or UK. Unless they forget to make it available in France.)
I see a Kindle edition on Amazon.
That page only shows me a price for the hardcover version. I wonder if it's because I have a UK IP address? How much is the Kindle version?
I can see it from Finland, lists the price as $16 for me.
I think it is more likely rejecting you based on being logged in than based on IP, since I can see UK and FR results. Google cache of that link, both at google.com and google.co.uk show me the kindle edition. ($11)
You could send it to http://1dollarscan.com/ and still read it on your Kindle.
LW is mostly pure-text with no images except for occasional graphs. Why is that so? Are the reasons technical (due to reddit code), cultural (it's better without images), or historical (it's always been so)?
I think most people are unaware that they can include images in comments.
A state of affairs which I hope continues.
Ah, a vote for "it's better this way". Why do you prefer pure text? Is it because of the danger of being overrun with cat pictures and blinking gif smileys?
Let's take that particular image. It covers a huge block that could otherwise have been filled by text, and conveys relatively little information accurately. It disrupts my reading completely for a little while, and getting back to a nice flow takes cognitive effort.
This moment I'm reading on my phone and the image fills the whole screen.
It is because text can be copy-pasted and composed easily, since browsers mostly allow selecting any text (this is more difficult in Windows apps).
Images, by contrast, cannot be copy-pasted as simply (mostly you have to find the URL and copy-paste that), and images cannot be composed easily at all (you at least need some picture editor, which often doesn't allow simple copy-paste).
This is the old problem that there is no graphical language. A problem that has evaded GUI designers since the beginning.
Um. In Firefox, right-click on the image, select Copy Image. Looks pretty simple to me. Pretty sure it works the same way in Chrome as well.
I think you're missing the point of images. Their advantage is precisely that they are holistic, a gestalt -- you're supposed to take them in whole and not decompose them into elements.
Sure, if you want to construct a sequential narrative out of symbols, images are the wrong medium.
And how do you insert it into a comment?
That may be true of some images but not all.
I'd go with laziness and lack of overt demand. I know that people love graphs and images, but I don't especially feel the need when writing something, and it's additional work (one has to make the image somehow, name it, upload it somewhere, create special image syntax, and make sure it's not so big that it'll spill out of the narrow column allotted to articles, etc.). I can barely bring myself to include images for my own little statistical essays, though I've noticed that my more popular essays seem to include more images.
I haven't tried authoring an article myself, but a quick look now seems to indicate that you can't upload images, only link to them. This means images must be hosted by third parties, meaning you have to upload them there, and if the host is not directly under your control, the image is vulnerable to link rot. It seems like this would be inconvenient.
You can upload images to the LessWrong wiki, and then link them from comments or posts. It's a bit roundabout, but the feature is there. The question is then, should it be made easier?
I haven't tried it, but just knowing that it requires logging in to the wiki, I know that it's way too hard and I'll probably use imgur instead.
That's very common in online forums (for the server load reasons) but doesn't seem to stop some forums from being fairly image-heavy. It's not like there is a shortage of free image-hosting sites.
Yes, I understand the inconvenience argument, but the lack of images at LW is pretty stark.
Do you think more people should include graphics in their posts?
Do you think more people should include graphics in their comments?
Do you think the image-heavy forums you mention get some benefit from being image-heavy that we would do well to pursue?
I am hesitant to put forward a recommendation. I don't know yet, and approach this as a Chesterton's Fence.
That's fair.
I'll observe that I read your comments on this thread as implicitly recommending more images.
This is of course just my reading, but I figured I'd mention it anyway if you are hesitant to make a recommendation for fear of tearing that fence down in ignorance, on the off chance that I'm not entirely unique here.
I understand where you are coming from (asking why this house is not blue is often perceived as implying that this house should be blue) -- but do you think there's any way to at least tone down this implication without putting in an explicit disclaimer?
Well, if that were my goal, one thing I would try to avoid is getting into a dynamic where I ask people why they avoid X, and then when they provide some reasons I reply with counterarguments.
Another thing I would try to avoid is not questioning comments which seem to support doing X, for example by pointing out that it's easy to do, but questioning comments which seem to challenge those comments.
Also, when articulating possible reasons for avoiding X, I would take some care with the emotional connotations of my wording. This is of course difficult, but one easy way to better approximate it is to describe both the pro-X and anti-X positions using the same kind of language, rather than describing just one and leaving the other unmarked.
More generally, asymmetry in how I handle the pro-X and anti-X cases will tend to get read as suggesting partiality; if I want to express impartiality, I would cultivate symmetry.
That said, it's probably easier to just express my preferences as preferences.
<shrug> I think it's fine. Reasons that people provide might be strong or might be weak -- it's OK to tap on them to see if they would fall down. I would do the same thing to comments which (potentially) said "Yay images, we need more of them!".
In general, I would prefer not to anchor the expectations of the thread participants, but not at the price of interfering with figuring out what the territory actually looks like.
I didn't (and still don't) have a position to describe. Summarizing arguments pro and con seemed premature. This really was just a simple open question without a hidden agenda.
All right.
I read them this way too.
There's a good chance this is not a "fence", deliberately designed by some agent with us in mind, but a fallen tree that ended up there by accident/laziness.
There's a design choice on the part of LessWrong against avatar images. Text is supposed to speak for itself and not be judged by its author. Avatar images would increase author recognition.
I think I agree with that. I do read author names, but I read them after I read the text usually. I frequently find myself mildly surprised that I've just upvoted someone I usually downvote, or vice versa.
And yet names are visually quite distinct. I find authorship much more obvious here than on HN.
Some people embed graphics in their articles, and this is seen by many as a good thing. I suspect it's just individuals choosing not to bother with images.
I'd note that the short help for comments does not list the Markdown syntax for embedding images in comments, and even the "more comment formatting help" page is not especially clear. That LessWrong culture encourages folks to write comments before writing Main or Discussion articles makes that fairly relevant.
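For anyone who gets this far without finding that help page: in standard Markdown (which, as far as I know, LW comments use) an image is embedded like this, where the URL is just a placeholder:

```markdown
![alt text describing the image](http://example.com/image.png)
```

That is, it's the same as link syntax `[text](url)`, with a leading `!`, and the bracketed text becomes the alt text rather than a visible label.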
Why shouldn't it be?
I find it harder to engage System 2 when there are images around. Heck, even math glyphs usually trip me up. That's not to say graphics can't do more good than harm (for example, charts and diagrams can help cross inferential distance quickly, and may serve as useful intuition pumps), but I imagine that more images would mean more reliance on intuition and less on logic, hence less capacity for taking things to analytical extremes. So it could be harmful (given the nature of the site) to introduce more images.
I like my flow. I don't have anything against images if they are arranged in a way that doesn't disrupt reading. I'm not sure if the LW platform allows for that.
Reading this comment... I suddenly feel very odd about the fact that I failed to include images in my Neuroscience basics for LessWrongians post, in spite of in a couple places saying "an image might be useful here." Though the lack of images was partly due to me having trouble finding good ones, so I won't change it at the moment.
Several months ago I set up a blog for writing intelligent, thought-provoking stuff. I've made two posts to it, and one of those is a photo of a page in Strategy of Conflict, because it hilariously featured the word "retarded". Something has clearly gone wrong somewhere.
I'm pretty sure there are other would-be bloggers on here who experience similar update-discipline issues. Would any of them like to form some loose cabal of blogging spotters, who can egg each other on, suggest topics, provide editorial and stylistic feedback, etc.?
EDIT: ITT: I'm a bit of a dick! Sorry, everyone!
Are you sure the error is that you're posting too little to the blog, rather than that you're trying to have a blog in the first place?
Is this intended as snark, or an actual helpful comment?
Assuming the latter, I have what I consider to be sound motives for maintaining a blog. Unfortunately, I don't have sound habits for maintaining a blog, coupled with a bit of a cold-start problem. I doubt I am the only person in this position, and believe social commitment mechanisms may be a possible avenue for improvement.
I was going for actual helpful comment. I personally don't have a blog because several attempts to have a blog failed. Afterwards, I was fairly sure that the reason why my blogs failed was because I like conversations too much and monologuing too little. I found that forums both had a reliable stream of content to react to, as well as a somewhat reliable stream of content to build off of. The incentive structure seemed a lot nicer in a number of ways.
More broadly, I think a good habit when plans fail is to ask the question "What information does this failure give me?", rather than the limited question "why did this plan fail me?". Sometimes you should revise the plan to avoid that failure mode; other times you should revise the plan to have entirely different goals.
My immediate practical suggestion is to create a LW draft editing circle. This won't give you the benefits of a blog distinct from LW, but it eliminates most of the cold-start problem. It also adds to the potential interest base: people who have ideas for posts but who don't have confidence in their ability to write a post that is socially acceptable to LW (i.e. doesn't break some hidden protocol).
If you have any old material, you could consider posting those to get initial readership, even if you don't consider them especially high quality.
I'd interpret Vaniver's comment more generally to mean that parts of your brain might disagree with this assessment, and you experience this as procrastination.
Yes.
(My current excuse for not even having made one post is that I started to experience wrist pain, and didn't want to make it worse by doing significant typing at home. It seems to be getting better now.)
Consider your incentives. Actual (non-imaginary) incentives in your current life.
What are the incentives for maintaining a blog? What do you get (again, actually, not supposedly) when you make a post? What are the disincentives? (e.g. will a negative comment spoil your day?) Is there a specific goal you're trying to reach? Is posting to your blog a step on the path to the goal?
Are you requesting answers for my specific case, or just providing me with advice?
(As an observation, which isn't meant to be a hostile response to your comment, people seem very keen to offer advice on LW, even when none has been requested.)
Advice, I guess, in the sense that I think these are the questions you'd be interested in knowing the answers to (for yourself, not for posting here).
Count me in if anything comes out of it.
Maybe you should consider joining an existing blogging community - livejournal or tumblr or medium? They're good at giving you social prompts to write something.
In retrospect, my previous response to this does seem pretty unwarranted. This was a perfectly reasonable and relevant comment that caught me at a bad time. I'd like to apologise.
If I wanted to update a blog regularly, I would consider it imperative to put "update my blog" as a repeating item in my to-do list. For me, relying on memory is an atrocious way to ensure that something gets done; having a to-do list is enormously more effective.
I tried translating the sequences. Gave up on the third post.
Recent work suggests that dendrites may be able to do substantial computation themselves. This implies that getting decent uploads or a decent preservation from cryonics may require a more fine-grained approach than is often expected. Unfortunately, the paper itself seems to be not yet online, but it is by the same group which previously suggested that dendrites could be partially responsible for memory storage.
Smith et al. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo.
Abstract (emphasis mine):
Silicon Valley's Ultimate Exit, a speech at Startup School 2013 by Balaji Srinivasan. He opens with the statement that America is the Microsoft of nations, goes into a discussion of Voice, Exit, and good governance, and continues with the wonderful observation that:
He names this the Paper Belt, and claims the Valley has been unintentionally dumping horse heads in all of their beds for the past 20 years. I would call it The Cathedral, and note the NYT does not approve of this kind of talk:
No seriously, that is the very first line.
Transcript
I love this speech, but I suspect it's overoptimistic. I believe that bitcoin will be illegal as soon as it's actually needed.
Still, I appreciate his appreciation of immigration/emigration. I'm convinced that immigration/emigration gets less respect than staying and fighting because it's less dramatic, less likely to get people killed, and more likely to work.
That is likely, but note that torrenting Lady Gaga's mp3s is also illegal and yet I have absolutely zero difficulty in finding such torrents on the 'net.
Maintaining a currency takes a much more complicated information structure than letting people make unlimited copies of something.
What do you mean, "maintaining"? Bitcoin was explicitly designed to function in a distributed manner without the need for any central authority.
And consequently it has a much more complicated information structure than torrents do. :) But this aside, while you can likely run the Bitcoin economy as such, if Bitcoins cannot be exchanged for dollars or directly for goods and services, they are worthless; and this is a bottleneck where a government has a lot of infrastructure to insert itself. I suggest that, if Bitcoins become illegal, buying hard drugs is the better analogy than downloading torrents: It won't be impossible, but it'll be much more difficult than setting up a free client and clicking "download".
The differences between the physical and the virtual worlds are very relevant here.
Silk Road was blatantly illegal and it took the authorities years to bust its operator, a US citizen. Once similar things are run by, say, Malaysian Chinese out of Dubai with hardware scattered across the world, the cost for the US authorities to combat them would be... unmanageable.
What probability should I assign to being completely wrong and brainwashed by LessWrong? What steps would one take to get more actionable information on this topic? For each new visitor who comes in and accuses us of messianic groupthink, how far should I update in the direction of believing them? Am I going to burn in counterfactual hell for even asking?
In "The Inertia of Fear and the Scientific Worldview", by the Russian computer scientist and Soviet-era dissident Valentin Turchin, in the chapter "The Ideological Hierarchy", Soviet ideology was analyzed as having four levels: philosophical level (e.g. dialectical materialism), socioeconomic level (e.g. social class analysis), history of Soviet Communism (the Party, the Revolution, the Soviet state), and "current policies" (i.e. whatever was in Pravda op-eds that week).
According to Turchin, most people in the USSR regarded the day-to-day propaganda as empty and false, but a majority would still have agreed with the historical framework, for lack of any alternative view; and the number who explicitly questioned the philosophical and socioeconomic doctrines would be exceedingly small. (He appears to not be counting religious people here, who numbered in the tens of millions, and who he describes as a separate ideological minority.)
BaconServ writes that "LessWrong is the focus of LessWrong", though perhaps the idea would be more clearly expressed as, LessWrong is the chief sacred value of LessWrong. You are allowed to doubt the content, you are allowed to disdain individual people, but you must consider LW itself to be an oasis of rationality in an irrational world.
I read that and thought, meh, this is just the sophomoric discovery that groupings formed for the sake of some value have to value themselves too; the Omohundro drive to self-protection, at work in a collective intelligence rather than in an AI. It also overlooks the existence of ideological minorities who think that LW is failing at rationality in some way, but who hang around for various reasons.
However, these layered perspectives - which distinguish between different levels of dissent - may be useful in evaluating the ways in which one has incorporated LW-think into oneself. Of course, Less Wrong is not the Soviet Union; it's a reddit clone with meetups that recruits through fan fiction, not a territorial superpower with nukes and spies. Any search for analogies with Turchin's account should look for differences as well as similarities. But the general idea, that one may disagree with one level of content but agree with a higher level, is something to consider.
Or not.
Could people list philosophy-oriented internet forums with a high concentration of smart people and no significant memetic overlap, so that one could test this? I don't know any, and I think that's dangerous.
I would love to see this as well
If Lesswrong were good at brainwashing, I would expect many more people to have signed up for cryonics.
Spend time outside of Lesswrong and discuss with smart people. Don't rely on a single community to give you your map of the world.
Wrong about what? Different subjects call for different probabilities.
The probability that Bayes' theorem is wrong is vanishingly small. The probability that the UFAI risk is completely overblown is considerably higher.
LW "ideology" is an agglomeration in the sense that accepting (or not) a part of it does not imply acceptance (or rejection) of other parts. One can be a good Bayesian, not care about UFAI, and be signed up for cryonics -- no logical inconsistencies here.
I'd suggest starting by reading up on "brainwashing" and developing a sense of what signs characterize it (and, indeed, if it's even a thing at all).
Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
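The point about repeated testimony can be made concrete with a toy odds-form Bayes update. All the numbers below (prior, likelihood ratio) are made-up assumptions, just to show the shape of the argument: accusers who merely repeat the same observations count once, not once each.

```python
def posterior(prior, likelihood_ratio, n_independent):
    """Odds-form Bayes update for n independent pieces of evidence."""
    odds = prior / (1 - prior) * likelihood_ratio ** n_independent
    return odds / (1 + odds)

prior = 0.05  # assumed prior that the accusation is right
lr = 1.5      # assumed likelihood ratio per *independent* accusation

p_five = posterior(prior, lr, 5)        # five independent accusers
p_twelve_corr = posterior(prior, lr, 5) # twelve accusers, same five observations
p_twelve_ind = posterior(prior, lr, 12) # twelve genuinely independent accusers
```

Under these assumptions, twelve correlated accusers move you exactly as far as five, while twelve independent ones move you substantially further.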
Note that your suggestions are all within the framework of the "accepted LW wisdom". The best you can hope for is to detect some internal inconsistencies in this framework. One's best chance of "deconversion" is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that "worked" for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I'm not sure how reading up on it is remaining inside the "accepted LW wisdom."
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I'm being brainwashed. And, yes, if I conclude that it's likely that I'm being brainwashed, there are various deconversion techniques I can use to negate that.
Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.
Being completely wrong, admittedly, (the other thing drethlin asked about the probability of) doesn't lend itself to this approach so well... it's hard to know where to even start, there.
Reading up on brainwashing can mean reading gwern's essay, which concludes that brainwashing doesn't really work. Of course, that's exactly what someone who wants to brainwash you would tell you, wouldn't it?
Sure. I'm not exactly sure why you'd choose to interpret "read up on brainwashing" in this context as meaning "read what a member of the group you're concerned about being brainwashed by has to say about brainwashing," but I certainly agree that it's a legitimate example, and it has exactly the failure mode you imply.
For what it's worth, gwern's findings are consistent with mine (see this thread). I'd rather restrict "brainwashing" to coercive persuasion, i.e. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It's difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism -- more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (i.e. converting to a fiancee's religion) are much more common -- but if you read between the lines they seem to be higher.
Deprogramming techniques aren't much better, incidentally -- from everything I've read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn't apply most of them to yourself, and wouldn't want to in any case.
The first thing you should probably do is narrow down what specifically you feel like you may be brainwashed about. I posted some possible sample things below. Since you mention Messianic groupthink as a specific concern, some of these will relate to Yudkowsky, and some of them are Less Wrong versions of cult-related control questions. (Things that are associated with cultishness in general, just rephrased to be Less Wrongish)
Do you/Have you:
1: Signed up for Cryonics.
2: Aggressively donated to MIRI.
3: Check for updates on HPMOR more often than Yudkowsky said there would be, on the offhand chance he updated early.
4: Gone to meet ups.
5: Went out of your way to see Eliezer Yudkowsky in person.
6: Spend time thinking, when not on Less Wrong: "That reminds me of Less Wrong/Eliezer Yudkowsky."
7: Played an AI Box experiment with money on the line.
8: Attempted to engage in a quantified self experiment.
9: Cut yourself off from friends because they seem irrational.
10: Stopped consulting other sources outside of Less Wrong.
11: Spent money on a product recommended by someone with high Karma (Example: Metamed)
12: Tried to recruit other people to Less Wrong and felt negatively if they declined.
13: Written rationalist fanfiction.
14: Decided to become polyamorous.
15: Feel as if you have sinned any time you receive even a single downvote.
16: Gone out of your way to adopt Less Wrong styled phrasing in dialogue with people that don't even follow the site.
For instance, after reviewing that list, I increased my certainty I was not brainwashed by Less Wrong, because there are a lot of those I haven't done or don't do; but I also know which questions are explicitly cult-related, so I'm biased. For some of these, I don't currently know anyone on the site who would say yes to them.
No
I'm a top 20 donor
Nope
yes.
Not really? That was probably some motivation for going to a mincamp but not most of it.
Nope.
Nope.
a tiny amount? I've tracked weight throughout diet changes.
Not that I can think of. Certainly no one closer than a random facebook friend.
Nope.
I've spent money on Modafinil after it's been recommended on here. I could count Melatonin but my dad told me about that years ago.
Yes.
Nope.
I was in an open relationship before I ever heard of Lesswrong.
HAHAHAHAHAH no.
This one is hard to analyze, I've talked about EM hell and so on outside of the context of Lesswrong. Dunno.
Seriously considering moving to the Bay Area.
Beware: you've created a lesswrong purity test.
I'm in the process of doing 1, have maybe done 2 depending on your definition of aggressively (made only a couple donations, but largest was ~$1000), and done 4.
Oh, and 11, I got Amazon Prime on Yvain's recommendation, and started taking melatonin on gwern's. Both excellent decisions, I think.
And 14, sort of. I once got talked into a "polyamorous relationship" by a woman I was sleeping with, no connection whatsoever to LessWrong. But mostly I just have casual sex and avoid relationships entirely.
You'd be crazy not to ask. The views of people on this site are suspiciously similar. We might agree because we're more rational than most, but you'd be a fool to reject the alternative hypothesis out of hand. Especially since they're not mutually exclusive.
Use culture to contrast with culture. Avoid being a man of a single book and get familiar with some past or present intellectual traditions that are distant from the LW cultural cluster. Try to get a feel for the wider cultural map and how LW fits into it.
Less Wrong has some material on this topic :)
Seriously though, I'd love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I've seen some examples already, but more is good.
The biggest weakness, in my opinion, with purely (or almost purely) probabilistic reasoning is the fact that it cannot ultimately do away with us relying on a number of (ultimately faith/belief based) choices as to how we understand our reality.
The existence of the past and future (and within most people's reasoning systems, the understanding of these as linear) are both ultimately postulations that are generally accepted at face value, as well as the idea that consciousness/awareness arises from matter/quantum phenomena not vice versa.
In your opinion, is there some other form of reasoning that avoids this weakness?
That's a very complicated question but I'll try to do my best to answer.
In many ancient cultures, they used two words for the mind, or for thinking, and it is still used figuratively today. "In my heart I know..."
In my opinion, in terms of expected impact on the course of life for a given subject, generally, more important than their understanding of Bayesian reasoning, is what they 'want' ... how they define themselves. Consciously, and unconsciously.
For "reasoning", no I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (is everyone else p-zombies? am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one's "presumptive model" of reality and of themselves (both strictly intertwined psychologically) should be perfected with just as much effort (if not more) as we spend perfecting our probabilistic reasoning.
Probabilities can't cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.
When one is confident, and subconsciously/instinctively aware that they are doing what they should be doing, thinking how they should be thinking, that their 'foundation' is solid (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.) they then can be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.
Those instinctive presumptions, and life-defining self image do have a strong quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.
Maximise your own effectiveness. Perfect how your mind works, how you think of yourself and others (again, instinctive opinions, gut feelings, more than conscious thought, although conscious thought is extremely important). Then when you start teaching it and filling it with data you'll make far fewer mistakes.
All right. Thanks for clarifying.
I'd say you should assign a very high probability for your beliefs being aligned in the direction LessWrong's are, even in cases where such beliefs are wrong. It's just how the human brain and human society works; there's no getting around it. However, how much of that alignment is due to self-selection bias (choosing to be a part of LessWrong because you are that type of person) or brainwashing is a more difficult question.
As long as the number is small, I wouldn't update at all, because I already expect slow trickle of those people on my current information, so seeing that expectation confirmed isn't new evidence. If LW achieved a Scientology-like place in popular opinion, though, I'd be worried.
No.
Eliezer posted to Facebook:
My stab at it. I'm probably going to post it to FIMFiction in a day or so, but it's basically a first draft at this point and could doubtless use editing / criticism.
Please add the open_thread tag (with the underscore) to the post.
Fixed.
Since I'm not sure whether this advice would be welcome in a recent discussion, I'm just going to start cold by describing something which has worked for me.
In an initial post, I explain what kind of advice I'm looking for, and I'm specific about preferring advice from people who've gotten improvement in [specific situation]. I normally say other advice is welcome, but you'd be amazed how little of it I get.
I believe it's important to head off unwanted advice early. I can't remember whether I normally put my limiting request at the beginning or end of a post, but I think it helps if you can keep your commenters from becoming a mutually reinforcing advice-giving crowd.
I suggest that starting by being specific about what you do and don't want is (among other things) an assertion of status, and this has some effects on the advice-giving dynamic.
I normally do want advice from people who've had appropriate experience. Has anyone tried being clear at the beginning that they don't want advice?
In my social circle, explicitly tagging posts as "I'm not looking for advice" seems to work pretty well at discouraging advice. I don't do it often myself though.
And you're right, of course, that it is among other things an assertion of status, though of course it's also a useful piece of explicit information.
Steve Sailer on the Trolley Problem: [1] and [2]. Basically, to what degree is the unwillingness of people in the thought experiment to attempt to push the fat man the realization that pushing the fat man is an inherently riskier prospect than pulling a lever?
Noah Millman also comments:
I've been out of things for a while; how goes Eliezer's book?
The rationality book?
this is the last I've seen
http://lesswrong.com/lw/i3a/miris_2013_summer_matching_challenge/9gth
http://hpmor.com/notes/progress-13-10-0/
Eliezer says he'll do a progress report on 11/1. I haven't heard any news otherwise.
I don't think that 'Eliezer's book' refers to HPMOR. I think it is more likely that he is asking about the book based on the Sequences (for which this is probably the most recent thread).
Why isn't there a pill that makes a broken heart go away?
Just time is both necessary and sufficient.
There might be eventually.
Something like that was discussed previously. Kevin recommended antidepressants in the comments.
Ask Gwern, he probably knows something that's good enough.
It will get better over time.
They are on it: http://www.academia.edu/2764401/If_I_could_just_stop_loving_you_Anti-love_biotechnology_and_the_ethics_of_a_chemical_breakup
The main problem in learning a new skill is maintaining the required motivation and discipline, especially in the early stages. Gamification deals with this problem better than any of the other approaches I’m familiar with. Over the past few months, I’ve managed to study maths, languages, coding, Chinese characters, and more on a daily basis, with barely any interruptions. I accomplished this by simply taking advantage of the many gamified learning resources available online for free. Here are the sites I have tried and can recommend:
Are you familiar with other good resources not listed above? If so, please mention them in the comments.
(Crossposted to my blog.)
I've been using Anki daily these past two or three months, and regularly-but-not-quite-daily maybe a year before that. I use it for a fair amount of different things (code, psychology, languages, ...). I recommend it, though it's not really "gamified".
Not sure where this goes: how can I submit an article to discussion? I've written it and saved it as a draft, but I haven't figured out a way to post it.
You don't have enough karma to post yet. Consider making some quality comments first.
Thank you! One more - how much karma do I need? I was under the impression one needed 2 to post to discussion (20 to main), but presumably this is not the case. Is there an up to date list?
I think the requirement is currently 5 karma to post to discussion.
Am I the only person getting more and more annoyed by the cult thing? If the whole 'lesswrong is a cult' thing is not a meme that's spreading just because people are jumping on the bandwagon, then I don't know what is. Can you seriously not tell? Additionally, from my POV it seems like people who start 'are we a cult' threads/conversations do it mainly for signaling purposes.
Also, I bet new members wouldn't usually even think about whether we are a cult or not if older members were not talking about it like it is a real possibility all the bloody time. (and yes I know, the claim is not made only by people who are part of the community)
/rant
It especially annoys me when people respond to evidence-based arguments that LessWrong is not a cult with, "Well where did you come to believe all that stuff about evidence, LessWrong?"
Before LessWrong, my epistemology was basically a more clumsy version of what is now. If you described my present self to my past self, and said "Is this guy a cult victim?" he would ask for evidence. He wouldn't be thinking in terms of Bayes's theorem, but he would be thinking with a bunch of verbally expressed heuristics and analogies that usually added up to the same thing. I used to say things like "Absence of evidence is actually evidence of absence, but only if you would expect to see the evidence if the thing was true and you've checked for the evidence," which I was later delighted to see validated and formalized by probability theory.
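The absence-of-evidence heuristic mentioned above is easy to verify numerically. The probabilities below are arbitrary assumptions, chosen only to show the direction of the update: if E is more likely under H than under not-H, then checking for E and not finding it must lower your credence in H.

```python
def update(prior, p_e_given_h, p_e_given_not_h, observed):
    """Posterior P(H) after checking for evidence E and seeing it (or not)."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    if observed:
        return prior * p_e_given_h / p_e
    return prior * (1 - p_e_given_h) / (1 - p_e)

prior = 0.5
p_if_seen = update(prior, 0.8, 0.3, observed=True)    # evidence found: P(H) rises
p_if_absent = update(prior, 0.8, 0.3, observed=False)  # checked, not found: P(H) falls
```

The two posteriors also average back to the prior (weighted by P(E) and P(not-E)), which is the conservation-of-expected-evidence property.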
You could of course say, "Well, that's not actually your past self, that's your present self (the cult victim)'s memories, which are distorted by mad thinking," but then you're getting into brain-in-a-vat territory. I have to think using some process. If that process is wrong but unable to detect its own wrongness, I'm screwed. Adding infinitely recursive meta-doubt to the process just creates a new one to which the same problem applies.
I'm not particularly worried that my epistemology is completely wrong, because the pieces of my epistemology, when evaluated by my epistemology, appear to do what they're supposed to. I can see why they would do what they're supposed to by simulating how they would work, and they have a track record of doing what they're supposed to. There may be other epistemologies that would evaluate mine as wrong. But they are not my epistemology, so I don't believe what they recommend me to believe.
This is what someone with a particular kind of corrupt epistemology (one that was internally consistent) would say. But it is also the best anyone with an optimal epistemology could say. So why Mestroyer::should my saying it be cause for concern? (this is an epistemic "Mestroyer::should")
I can identify with this. Reading through the sequences wasn't a magical journey of enlightenment, it was more "Hey, this is what I thought as well. I'm glad Eliezer wrote all this down so that I don't have to."
Something more than a salon and less than a movement, then... English has a big vocabulary, there must be a good word for it.
What's wrong with 'forum'?
Ah, just like 4Chan then? X-D
I think people also say things just to get conversation going. We need to look at making it easier to find useful ways of getting attention.
Can you expand on this?
I believe that one of the reasons people are boring and/or irritating is that they don't know good ways of getting attention. However, being clever or reassuring or whatever might adequately repay attention isn't necessarily easy. Could it be made easier?
(nods) Thank you.
I wonder how far a community interested in solving the "boring/irritating people" problem could get by creating a forum whose stated purpose was to respond in an engaged, attentive way to anything anyone posts there. It could be staffed by certified volunteers who were trained in techniques of nonviolent communication and committed to continuing to engage with anyone who posted there, for as long as they chose to keep doing so, and nobody but staff would be permitted to reply to posters.
Perhaps giving them easier-to-obtain attention will cause them to leave other forums where attention requires being clever or reassuring or similarly difficult valuable things.
I'm inclined to doubt it, though.
I am somewhat tangentially reminded of a "suicide hotline" (more generally, a "call us if you're having trouble coping" hotline) where I went to college, which had come to the conclusion that they needed to make it more okay to call them, get people in the habit of doing so, so that people would use their service when they needed it. So they explicitly started the campaign of "you can call us for anything. Help on your problem sets. The Gross National Product of Kenya. The average mass of an egg. We might not know, but you can call us anyway." (This was years before the Web, let alone Google, of course.)
When will these rants go meta?
nope http://lesswrong.com/lw/atm/cult_impressions_of_less_wrongsingularity/60ub
You say "cult" like it's a bad thing.
Seriously though, using a term with negative connotations is not a rational approach to begin with. Like asking "is this woman a slut?". It presumes that a higher-than-average number of sexual partners is necessarily bad or immoral. Back to the cult thing: why does this term have a derogatory connotation? Says wikipedia:
Some of the above clearly does not apply ("kidnapping"), and some clearly does ("systematic programs of indoctrination, and perpetuation in middle-class communities" -- CFAR workshops, Berkeley rationalists, meetups). Applicability of other descriptions is less clear. Do the Sequences count as brainwashing? Does the (banned) basilisk count as psychological abuse?
Matching of LW activities and behaviors to those of a cult (a New Religious Movement is a more neutral term) does not answer the original implicit accusation: that becoming affiliated, even informally, with LW/CFAR/MIRI is a bad thing, for some definition of "bad". It is this definition of badness that is worth discussing first, when a cult accusation is hurled, and only then whether a certain LW pattern is harmful in this previously defined way.
Lesswrong is the Rocky Horror of atheist/skeptic groups!
*I say "cult" like it carries negative connotations for most people.
I expanded on what I meant in my reply. Sorry about the ninja edit.
Being a cult is a failure mode for a group like this. Discussing failure modes has some importance.
There is also the issue that cults are powerful, and speaking of Lesswrong as a cult implies that Lesswrong has a certain power.
A really unlikely failure mode. The cons of discussing whether we are a cult outweigh the pros in my book - especially when it is discussed all the time.
I believe we are a cult. The best cult in the world. The one whose beliefs work. Otherwise, we're the same: an unusual cause, a charismatic ideological leader, and, what distinguishes us from a school of philosophy or even a political party, an eschatology to worry about; an end-of-the-world scenario. Unlike other cults, though, we wish to prevent or at least minimize the damage of that scenario, while most of them are enthusiastic in hastening it. For a cult, we're also extremely loose on rules to follow; we don't ask people to cast off material possessions (though we encourage donations) or to cut ties with old family and friends (it can end up happening because of de-religion-ing, but that's an unfortunate side-effect, and it's usually avoidable).
I could list off a few more traits, but the gist of it is this; we share a lot of traits with a cult, most of which are good or double-edged at worst, and we don't share most of the common bad traits of cults. Regardless of whether one chooses to call us a cult or not, this does not change what we are.
You are using a very loose definition of a cult. Surely you know that 'cult' carries some different (negative) connotations for other people?
It might not change what we are but it has some negative consequences. People like you who call us a cult while using a different meaning of 'cult' turn new members away because they hear that LessWrong is a cult and they don't hear your different meaning of the word (which excludes most of the negative traits of bloody cults).
Yes, I get definite cultist vibes from some members. A cult is basically an organization of a small number of members who hold that their beliefs make them superior (in one or more ways) to others, with an added implication of social tightness, shared activities, and internal slang difficult for outsiders to understand. Many LW people often appear to behave like this.
You too are using an even looser definition of a cult. Surely you know that 'cult' carries some different (negative) connotations for other people?
I never stated LW is a cult. It clearly isn't. It does however have at least several, possibly many, members who appear to think about LW in the way many cult members think of their cult.
Observe the progression:
...
...
At this point, are you saying anything at all?
Which members?
A med student colleague of mine, a devout christian, is going to give a lecture on psychosexual development for our small group in a couple of days. She's probably going to sneak in an unknown amount of propaganda. With delicious improbability, there happen to be two transgender med students in our group she probably isn't aware of. To this day, relations in our group have been very friendly.
Any tips on how to avoid the apocalypse? Pre-emptive maneuvers are out of the question, I want to see what happens.
ETA: Nothing happened. Caused a significant update.
This sounds like a situation in which some people present may consider some other people's beliefs to be an individual-level existential threat — whether to their identity, to their lives, or to their immortal souls. In other words, the problem is not just that these folks disagree with each other, but that they may feel threatened by one another, and by the propagation of one another's beliefs.
Consider:
"If you convince people of your belief, people are more likely to try to kill me."
"If you convince people of your belief, I am more likely to become corrupted."
We are surprised when a local NAACP leader has a calm meeting with a KKK leader. (But possibly not as surprised as the national NAACP leadership were.)
One framework for dealing with situations like this is called liberalism. In liberalism, we imagine moral boundaries called "rights" around individuals, and we agree that no matter what other beliefs we may arrive at, that it would be wrong to transgress these boundaries. (We imagine individuals, not groups or ideas, as having rights; and that every individual has the same rights, regardless of properties such as their race, sex, sexuality, or religion.)
Agreeing on rights allows us to put boundaries around the effects of certain moral disagreements, which makes them less scary and more peaceful. If your Christian colleague will agree, for instance, that it is wrong to kidnap and torture someone in an effort to change that person's sexual identity, they may be less threatening to the others.
What would constitute an apocalypse? When you say "I want to see what happens" do you mean you want to let the situation develop organically but set certain boundaries, a cap on damages, so to say?
That's exactly what I mean. I'm not directing the situation, but will be participating.
I'd like to confront and see people confront her religious bias, without the result being excessive flame or her being backed into a corner without a chance to even marginally nudge her mind in the right direction. She's smart, will not make explicit religious statements, and will back her claims with cherry-picked research. Naturally the level of mindkill will depend on other participants too, and I will treat this as some sort of a rationality test if they manage to keep their calm. If they lose it I guess it's understandable.
I guess I'll be using lots of some version of "agree denotationally, disagree connotationally".
Are the participants Finnish? I am tempted to start remembering jokes about the volatile and emotional character of Finns... :-)
I formerly thought I had a politically-motivated stalker who was going through all my old comments to downvote them.
Now I wonder if I have a stalker who is trying to keep me at ~6000 total, ~200 30-day karma.
Is there any research suggesting simulated out-of-body experiences (OBEs) (like this) can be used for self-improvement? For example, potential benefits include triggering OBEs to help patients suffering from incorrect body identities, which is exciting.
For some time now, I have had this very strange fascination with OBE and using it to overcome akrasia. Of course I have no scientific evidence for it, yet I have this strong intuition that makes me believe so. I'll do my best to explain my rationale. Often I get this idea that I can trick myself into doing what I want, if I pretend that I am not me but just someone observing me. This disconnects my body from my identity, so that the real me can control the body me. This gives me motivation to do things for the body me. I am not studying, my body me is studying to level up. I'm not hitting the gym, the body me is hitting the gym to level up. An even more powerful effect is present for social anxiety. Things like public speaking and rejection therapy are terrifying, but by disconnecting my identity from my body, I would find that rejection is not as personal, just directed at this avatar that I control. Negative self-conscious thoughts and embarrassment seem to have a lessened impact.
Am I off my rocker?
A few potentially relevant observations:
The kind of dissociation you talk about here, where I experience my "self" as unrelated to my body, is commonly reported as spontaneously occurring during various kinds of emotional stress. I've had it happen to me many times.
It would not be surprising if the same mechanism that leads to spontaneous dissociation in some cases can also lead to the strong intuition that dissociation would be a really good idea.
Just because there's a mechanism that leads me to strongly intuit that something would be a really good idea doesn't necessarily mean that it actually would be.
All of that said: after my stroke, I experienced a lot of limb-dissociation... my arm didn't really feel like part of me, etc. This did have the advantage you described, where I could tell my arm to keep doing some PT exercise and it would, and yes, my arm hurt, and I sort of felt bad for it, but it's not like it was me hurting, and I knew I'd be better off for doing the exercise. It is indeed a useful trick.
I suspect there are healthier ways to get the same effect.
Do you have experience with OBEs? I personally have limited experience; I'm no expert, but I know a bit.
In my experience, the kind of people who have the skills for out-of-body experiences usually don't get a lot done. It tends to increase akrasia rather than decrease it. If you want to decrease akrasia, associating more with your body is a better strategy than getting outside of it.
That effect is really there, but you are making a trade: you lose empathy. If you stop caring about other people, you can't have genuine relationships.
On the other hand, rejections don't hurt as much, and you can more easily put yourself into such situations.
I don't think you're off your rocker, though dissociating at the gym might increase the risk of injury.
I tentatively suggest that you explore becoming comfortable enough in your life that you don't need the hack, but I'm not sure that the hack is necessarily a bad strategy at present.
MWI gives an interesting edge to an old quote:
"... there are an infinite number of alternate dimensions out there. And somewhere out there you can find anything you might imagine. What I imagine is out there is a bunch of evil characters bent on destroying our time stream!" -- Lord Simultaneous
... does the fact that there's been no obvious contact suggest that the answer to the transdimensional variant of the Fermi paradox is that once you've gone down one leg of the Trousers of Time, there's no way to affect any other leg, no matter how much you try to cheat?
The Fermi paradox relies on our knowing a lot about the density of stuff in the visible universe. You'd expect expansionistic life to populate most of a galaxy in short order, since there are only three dimensions to expand into. The Everett multiverse is a bit bigger. Would you still get a similar expansion model for a difficult-to-discover cheat, or could we end up with effects observable in only a minuscule fraction of all branches, even if a cheat were possible but hard enough to discover?
Does anyone have any book recommendations on the topic of evidence-based negotiation tactics? I have read Influence (Cialdini), Thinking, Fast and Slow (Kahneman), and The Art of Strategy (Dixit and Nalebuff). These are great books, but I am looking for something with a narrower focus. There are lots of books on Amazon that get good reviews, but I am unsure which one would suit me best.
Getting to Yes is a standard negotiation book; Difficult Conversations seems useful as a supplement for negotiation in non-business contexts (but, as a general communication book, has obvious business applications as well).
I picked these up per your suggestion. Thanks.
Hofstadter and AI-- trying to understand how people actually think rather than producing brute-force simulations for specific problems.
Is it rude to buy a treadmill if you live on the second floor of an apartment building?
Buying it? No. Using it while your downstairs-neighbor is home? Yes. A repetitive thumping can make trying to study hellishly difficult (for people sufficiently similar to me).
To the extent that you believe the preferences of the person below you mirror your own, would it annoy you if the person above you started using a treadmill in their apt?
I don't know what a treadmill upstairs sounds like.
Possibly; an elliptical machine may be more considerate, as it's less likely to produce noise or impact which will be noticed downstairs.
Does anyone know of a good online source for reading about general programming concepts? In particular, I'm interested in learning a bit more about pointers and content-addressability, and the Wikipedia material doesn't seem very good. I don't care about the language - ideally I'm looking for a source more general than that.
Try the r/learnprogramming resource pages: free books, online stuff.
Can't actually name a good general article on pointers. They're the big sticking point for anyone trying to learn C for the first time, but they end up just being this sort of ubiquitous background knowledge everyone takes for granted pretty fast. I did stumble into Learn C the Hard Way, which does get around to pointers.
The C2 wiki is an old site for general programming knowledge. The navigation is weird, and the pages sometimes devolve into arguments where you have no idea who's saying what. But there's interesting opinionated content to find there, where sometimes the opinionators even have some idea what they're talking about. Here's one page on what they have to say about pointers.
Also I'm just going to link this article about soft skills involved in programming, because it's neat.
Has anyone read Daniel Goleman's new book? Opinions?
Could anyone provide me with some rigorous mathematical references on Statistical Hypothesis Testing and Bayesian Decision Theory? I am not an expert in this area and am not aware of the standard texts. So far I have found
Currently, I am leaning towards purchasing Berger's book. I am looking for texts similar in style and content to those of Springer's GTM series. It looks like the Springer Series in Statistics may be sufficient.
Berger is highly technical, not much of an introduction.
On Bayesian statistics, Bayesian Data Analysis is a classic.
"Bayesian decision theory" usually just means "normal decision theory," so you could start with my FAQ. Though when decision theory is taught from a statistics book rather than an economic book, they use slightly different terminology, e.g. they set things up with a loss function rather than a utility function. For an intro to decision theory from the Bayesian statistics angle, Introduction to Statistical Decision Theory is pretty thorough, and more accessible than Berger.
Great, thank you very much for the references. I am now reading your FAQ before moving on to the texts; I'll post any comments I have there.