Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open Thread: September 2011

5 Post author: Pavitra 03 September 2011 07:50PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.

Comments (441)

Comment author: Armok_GoB 03 September 2011 08:29:30PM 1 point [-]

I keep running into problems with various versions of what I internally refer to as the "placebo paradox", and can't find a solution that doesn't lead to Regret Of Rationality. Simple example follows:

You have an illness from which you'll either recover or die. Due to the placebo effect/positive thinking, the probability of recovering is exactly half of what you estimate it to be. Before learning this you have 80% confidence in your recovery. Since you estimate 80%, your actual chance is 40%, so you update to this. Since the estimate is now 40%, the actual chance is 20%, so you update to this. Then it's 10%, so you update to that, and so on, until both your estimated and actual chance of recovery are 0. Then you die.

An irrational agent, on the other hand, upon learning this could self delude to 100% certainty of recovery, and have a 50% chance of actually recovering.
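The arithmetic of this death spiral can be sketched in a few lines (a toy model of the example above, using its hypothetical halving rule and 80% starting point, not a claim about real medicine):

```python
# Iterated update from the placebo-paradox example: the true recovery
# probability is always half the current estimate, and a consistent
# agent updates its estimate to match the truth.
def iterate_estimate(initial_estimate, steps):
    estimate = initial_estimate
    history = [estimate]
    for _ in range(steps):
        actual = estimate / 2  # the placebo rule halves the estimated chance
        estimate = actual      # honest update: believe the actual chance
        history.append(estimate)
    return history

trajectory = iterate_estimate(0.8, 10)
# 0.8 -> 0.4 -> 0.2 -> 0.1 -> ...; the only fixed point of e = e/2 is 0.
```

The loop never settles anywhere but zero, which is the whole force of the paradox: consistency and survival pull in opposite directions here.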

This is actually causing me real world problems, such as inability to use techniques based on positive thinking, and a lot of cognitive dissonance.

Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of Dementors.

And to show this isn't JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.

Comment author: Normal_Anomaly 04 September 2011 07:02:32PM 0 points [-]

I think one way to avoid having to call this regret of rationality would be to see optimism as deceiving, not yourself, but your immune system. The fact that the human body acts differently depending on the person's beliefs is a problem with human biology, which should be fixed. If Omega does the same thing to an AI, Omega is harming that AI, and the AI should try to make Omega stop it.

Comment author: RichardKennaway 06 November 2011 05:37:32PM *  1 point [-]

To fully solve this problem requires answering the question of how the placebo effect physically works, which requires answering the question of what a belief physically is, to have that physical effect.

However, no-one yet knows the answers to those questions, which renders all of these logical arguments about as useful as Zeno's proof that arrows cannot move. The problem of how to knowingly induce a placebo response is a physical one, not a logical one. Nature has no paradoxes.

Comment author: [deleted] 06 September 2011 03:16:56AM 1 point [-]

Can you see what an absurdly implausible scenario you must use as a ladder to demonstrate rationality as a liability, rather than as a strike against strict adherence to reality? The fact that we have to stretch so hard to paint it this way further legitimizes the pursuit of rationality.

Comment author: Armok_GoB 06 September 2011 07:58:54AM -1 points [-]

Except I happen to, as far as I can tell, be in that "implausible" scenario IRL, or at least an isomorphic one.

Comment author: handoflixue 09 September 2011 11:34:23PM 1 point [-]

Can you actually describe the scenario you really are in? I can think of ways I'd address a lot of real-world analogues, but none of them are actually isomorphic to the example you gave. The solutions generally rely on the lack of a true isomorphism, too.

Comment author: [deleted] 06 September 2011 03:37:46PM *  2 points [-]

I mean no disrespect for your situation, whatever it may be. I gave this some additional thought. You are saying that you have an illness in which the rate of recovery is increased by fifty percent due to a positive outlook and the placebo effect this mindset produces. Or that an embrace of the facts of your condition leads to an exponential decline at a rate of fifty percent. Is it depression, or some other form of mental illness? If it is, then the cause of death would likely be suicide. I am forced to speculate because you were purposefully vague.

For the sake of argument I will go with my speculative scenario. It is very common for those with bipolar disorder and clinical depression to create a self-reinforcing feedback loop which worsens their situation in the way you have highlighted. But it wouldn't carry the exact percentages of decline you describe (indeed, no illness would decline at that exact rate based merely on the thoughts in the patient's head). But given your claim that the illness declines exponentially, wouldn't the solution be knowledge of this reality? It seems that the delusion has come in the form of accepting that an illness can be treated with positive thinking alone. The illness is made worse by an acceptance not of rationality, but of this unsupported data, which by my understanding is irrational.

I am very skeptical of your scenario, merely because I do not know of any illnesses which carry this level of health decline due to the absence of a placebo. If you have it please tell me what it is as I would like to begin research now.

Comment author: christina 04 September 2011 08:02:44PM *  0 points [-]

If the placebo effect actually worked exactly like that, then yes, you would die while the self-deluded person would do better. However, from personal experience, I highly suspect it doesn't (I have never had anything that I was told I'd be likely to die from, but I believe even minor illnesses give you some nonzero chance of dying). Here is how I would reason in the world you describe:

  1. There is some probability I will get better from this illness, and some probability I will die.

  2. The placebo effect isn't magic, it is a real part of the way the mind interacts with the body. It will also decrease my chances of dying.

  3. I don't want to die.

  4. Therefore I will activate the effect.

  5. To activate the effect for maximum efficiency, I must believe that I will certainly recover.

  6. I have activated the placebo effect. I will recover (Probability: 100%). Max placebo effect achieved!

  7. The world I live in is weird.

In the real world, the above mental gymnastics are not necessary. Think about the things that would make you, personally, feel better during your illness. What makes you feel more comfortable, and less unhappy, when you are ill? For me, the answer is generally a tasty herbal tea, being warm (or cooled down if I'm overheated), and sleeping. If I am not feeling too horrible, I might be up to enjoying a good novel. What would make you feel most comfortable may differ.

However, since both of us enjoy thinking rationally, I doubt spouting platitudes like "I have a 100% chance of recovery! Yay!" is going to make you personally feel better. Get the pain reduction and possibly better immune response of the placebo effect by making yourself more physically and mentally comfortable. When I do these things, I don't think they help me get better because they have some magical ability in and of themselves. I think they will help me get better because of the positive associations I have for them. Hope that helps you in some way.

Comment author: [deleted] 04 September 2011 05:02:00PM 0 points [-]

The scenario you propose does seem inevitably to cause a rational agent to lose. However, it is not realistic, and I can't think of any situations in real life that are like this-- your fate is not magically entangled with your beliefs. Though real placebo effects are still not fully understood, they don't seem to work this way: they may make you feel better, but they don't actually make you better. Merely feeling better could actually be dangerous if, say, you think your asthma is cured and decide to hike down into the Grand Canyon.

Maybe there are situations I haven't thought of where this is a problem, though. Can you give a detailed example of how this paradox obtrudes on your life? I think you might get more useful feedback that way.

Comment author: shokwave 04 September 2011 03:32:05PM *  2 points [-]

Updating on the evidence of yourself updating is almost as much of a problem as updating on the evidence of "I updated on the evidence of myself updating". Tongue-in-cheek!

That is to say, the decision theory you are currently running is not equipped to handle the class of problems where your response to a problem is evidence that changes the nature of the very problem you are responding to - in the same way that arithmetic is not equipped to handle problems requiring calculus or CDT is not equipped to handle Omega's two-box problem.

(If it helps your current situation, placebo effects are almost always static modifiers on your scientific/medical chances of recovery)

Comment author: Torben 04 September 2011 04:34:01AM 3 points [-]

Your model assumes a constant effect in each iteration. Is this justified?

I would envisage a constant chance of recovery and an asymptotically declining estimate of recovery. It seems more realistic, but maybe it's just me?

Comment author: Eliezer_Yudkowsky 04 September 2011 06:58:41AM 11 points [-]

Actually, you can solve this problem just by snapping your fingers, and this will give you all the same benefits as the placebo effect! Try it - it's guaranteed to work!

Comment author: Armok_GoB 04 September 2011 12:30:55PM 0 points [-]

... Even YOU miss the point? Guess I utterly failed at explaining it, then.

IF I could solve the problem I'm stating in the first post, then this would indeed be almost true. It might be true in 99% of cases, but 0.99^infinity is still ~0. Thus that is the only probability I can consistently assign to it. I MIGHT be able to self-modify to be able to hold inconsistent beliefs, but that's doublethink, which you have explicitly, loudly and repeatedly warned against and condemned.

I'm baffled at how I seem unable to point at/communicate the concept. I even tried pointing at a specific instance of you using something very similar in MoR.

Comment author: shokwave 04 September 2011 03:37:36PM *  7 points [-]

... Even YOU miss the point? guess I utterly failed at explaining it then.

Eliezer is not "the most capable of understanding (or repairing to an understandable position) commenter on LessWrong". He is "the most capable of presenting ideas in a readable format" AND "the person with the most rational concepts" on LessWrong. Please stop assuming these qualities are good proxies for, well, EVERYTHING.

Comment author: Jack 06 September 2011 02:51:26PM 1 point [-]

"the person with the most rational concepts"

What does this mean?

Comment author: shokwave 07 September 2011 12:27:51AM 0 points [-]

Each one of his sequence posts represents a concept in rationality - so he has many more of these concepts than anyone else here on LW.

(I just noticed there's some ambiguity - it's the largest number of rationality concepts, not concepts that are the most rational. [most] [rational concepts], not [most rational] [concepts].)

Comment author: khafra 07 September 2011 04:14:11PM 2 points [-]

The probability of recovering is exactly half of what you estimate it to be due to the placebo effect/positive thinking.

It would take an artificially bad situation for this to be the case. In the real world, the placebo effect still works, even if you know it's a placebo--although with diminished efficacy.

But that's beside the point. More on-point is that intentional self-delusion, if possible, is at best a crapshoot. It's not systematic; it relies on luck, and it's prone to Martingale-type failures.

The HPMOR and placebo examples appear, to me, to share another confounding factor: The active ingredient isn't exactly belief. It's confidence, or affect, or some other mental condition closely associated with belief. If it weren't, there'd be no way Harry could monitor his level of belief that the dementors would do what he wanted them to, while simultaneously trying to increase it. Anecdotally, my own attempts at inducing placebo effects feel similar.

Comment author: handoflixue 09 September 2011 11:49:57PM 2 points [-]

I've been doing this for years, and it really does work!

(No, really, I actually have; it actually does. The placebo effect is awesome ^_^)

Comment author: Normal_Anomaly 16 September 2011 09:22:15PM 8 points [-]

Relevant and amusing (to me at least) story: A few months ago when I had a cold, I grabbed a box of zinc cough drops from my closet and started taking them to help with the throat pain. They worked as well or better than any other brand of cough drops I've tried, and tasted better too. Later I read the box, and it turned out they were homeopathic. I kept on taking them, and kept on enjoying the pain relief.

Comment author: handoflixue 09 September 2011 11:39:31PM 1 point [-]

http://www.guardian.co.uk/science/2010/dec/22/placebo-effect-patients-sham-drug It is also well worth noting that the Placebo Effect works just fine even if you know it's just a Placebo Effect. I hadn't realized it worked for others, but I've been abusing this one for a lot of my life, thanks to a neurological quirk that makes placebos especially potent for me.

Comment author: Armok_GoB 10 September 2011 10:00:34AM 1 point [-]

Yes, but you have to BELIEVE the placebos will help. In fact, the paradox ONLY appears in the case you know it's a placebo because that's when the feedback loop can happen.

Comment author: handoflixue 11 September 2011 02:23:39AM 1 point [-]

I'm not aware of any research that says a placebo won't help a "non-believer" - can you cite a study? Given the study I linked where they were deliberately handed inert pills and told that they were an inert placebo, and they still worked, I actually strongly doubt your claim.

And given the research I linked, why in the world wouldn't you believe in them? They do rationally work.

Comment author: Armok_GoB 11 September 2011 09:17:09AM 1 point [-]

A placebo will help if you think the pill you're taking will help. This may be because you think it's a non-placebo pill that would help even if you didn't know you were taking it, or because you know it's a placebo but think placebos work. If you were given a placebo pill, told it was just a candy, and given no indication it might help anything, it wouldn't do anything, because it's just sugar. Likewise if you're given a placebo, know it's a placebo, and are convinced on all levels that there is no chance of it working.

Comment author: handoflixue 12 September 2011 06:26:25PM 1 point [-]

Right. So find someone who will tell you it's a placebo, and read up on the research that says it does work. It'd be irrational to believe that they don't work, given the volume of research out there.

Comment author: Armok_GoB 12 September 2011 06:36:14PM -2 points [-]

facepalms Did you even read any other post in this thread?

Comment author: handoflixue 12 September 2011 08:27:44PM 1 point [-]

Yes, but you have to BELIEVE the placebos will help.

Quite a few of them. You're being vague enough that I can only play with the analogies you give me. You gave me the analogy of a placebo not working if you don't believe in it; I pointed out that disbelief in placebos is rather irrational.

Comment author: [deleted] 07 September 2011 02:09:26PM 3 points [-]

Speaking of Omega setting up an isomorphic situation, the Newcomb's Box problems do a good job of expressing this.


However, I also thought of a side question. Is the person who is caught in a cycle of negative thinking, like the placebo effect that you mention, engaging in confirmation bias?

I mean, if that person thinks "I am caught in a loop of updates that will inexorably lead to my certain death," and they are attempting to establish that this is true, they can't simply offer "I went from 80%/40% to 40%/20% to 20%/10%, and this will continue. I'm screwed!" as evidence of its truth, because that's like guessing "4,6,8", "6,8,10", "8,10,12" for the rule that you know "2,4,6" follows, and then saying "The rule is even numbers, right? Look at all this evidence!"

If a person has a hypothesis that their thoughts are leading them to an inexorable and depressing conclusion, then to test the hypothesis, the rational thing to do is for that person to try proving themselves wrong. By trying "10,8,6" and then getting "No, that is not the case." (Because the real rule is numbers in increasing order.)
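The 2-4-6 point above can be sketched as a tiny simulation (the hidden rule and the specific guesses are the hypothetical ones from this comment):

```python
# Wason's 2-4-6 task: the hidden rule is "numbers in increasing order",
# but confirming guesses shaped like "2,4,6" never reveal that.
def hidden_rule(triple):
    a, b, c = triple
    return a < b < c  # the real rule: strictly increasing

confirming_guesses = [(4, 6, 8), (6, 8, 10), (8, 10, 12)]
disconfirming_guess = (10, 8, 6)

# All confirming guesses pass, which "supports" the wrong hypothesis
# ("even numbers counting up by two")...
assert all(hidden_rule(g) for g in confirming_guesses)
# ...while one attempt to prove yourself wrong is far more informative:
assert not hidden_rule(disconfirming_guess)
```

The confirming guesses can never distinguish "even numbers ascending by two" from "any increasing numbers"; only the guess designed to fail can.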

I actually haven't confirmed this idea myself yet. I just thought of it now. But casting it in this light makes me feel a lot better about all the times I perform what appear at the time to be self-delusions on my brain when I'm caught in depressive thinking cycles, so I'll throw it out here and see if anyone can contradict it.

Comment author: Armok_GoB 07 September 2011 07:34:16PM 1 point [-]

Thanks for restating parts of the problem in a much clearer manner!

And yeah, that article is why this problem is wreaking such havoc on me; I was thinking of it as I wrote the OP. I'm not sure why I didn't link it.

However, I still can't resolve the paradox, although I'm finally starting to see how one might start doing so: formalizing an entire decision theory that solves the entire class of problems, and then swapping half my mindware out in a single operation. That doesn't seem like a very good solution though, so I'd rather keep looking for third options.

I don't think I understand the middle paragraph with all the examples. Probably because the way I actually think of it is not the way I used in the OP, but rather an equation where expectation must be equal to actual probability to call my belief consistent, and jumping straight there. Like so: P=E/2, E=P, thus E=0.
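Written out as a fixed-point condition (just restating the equation above, not new math):

```latex
P = \frac{E}{2}, \qquad E = P
\;\Longrightarrow\; E = \frac{E}{2}
\;\Longrightarrow\; E = 0.
```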

Hmm, I just got a vague intuition saying roughly "Hey, but wait a moment, probability is in the mind. The multiverse is timeless and in each Everett branch you either do recover or you don't! ", but I'm not sure how to proceed from there.

Comment author: Pfft 06 September 2011 10:01:05PM 1 point [-]

Comment author: lessdazed 03 September 2011 11:50:27PM *  -2 points [-]

Edit: Original version moved to karma sink to hide it away and leave it available for reference. New version:

Is what we refer to as "status" always best thought of as relative? Is a person's status like shares in a corporation or money in an economy, where the production of more diminishes what they have and does not create wealth? Is it an ability to compel others and resist compulsion? Or is it more like widgets, where if I happen to lose out from you getting more widgets, it is only because of secondary effects like your ability to out-compete me with your widgets?

I am not trying to find a really true definition of "status". To some, it seems right to answer the question "Is status all relative or is status not all relative?" with "It depends on which reasonable meaning of status you mean." Everyone (?) agrees that a valid way of discussing status is to talk about something like what portion of the total (subcategory of) status a person has.

Not everyone agrees that there is a reasonable meaning by which one might speak of non-relative status, other than the one that is shorthand for ignoring small or infinitesimal losses by others. In the same way we may say "The government printed one million dollars and gave it to an agency, no one else lost or gained anything." It's fine to say that, but only because: a) the inflation caused by printing a million dollars is minuscule, b) we can count on the listener to infer that increasing money does not increase wealth in that way.

So if one's answer is "It depends," then one thinks it is more than just linguistically valid to think about status in terms of an absolute that can be increased or decreased, but literally, logically, true. Not everyone agrees with that, and the poll is to get a general feel for how many here think each way.

So, as a hypothetical: A person in a room magically becomes awesome - say a guy has knowledge of kung fu downloaded into his brain, and he tells everyone, and they believe him. Does it make any sense at all to say that the status of others has not changed, other than in a way susceptible to a money/inflation/wealth (simple truth sheep/rock) metaphor?


Status is all relative

Status is not all relative

Comment author: lessdazed 04 September 2011 03:40:20PM *  0 points [-]

Status is not all relative

Comment author: lessdazed 04 September 2011 03:40:34PM *  1 point [-]

Status is all relative

Comment author: lessdazed 05 September 2011 02:29:23AM *  3 points [-]

Could someone please explain the response to this comment? What I'm most curious about are the responses to the attached poll replies. Multiple people have downvoted each entry in the poll without comment. This ruins the poll for the participants, as one can no longer tell how many people have voted for each option. Do not do this on polls until either LW shows more than net votes, or there is a better way to poll.

I also don't understand downvoting this comment without criticizing it and helping me fix its problems. I have discussed this topic with several LW participants and have gotten each of the two types of responses multiple times, and I think a previously undiscussed issue that gets divergent intuitions from people who theretofore believed themselves to have very similar philosophies is potentially interesting. If I am not criticized, I do not know how to improve. It is currently sitting at -2, but it has been upvoted several times as well; five or more people have downvoted without comment.

I'm not shy about posting things in discussion if I think they merit it, but I didn't think this topic did, so I posted it in the open thread. If this issue is not appropriate for an open thread, where is it appropriate?

Comment author: ArisKatsaris 06 September 2011 10:06:44PM 1 point [-]

I've not downvoted you, nor participated in the poll, but...

...your question about how relative 'status' is, reminds me of debates about whether a tree falling in the forest makes a sound. Depends how one defines the word. You don't seem to have an option in your poll for "Depends how one defines 'status' ".

...also, you seem to first pose a detailed specific scenario with a concrete question about what happens with the fires on the first and second islands -- but then the polls don't offer that specific, concrete question; they offer the vague "status is relative/not all relative" options instead. It seems you want to jumble different questions together, or to make people seem to support one thing by answering another. Or something.

In short it all seems a bit muddled. Mind you, as I said, I wasn't among the people downvoting this, so I don't know their own reasoning behind their votes.

Comment author: wedrifid 27 September 2011 04:37:00PM -1 points [-]

Ok, my 'last 30 days' karma just dropped 100 over an 8 hour period. Now I'm trying to work out exactly why I need to be reminded that I must have written some awesome comments a month ago. :P

Comment author: wedrifid 29 September 2011 03:45:38AM 0 points [-]

Ok, now it is a 200 drop in the 30 days while the absolute increases by about 100. WTF was I doing back then? I didn't write a top level post. Must have been some sort of political drama that I lucked out and got on the popular side of.

Comment author: anonym 05 September 2011 05:30:24PM *  3 points [-]

I don't recall any discussion on LW -- and couldn't find any with a quick search -- about the "Great Rationality Debate", which Stanovich summarizes as:

An important research tradition in the cognitive psychology of reasoning--called the heuristics and biases approach--has firmly established that people’s responses often deviate from the performance considered normative on many reasoning tasks. For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they display illogical framing effects, they uneconomically honor sunk costs, they allow prior knowledge to become implicated in deductive reasoning, and they display numerous other information processing biases (for summaries of the large literature, see Baron, 1998, 2000; Dawes, 1998; Evans, 1989; Evans & Over, 1996; Kahneman & Tversky, 1972, 1984, 2000; Kahneman, Slovic, & Tversky, 1982; Nickerson, 1998; Shafir & Tversky, 1995; Stanovich, 1999; Tversky, 1996).

It has been common for these empirical demonstrations of a gap between descriptive and normative models of reasoning and decision making to be taken as indications that systematic irrationalities characterize human cognition. However, over the last decade, an alternative interpretation of these findings has been championed by various evolutionary psychologists, adaptationist modelers, and ecological theorists (Anderson, 1990, 1991; Chater & Oaksford, 2000; Cosmides & Tooby, 1992; 1994b, 1996; Gigerenzer, 1996a; Oaksford & Chater, 1998, 2001; Rode, Cosmides, Hell, & Tooby, 1999; Todd & Gigerenzer, 2000). They have reinterpreted the modal response in most of the classic heuristics and biases experiments as indicating an optimal information processing adaptation on the part of the subjects. It is argued by these investigators that the research in the heuristics and biases tradition has not demonstrated human irrationality at all and that a Panglossian position (see Stanovich & West, 2000) which assumes perfect human rationality is the proper default position to take.

Stanovich, K. E., & West, R. F. (2003). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In D. E. Over (Ed.), Evolution and the psychology of thinking: The debate, Psychological Press. [Series on Current Issues in Thinking and Reasoning]

The lack of discussion seems like a curious gap given the strong support for both the schools of thought that Cosmides/Tooby/etc. represent on the one hand and Kahneman/Tversky/etc. on the other, and given that they are in radical opposition on the question of the nature of human rationality and purported deviations from it, both of which are central subjects of this site.

I don't expect to find much support here for the Tooby/Cosmides position on the issue, but I'm surprised that there doesn't seem to have been any discussion of the issue. Maybe I've missed discussions or posts though.

Comment author: Vaniver 07 September 2011 09:33:45PM 2 points [-]

Typically, the "optimal thinking" argument gets brought up here in the context of evolutionary psychology. Loss aversion makes sound reproductive sense when you're a hunter-gatherer, and performing a Bayesian update carefully doesn't help all that much. But times have changed, and humans have not changed as much.

Comment author: rehoot 06 September 2011 03:39:42AM 3 points [-]

I don't understand the basis for the Cosmides and Tooby claim. In their first study, Cosmides and Tooby (1996) solved the difficult part of a Bayesian problem so that the solution could be found by a "cut and paste" approach. The second study was about the same with some unnecessary percentages deleted (they were not needed for the cut and paste solution--yet the authors were surprised when performance improved). Study 3 = Study 2. Study 4 has the respondents literally fill in the blanks of a diagram based on the numbers written in the question. 92% of the students answered that one correctly. Studies 5 & 6 returned the percentages and the students made many errors.

Instead of showing innate, perfect reasoning, the study tells me that students at Yale have trouble with Bayesian reasoning when the question is framed in terms of percentages. The easy versions do not seem to demonstrate the type of complex reasoning that is needed to see the problem and frame it without somebody framing it for you. Perhaps Cosmides and Tooby are correct when they show that there is some evidence that people use a "calculus of probability" but their study showed that people cannot frame the problems without overwhelming amounts of help from somebody who knows the correct answer.


Cosmides, L. & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition 58, 1–73, DOI: 10.1016/0010-0277(95)00664-8

Comment author: anonym 07 September 2011 03:41:36AM *  2 points [-]

I agree. I was hoping somebody could make a coherent and plausible sounding argument for their position, which seems ridiculous to me. The paper you referenced shows that if you present an extremely simple problem of probability and ask for the answer in terms of a frequency (and not as a single event), AND you present the data in terms of frequencies, AND you also help subjects to construct concrete, visual representations of the frequencies involved by essentially spoon-feeding them the answers with leading questions, THEN most of them will get the correct answer. From this they conclude that people are good intuitive statisticians after all, and they cast doubt on the entire heuristics and biases literature because experimenters like Kahneman and Tversky don't go to equally absurd lengths to present every experimental problem in ways that would be most intuitive to our paleolithic ancestors. The implication seems to be that rationality cannot (or should not) mean anything other than what the human brain actually does, and the only valid questions and problems for testing rationality are those that would make sense to our ancestors in the EEA.

Comment author: [deleted] 14 November 2013 10:59:32PM 0 points [-]

Hello. I just signed up. I don't understand exactly the architecture of the site. Where can I post an idea, which is also a request for help, in developing a "revolutionary" computer program, for instance?

Comment author: Nisan 14 November 2013 11:07:54PM *  3 points [-]

The current Open Thread would be a good place to do that. If you want to wait a day, there will be a new weekly Open Thread and you will see it at the top of the Discussion section.

EDIT: Also, feel free to introduce yourself in the current Welcome Thread.

Comment author: [deleted] 14 November 2013 11:48:23PM 0 points [-]

What is this "new weekly Open Thread", and what will it be called?

Comment author: Nisan 15 November 2013 12:45:35AM 2 points [-]

Ah, well, if you follow the link to the Discussion section, you'll see a list of the most recent posts, with the newest posts first. Currently, about halfway down the page, you can see "Open Thread, November 8 - 14, 2013". This is a link to what I called the current Open Thread. I expect that in the next 24 hours or so, a new post called "Open Thread, November 15 - 21, 2013" or something similar will appear at the top of the list.

Comment author: [deleted] 15 November 2013 11:15:26AM 0 points [-]

OK, thanks.

Comment author: [deleted] 06 September 2011 12:20:39PM *  2 points [-]

EDIT: this comment was made when I was in a not-too-reasonable frame of mind, and I'm over it.

Is teaching, learning, studying rationality valuable?

Not as a bridge to other disciplines, or a way to meet cool people. I mean, is the subject matter itself valuable as a discipline in your opinion? Is there enough to this? Is there anything here worth proselytizing?

I'm starting to doubt that. "Here, let me show you how to think more clearly" seems like an insult to anyone's intelligence. I don't think there's any sense teaching a competent adult how to change his or her habits of thought. Can you imagine a perfectly competent person -- say, a science student -- who hasn't heard of "rationalism" in our sense of the word, finding such instruction appealing? I really can't.

Of course I'm starting to doubt the value (to myself) of thinking clearly at all.

Comment author: wedrifid 07 September 2011 08:18:06AM 1 point [-]

Not as a bridge to other disciplines, or a way to meet cool people. I mean, is the subject matter itself valuable as a discipline in your opinion?

A little bit but it varies wildly based on who you are.

Is there enough to this? Is there anything here worth proselytizing?

Not really.

Comment author: [deleted] 07 September 2011 04:11:11AM -1 points [-]

Doesn't this non-true-believer sort of mentality make you the perfect proponent?

I don't think there's any sense teaching a competent adult how to change his or her habits of thought.

Then we're all doomed. Literally.

Comment author: arundelo 07 September 2011 04:30:35AM 0 points [-]

Then we're all doomed.

You might be reading SarahC as saying that teaching a competent adult to change his or her habits of thought is not possible (if you're not, ignore this comment), but I think she's saying that it's not worthwhile.

Comment author: handoflixue 09 September 2011 10:54:49PM 1 point [-]

If it is not worthwhile for competent adults to learn something as basic as "how to change their mind" then I would have to agree with the conclusion that we are doomed.

Comment author: Jack 09 September 2011 11:06:36PM *  2 points [-]

Er, why, exactly? Most competent adults in history have not known how to change their minds. The world has improved because of those who do. It seems to me that the key variable in teaching rationality is whether the student is willing. Most people just don't care that much about the truth of their far-beliefs. But occasional people do, and those are the people you can teach. That's why everyone here is a truth fetishist.

What we need is more pro-truth propaganda so that in the next generation the pool of potential rationalists is larger.

Comment author: handoflixue 09 September 2011 11:30:55PM 2 points [-]

The emphasis here is on worthwhile: the idea that changing your mind, and knowing how to, has a tangible benefit, and one that is (generally, on average) worth the effort it takes to learn. If there's no particular benefit to changing your mind, then either (a) you have already selected the best outcome or (b) your choices are irrelevant.

If this is the best possible world, then I feel okay calling us doomed; it's a pretty lousy world.

As to irrelevancy, well, to think that I'd live the same life regardless of whether "Will you marry me?" is met with yes or no? That is not a world I want. The idea that given a set of choices, the outcome remains the same across them is just a terrifying nihilistic idea to me.

Comment author: Jack 09 September 2011 11:34:10PM *  -1 points [-]

The claim isn't that it isn't worthwhile to learn rationalism, period. The claim is that for lots of people, it isn't worthwhile.

Comment author: handoflixue 09 September 2011 11:56:49PM 2 points [-]

The claim is that, for lots of people, the net gain from changing their mind is so minimal as to not be worth the time spent studying. This implies strongly that, for lots of people, they have either (a) already made the best choice or (b) are not faced with any meaningful choices.

(a) implies that either lots of people are completely incapable of good decisions or are the Chosen Of God, their every selection Divinely Inspired from amongst the best of all possible worlds. Which goes back to this being a pretty lousy world.

(b) flies in the face of all the major decisions people normally make (marriage, buying a house, having children, etc.), and suggests that, statistically, a lot of the "important decisions" in my own life are probably meaningless unless I am the Chosen Of Bayes, specially exempt from the nihilism that blights the mundane masses.

For some people there may be the class (c) that the cost of learning rationality is much, much higher than normal. If your focus is on this group, that's a whole different conversation about why I think this is really rare :)

Comment author: Jack 10 September 2011 12:33:06AM *  2 points [-]

Just to begin with, the above is a terrible way to structure an inductive argument about something as variable as human behavior. Obviously few people are "completely incapable of good decisions or are the Chosen Of God" and no important decisions in life are "meaningless". It is, however, the case that most decisions don't matter all that much and that, when they do, people usually do a pretty good job without special training.

But the real issue that you're missing is opportunity cost. Lots of people don't know how to read or do arithmetic. Lots of people can't manage personal finances. Lots of people need more training to get a better job. Lots of people suffer from addiction. Lots of people don't have significant chunks of free time. Lots of people have children to raise. Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.

Comment author: handoflixue 11 September 2011 02:21:05AM 1 point [-]

Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.

I'm not disagreeing with this at all. But given the option of teaching someone nothing or teaching them this? I think it's a net gain for them to learn how to change their mind. And I think most people have room in their life to pretty easily be casually taught a simple skill like this, or at least the basics. I've been teaching it as part of casual conversations with my roommate just because I enjoy talking about it.

Comment author: handoflixue 09 September 2011 10:53:30PM 2 points [-]

"Here, let me show you how to think more clearly"

I was recently around some old friends who are lacking in rationality, and kept finding myself at a complete loss. I wanted to just grab them and say exactly that.

In other news, I've learned that some lessons in how to politely and subtly teach rationality would be quite welcome >.>

Comment author: [deleted] 06 September 2011 05:23:52PM 11 points [-]

Can you imagine a perfectly competent person -- say, a science student -- who hasn't heard of "rationalism" in our sense of the word, finding such instruction appealing? I really can't.

At some point I was that person. Weren't you?

Comment author: lessdazed 06 September 2011 01:44:35PM *  17 points [-]

Yesterday I spoke with my doctor about skirting around the FDA's not having approved a drug that may be approved in Europe first (it may be approved in the US first). I explained that one first-world safety organization's imprimatur is good enough for me until the FDA gives a verdict, and that harm from taking a medicine is not qualitatively different from harm from not taking a medicine.

We also discussed a clinical trial of a new drug, and I had to beat him with a stick until he abandoned "I have absolutely no idea at all if it will be better for you or not". I explained that abstractly, a 50% chance of being on a placebo and a 50% chance of being on a medicine with a 50% chance of working was better than assuredly taking a medicine with a 20% chance of working, and that he was able to give a best guess about the chances of it working.

In practice, there are other factors involved; in this case it's better to try the established medicine first and just see whether it works, as part of exploration before exploitation.
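The expected-value comparison in that exchange can be made explicit (a sketch in Python, using only the illustrative percentages quoted above, not real trial data; the placebo arm is assumed to contribute no benefit):

```python
# Option 1: join the trial -- 50% chance of placebo (assumed ~0% benefit),
# 50% chance of a new drug with a 50% chance of working.
p_trial = 0.5 * 0.0 + 0.5 * 0.5

# Option 2: assuredly take an established medicine with a 20% chance of working.
p_established = 0.20

print(p_trial, p_established)  # 0.25 0.2 -- the trial wins in expectation
```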

This is serious stuff.

Comment author: wedrifid 07 September 2011 08:15:30AM -1 points [-]

We also discussed a clinical trial of a new drug, and I had to beat him with a stick until he abandoned "I have absolutely no idea at all if it will be better for you or not". I explained that abstractly, a 50% chance of being on a placebo and a 50% chance of being on a medicine with a 50% chance of working was better than assuredly taking a medicine with a 20% chance of working, and that he was able to give a best guess about the chances of it working.

Better yet, if you aren't feeling like being altruistic you go on the trial then test the drug you are given to see if it is the active substance. If not you tell the trial folks that placebos are for pussies and go ahead and find either an alternate source of the drug or the next best thing you can get your hands on. It isn't your responsibility to be a control subject unless you choose to be!

Comment author: ArisKatsaris 07 September 2011 09:01:36AM 10 points [-]

Downvoted for encouraging people to screw over other people by backing out of their agreements... What would happen to tests if every trial patient tested their medicine to see if it's a placebo? Don't you believe there's value in having control groups in medical testing?

Comment author: wedrifid 07 September 2011 09:30:12AM *  1 point [-]

Downvoted for encouraging people to screw over other people by backing out of their agreements... What would happen to tests if every trial patient tested their medicine to see if it's a placebo? Don't you believe there's value in having control groups in medical testing?

Downvoted for actively polluting the epistemic belief pool for the purpose of a shaming attempt. I here refer especially (but not only) to the rhetorical question:

Don't you believe there's value in having control groups in medical testing?

I obviously believe there's a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.

My comment observes that sacrificing one's own (expected) health for the furthering of human knowledge is an act of altruism. Your comment actively and directly sabotages human knowledge for your own political ends. The latter I consider inexcusable and the former is both true and necessary if you wish to encourage people who are actually capable of strategic thinking on their own to be altruistic.

You don't persuade rationalists to conform to your will by telling them A is made of fire or by trying to fool them into believing A, B and C don't even exist. That's how you persuade suckers.

Comment author: ArisKatsaris 07 September 2011 09:45:42AM *  4 points [-]

I obviously believe there's a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.

Not so; there exists altruism that is worthless or even of negative value. An all-altruistic CooperateBot is what allows DefectBots to thrive. Someone can altruistically spend all his time praying to imaginary deities for the salvation of mankind, and his prayers would still be useless. To think that altruism is about value is a map-territory confusion.

My comment observes that sacrificing one's own (expected) health for the furthering of human knowledge is an act of altruism.

Your comment doesn't just say it's altruistic. It also tells him that if he doesn't feel like being an altruist, that he should tell people that "placebos are for pussies". Perhaps you were just joking when you effectively told him to insult altruists, and I didn't get it.

Either way, if he defected in this manner, not only would he be partially sabotaging the experiment he signed up for, he'd probably also be sabotaging his future chances of being accepted into any other trial. I know that if I were a doctor, I would be less likely to accept you in a medical trial.

Your comment actively and directly sabotages human knowledge for your own political ends.

Um, what? I don't understand. What deceit do you believe I committed in my above comment?

Comment author: Jack 07 September 2011 05:00:39PM 5 points [-]

Let me see if I can summarize this thread:

Wedrifid made a strategic observation that if a person cares more about their own health than the integrity of the trial, it makes sense to find out whether they are on placebo and, if they are, leave the trial and seek other solutions. He did this with somewhat characteristic colorful language.

You then voted him down for expressing values you disagree with. This is a use of downvoting that a lot of people here frown on, myself included (though I don't downvote people for explaining their reasons for downvoting, even if those reasons are bad). Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.

Of course, he wasn't actually recommending the sabotage of controlled trials-- though his first comment was sufficiently ambiguous that I wouldn't fault someone for not getting it. Luckily, he clarified this point for you in his reply. Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?

Comment author: ArisKatsaris 07 September 2011 05:57:25PM 3 points [-]

Wedrifid made a strategic observation that if a person cares more about their own health than the integrity of the trial, it makes sense to find out whether they are on placebo and, if they are, leave the trial and seek other solutions.

To me it didn't feel like an observation, it felt like a very strong recommendation, given phrases like "Better yet", "tell them placebos are for pussies", "It isn't your responsibility!", etc

Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.

Eh, not really. It seemed shortsighted -- it doesn't really give an alternate way of procuring this medicine, it has the possibility of slightly delaying the actual medicine from going on the market (e.g. if other test subjects follow the example of seeking to learn whether they're on a placebo and also abandon the testing, forcing the thing to be restarted from scratch), and if a future medicine goes on trial, what doctor will accept test subjects who are known to have defected in this way?

Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?

Primarily I fail to understand what deceit he's accusing me of when he compares my own attitude to claiming that "A is made of fire" (in context meaning effectively that I said defectors will be punished, will posthumously go to hell; that I somehow lied about the repercussions of defection).

He attacks me for committing a crime against knowledge -- when of course that was what I thought he was committing, when I thought he was seeking to encourage control subjects to find out if they're a placebo and quit the testing. Because you know -- testing = search for knowledge, sabotaging testing = crime against knowledge.

Basically I can understand how I may have misunderstood him -- but I don't understand in what way he is misunderstanding me.

Comment author: lessdazed 07 September 2011 10:08:29AM *  4 points [-]

Your comment actively and directly sabotages human knowledge for your own political ends.

OK, see, I thought this might happen. I love your first comment, much more than ArisKatsaris's, but I love it despite its having some of the problems ArisKatsaris is referring to, not because it is perfect. I only upvoted his comment so I could honestly declare that I had upvoted both of your comments, as I thought that might defuse the situation -- to say I appreciated both replies.

Don't get me wrong - I don't really mind ArisKatsaris' comment and I don't think it's as harmful as you seem to, but I upvoted it for the honesty reason.

You just committed an escalation of the same order of magnitude that he did, or more, as his statements were phrased as questions and were far less accusatory. I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.

Comment author: wedrifid 07 September 2011 10:40:36AM *  0 points [-]

I don't think it's as harmful as you seem to

A very slightly harmful instance of a phenomenon that is moderately bad when done on things that matter.

I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.

Where 'this soon' means the end. There is nothing more to say, at least in this context. (As a secondary consideration, my general policy is that conversations which begin with shaming terminate with an error condition immediately.) I do, however, now have inspiration for a post on the purely practical downsides of suppressing consideration of rational alternatives in situations similar to that discussed by the post.

EDIT: No, not post. It is an open thread comment by yourself that could have been a discussion post!

Comment author: lessdazed 07 September 2011 10:51:21AM *  1 point [-]

I'm not unsympathetic.

Compare and contrast my (September 7th, 2011) approach to yours (September 7th, 2011), I guess.

Where 'this soon' means the end.

ADBOC, it didn't have to be.

It is an open thread comment by yourself that could have been a discussion post!

It sort of soon became one.

Comment author: AlanCrowe 07 September 2011 09:47:26AM 5 points [-]

Lessdazed is describing quite a messy situation. Let me split out various subcases.

First is the situation with only one approval authority running randomised controlled trials on medicines. These trials are usually in three phases. Phase I on healthy volunteers to check for toxicity and metabolites. Phase II on sufferers to get an idea of the dose needed to affect the course of the illness. Phase III to prove that the therapeutic protocol established in Phase II actually works.

I have health problems of my own and have fancied joining a Phase III trial for early access to the latest drugs. Reading around, for example, it seems to be routine for drugs to fail in Phase III. Outcomes seem to be vaguely along the lines of: three in ten are harmful, six in ten are useless, one in ten is beneficial. So the odds that a new drug will help, given that it was the one out of ten that passed Phase III, are good, while the odds that a new drug will help, given that it is about to start on Phase III, are bad.
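That 3/6/1 split can be turned into a small Bayes calculation. Only the one-in-ten prior comes from the figures above; the trial's power and false-positive rate below are made-up numbers for illustration:

```python
# Hedged sketch: why "passed Phase III" changes the odds so much.
p_beneficial = 0.1   # one in ten drugs entering Phase III is beneficial (figure above)
p_not = 0.9
power = 0.9          # assumed: P(pass | beneficial)
false_pos = 0.05     # assumed: P(pass | not beneficial)

p_pass = p_beneficial * power + p_not * false_pos
p_beneficial_given_pass = p_beneficial * power / p_pass
print(round(p_beneficial_given_pass, 2))  # roughly 0.67 under these assumptions
```

The exact posterior depends on the assumed error rates, but the direction is robust: the prior of 0.1 is what you face when joining, and the much higher posterior is what patients enjoy once the drug is approved.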

Joining a Phase III trial is a genuinely altruistic act by which the joiner accepts bad odds for himself to help discover valuable information for the greater good.

I was confused by the idea of joining a Phase III trial and unblinding it by testing the pill to see whether one had been assigned to the treatment arm of the study or the control arm. Since the drug is more likely to be harmful than to be beneficial, making sure that you get it is playing against the odds!

Second, Lessdazed seemed to be considering the situation in which EMA has approved a drug and the FDA is blocking it in America, simply as a bureaucratic measure to defend its home turf. If it were really as simple as that, I would say that cheating to get round the bureaucratic obstacles is justified.

However the great event of my lifetime was man landing on the Moon. NASA was brilliant and later became rubbish. I attribute the change to the Russians dropping out of the space race. In the 1960's NASA couldn't afford to take bad decisions for political reasons, for fear that the Russians would take the good decision themselves and win the race. The wider moral that I have drawn is that big organisations depend on their rivals to keep them honest and functioning.

Third: split decisions with the FDA and the EMA disagreeing, followed by a treat-off to see who was right, strike me as essential. I dread the thought of a single, global medicine agency that could prohibit a drug world wide and never be shown up by approval and successful use in a different jurisdiction.

Hmm, my comment is losing focus. My main point is that joining a Phase III trial is, on average, a sacrifice for the common good.

Comment author: orthonormal 06 September 2011 12:53:47PM 13 points [-]

You're conflating two things here: whether rationality is valuable to study, and whether rationality is easy to proselytize.

My own experience is that it's been very valuable for me to study the material on Less Wrong -- I've been improving my life lately in ways I'd given up on before, I'm allocating my altruistic impulses more efficiently (even the small fraction I give to VillageReach is doing more good than all of the charity I practiced before last year), and I now have a genuine understanding (from several perspectives) of why atheism isn't the end of truth/meaning/morals. These are all incredibly valuable, IMO.

As for proselytizing 'rationality' in real life, I haven't found a great way yet, so I don't do it directly. Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.

Comment author: Swimmer963 06 September 2011 05:23:33PM 6 points [-]

Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.

This phrase jumped out in my mind as "shiny awesome suggestion!" I guess in a way it's what I've been trying to do for awhile, since I found out early, when learning how to make friends, that most people and especially most girls don't seem to like being instructed on living their life. ("Girls don't want solutions to their problems," my dad quotes from a book about the male versus the female brain, "they want empathy, and they'll get pissed off if you try to give them solutions instead.")

The main problem is that most of my social circle wouldn't find LW interesting, at least not in its current format. Including a lot of people who I thought would benefit hugely from some parts, especially Alicorn's posts on luminosity. (I know, for example, that my younger sister is absolutely fascinated by people, and loves it when I talk neuroscience with her. I would never tell her to go read a neuroscience textbook, and probably not a pop science book either. Book learning just isn't her thing.)

Comment author: AdeleneDawner 06 September 2011 06:57:59PM 1 point [-]

Depending on what you mean by 'format', you might be able to direct those people to the specific articles you think they'd benefit from, or even pick out particular snippets to talk to them about (in a 'hey, isn't this a neat thing' sense, not a 'you should learn this' sense).

Comment author: Swimmer963 06 September 2011 07:27:21PM 1 point [-]

"Pick out particular snippets" seems to work quite well. If something in the topic of conversation tags, in my mind, to something I read on LessWrong, I usually bring it up and add it to the conversation, and my friends usually find it neat. But except with a few select people (and I know exactly who they are) posting an article on their facebook wall and writing "this is really cool!" doesn't lead to the article actually being read. Or at least they don't tell me about reading it.

Comment author: AdeleneDawner 06 September 2011 07:42:51PM 2 points [-]

If facebook is like twitter in that regard, I mostly wouldn't expect you to get feedback about an article having been read - but I'd also not expect an especially high probability that the intended person actually read it, either. What I meant was more along the lines of emailing/IMing them individually with the relevant link. (Obviously this doesn't work too well if you know a whole lot of people who you think should read a particular article. I can't advise about that situation - my social circle is too small for me to run into it.)

Comment author: Swimmer963 10 September 2011 01:15:16PM 1 point [-]

Sorry for the delayed reply...

I don't know what Twitter is like, but the function on Facebook that I prefer to use (private messages) is almost like email and seems to be replacing email among much of my social circle. I will preferentially send my friends FB messages instead of emails, since I usually get a reply faster.

Writing on someone's wall is public, and might result in a slower reply because it seems less urgent. It's still directed at a particular person, though, and it would be considered rude not to reply at all. But when I post an article or link, the reply I often get is "thanks, looks neat, I'll read that later."

Comment author: orthonormal 07 September 2011 01:56:26AM 10 points [-]

I, uh, just did that, and received this reply half an hour later:

Wow, thanks for destroying my chance of getting any work done for the next 7-10 days! Some friend you are!

I think that counts as a success.

Comment author: Pavitra 05 September 2011 05:10:13PM 0 points [-]

The bitcoin market seems to be experiencing well-funded deliberate market manipulation. Someone who's good at economics should pick up some of that free money.

Comment author: Solvent 05 September 2011 05:13:00AM 0 points [-]

What's this SingInst House which I have heard about, which people go to, and it is exciting?

What's the Visiting Fellows program?

Is there some public list of people who've been on it, for verification purposes?

Comment author: Kaj_Sotala 05 September 2011 12:41:32PM *  4 points [-]
Comment author: klkblake 05 September 2011 11:18:19AM 3 points [-]

I'm confused about Kolmogorov complexity. From what I understand, it is usually expressed in terms of Universal Turing Machines, but can be expressed in any Turing-complete language, with no difference in the resulting ordering of programs. Why is this? Surely a language that had, say, natural language parsing as a primitive operation would have a very different complexity ordering than a Universal Turing Machine?

Comment author: Oscar_Cunningham 05 September 2011 11:40:09AM 1 point [-]

The Kolmogorov complexity changes by an amount bounded by a constant when you change languages, but the order of the programs is very much allowed to change. Where did you get that it wasn't?
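A toy illustration of the order changing (two hypothetical description "languages", not real Kolmogorov complexity, which is uncomputable -- but enough to show the ranking flip; the cost figures and the `+ 4` operator overhead are arbitrary choices):

```python
def cost_literal(s):
    # Language A: the only primitive is "print this literal string".
    return len(s)

def cost_with_repeat(s):
    # Language B: adds a primitive "repeat block b, n times".
    # Cost: best (block, count) factorisation, else fall back to a literal.
    best = len(s)
    for blen in range(1, len(s) // 2 + 1):
        if len(s) % blen == 0 and s == s[:blen] * (len(s) // blen):
            best = min(best, blen + 4)  # block literal plus operator/count overhead
    return best

x = "ab" * 50          # 100 characters, highly regular
y = "q8#kz!m2vw"       # 10 characters, no repeating structure

# Language A ranks y as simpler than x; Language B reverses the order.
print(cost_literal(x), cost_literal(y))          # 100 10
print(cost_with_repeat(x), cost_with_repeat(y))  # 6 10
```

The invariance theorem only promises that the two measures differ by at most a constant (roughly, the length of an interpreter for one language written in the other), which is compatible with any finite set of strings being reordered.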

Comment author: klkblake 05 September 2011 10:52:19PM 0 points [-]

I knew Kolmogorov complexity was used in Solomonoff induction, and I was under the impression that using Universal Turing Machines was an arbitrary choice.

Comment author: Oscar_Cunningham 05 September 2011 10:54:14PM 1 point [-]

Solomonoff induction is only optimal up to a constant, and the constant will change depending on the language.

Comment author: printing-spoon 09 September 2011 12:26:42AM 1 point [-]

(this is because all Turing-complete languages can simulate each other)

Comment author: wnoise 18 September 2011 06:11:34PM *  14 points [-]

European Philosophers Become Magical Anime Girls

Author Junji Hotta has blessed the world with “Tsundere, Heidegger, and Me”, a tour de force of European philosophy… in a world where all the philosophers are self-conscious anime girls. The books went on sale September 14.


Comment author: Vaniver 23 September 2011 02:01:41AM 1 point [-]

That picture of Spinoza displeases me on so many levels.

Comment author: pedanterrific 24 September 2011 02:56:18AM *  4 points [-]

"Desire is the essence of a man." - Baruch Spinoza

Comment author: pedanterrific 24 September 2011 03:07:22AM *  6 points [-]

I... that's... I don't...


I'll be in my bunk.

Comment author: NihilCredo 23 September 2011 02:47:23AM 2 points [-]

At first I thought "Oh, nice, I'll finally know what Christians felt when that horrible Manga Gospel got published", but then I clicked the link and I just couldn't help having a good laugh. It seems I can only simulate the more chill Christians.

On further reflection, I got my start on literature through multiple shelves full of comic book adaptations of the classics, so I really shouldn't feel superior. Although to be fair those were a little more faithful to the source material - except for Taras Bulba, which quite shocked me later when I got my hands on the non-bowdlerised version.

Comment author: Bugmaster 23 September 2011 02:41:50AM 2 points [-]

Please please please someone translate it into English! Or Russian, I'm not picky... I must read this manga, if only to see whether the text... disturbs... me as much as the art does.

Comment author: gwern 19 September 2011 03:15:46PM 4 points [-]

They have gone too far.

Comment author: knb 08 September 2011 06:24:06AM 14 points [-]

Paul Graham's essay "Why Nerds are Unpopular" has been mentioned a few times on LW, in a very positive way.

My initial reaction upon reading that essay a couple years ago was also very positive. However, upon rereading it, I realized it doesn't really fit with my observations or what I know from social science research at all. I want to write a top level post about why I disagree with Graham, but I'm not really sure if that would be on-topic enough for a top-level.

So I guess I'll just put this to a vote. Please upvote this if you think I should write a top-level post.

Comment author: knb 08 September 2011 06:26:55AM 1 point [-]

If you think I shouldn't post about it at all, please upvote this. Be sure to downvote below.

Comment author: lessdazed 08 September 2011 06:30:45AM 11 points [-]

Why not just do a draft in discussion? It's a top level subject, but how could we judge well at this point without knowing what the post would look like?

Comment author: knb 08 September 2011 06:24:33AM 31 points [-]

Please Upvote this if you think I should write a discussion level post.

Comment author: Kaj_Sotala 04 September 2011 09:59:09AM *  16 points [-]

I'm getting increasingly pessimistic about technology.

If we don't get an AI wiping us out or some form of unpleasant brain upload evolution, we'll get hooked by superstimuli and stuff. We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things. (And often, even calling it "optimization" is a stretch.)

Comment author: [deleted] 04 September 2011 10:22:20AM 8 points [-]

We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things.

Natural selection does not cease operation. Say, for example, that someone invents a box that fully reproduces in every respect the subjective experience of eating and of having eaten by directly stimulating the brain. Dieters would love this device. Here's a device that implements in extreme form the very danger that you fear. In this case, the specific danger is that you will stop eating and die.

So the question is, will the device wipe out the human race? Almost certainly it will not wipe out the entire human race, simply because there are enough people around who would nevertheless choose to eat despite the availability of the device, possibly because they make a conscious decision to do so. These people will be the survivors, and they will reproduce, and their children will have both their values (transmitted culturally) and their genes, and so will probably be particularly resistant to the device.

That's an extreme case. In the actual case, there are doubtless many people who are not adapting well to technological change. They will tend to die out disproportionately, and will tend to reproduce disproportionately less.

We have a model of this future in today's addictive drugs. Some people are more resistant to the lure of addictive drugs than others. Some people's lives are destroyed as they pursue the unnatural bliss of drugs, but many people manage to avoid their fate.

Many people have so far managed the trick of pursuing superstimuli without destroying their lives in the process.

Comment author: Iabalka 05 September 2011 02:14:51PM 1 point [-]

Are you suggesting to leave everything to natural selection? Doesn't strike me as the rationalists' way.

Comment author: Kaj_Sotala 04 September 2011 04:31:05PM 3 points [-]

Sure, I don't think humanity is in any danger of being destroyed by conventional technologies, and I'm pretty sure the Singularity will happen - in one form or another - way before then. But there may very well be a lot of suffering on the way.

Comment author: nerzhin 05 September 2011 04:18:55PM 3 points [-]

It is not at all clear that the people resistant to addictive drugs are reproducing at a higher rate than those who aren't.

Comment author: [deleted] 12 September 2011 10:46:43PM *  4 points [-]

What struck me about the example in this post is that it's basically genetically equivalent to reliable, easy-to-use contraception.

And now that I think about it, humanity basically is like a giant petri dish where someone dumped some antibiotics. The demographic transition is a temporary affair, a die-off of maladapted genotypes and memeplexes.

Comment author: Eugine_Nier 07 September 2011 07:46:24AM 19 points [-]

Keep in mind, it's possible to evolve to extinction.

Comment author: Thomas 05 September 2011 05:50:02PM 0 points [-]

I, on the contrary, remain a techno optimist, even more so.

It's kind of sad that so many clever people here are losing their confidence in techno-progress. Well, maybe not sad, but it certainly means that they are not onto something big themselves.

Comment author: [deleted] 04 September 2011 02:38:24PM 3 points [-]

Are there particular technologies (or uses of) that have especially earned your pessimism?

Comment author: Kaj_Sotala 04 September 2011 04:56:10PM 12 points [-]

Lots of things, but some off the top of my head:

Communication technologies probably top the list. Sure, the Internet has given birth to lots of great communities, like the one where I'm typing this comment. But it has also created a hugely polarized environment. (See the picture on page 4 of this study.) It's ever easier to follow your biases and only read the opinions of people who agree with you, and to think that anyone who disagrees is stupid or evil or both. On one hand, it's great that people can withdraw to their own subcultures where they feel comfortable, but the groupthink that this allows...

"Television is the first truly democratic culture - the first culture available to everybody and entirely governed by what the people want. The most terrifying thing is what people do want." -- Clive Barnes. That's even more true for the Internet.

Also, it's getting easier and easier to work, study and live for weeks without talking to anyone other than the grocery store clerk. I don't think that's a particularly good thing from a mental health perspective.

Comment author: Vaniver 04 September 2011 09:31:17PM 2 points [-]

Also, it's getting easier and easier to work, study and live for weeks without talking to anyone other than the grocery store clerk. I don't think that's a particularly good thing from a mental health perspective.

Talking with your mouth, or talking? Because it's not clear to me that talking online is significantly worse than talking in person at sustaining mental health. I suspect getting a girlfriend/boyfriend will do more for your mental health and social satisfaction than interacting with people face-to-face more.

Comment author: luminosity 04 September 2011 11:27:35PM 1 point [-]

I've been working from home for a year now. I don't get out and see people often, and my family lives far away, so I don't have many opportunities to see people in person. The exception is my brother, who is staying with me while he studies at university. There have been a few periods, however, when he's been away with our parents or off at a different university in a different state. I have a few friends I talk with regularly online through IM, and it helps, but the periods when my brother was away were still very difficult, and I was getting very stressed towards the end - even though we don't interact all that much on a day-to-day basis, and even though I've always tolerated (and even thrived on) solitude more than most people I know.

Maybe video chatting with people would be an adequate substitute? I haven't tried that, but my anecdote is that IM / talking online alleviates some of the stress, but goes nowhere near eliminating it.

Comment author: Kaj_Sotala 05 September 2011 12:34:04PM 4 points [-]

Personally I find that if I don't hang out with people in real life every 2-4 days I will get increasingly lethargic and incapable of getting anything done. To what degree this generalizes is another matter.

Comment author: Raemon 06 September 2011 11:08:20PM *  9 points [-]

I find the same thing as Kaj. I've started literally perceiving myself as having that set of "needs" bars in the Sims. Bladder bar gets empty, and I need to use the toilet or I'll be uncomfortable. Sleep bar gets low, and I'll be tired until I get enough. Social bar (face-to-face time) gets low, and I'll feel bleah until I get some face-to-face time.

The good news is that I've noticed this, become able to distinguish between "not enough facetime Bleah" and other types of Bleah, and then make sure to get face-to-face time when I need it.

Comment author: [deleted] 07 September 2011 03:34:28AM *  2 points [-]

It's spooky how similar I am in this regard.

The good news is

What's the bad news?

Comment author: [deleted] 06 September 2011 09:36:06PM 1 point [-]

Very much the same way. The internet has been a mixed blessing -- it allowed me to have the life I have at all, way back when, but now it's also a massive hook for akrasia and encourages sub-optimal use of free time. I'm still trying to get that under control.

Comment author: SilasBarta 06 September 2011 05:19:30PM 2 points [-]

If you mean a face-to-face bf/gf, you're not actually disagreeing with Kaj. Also, I concur with his points about social deprivation leading to lethargy, based on personal experience.

Comment author: Alex_Altair 05 September 2011 07:31:06PM 3 points [-]

I gain great confidence from the principle that rational people win, on average. It is rational people that make the world, and if it gets to be something we don't want, we change it. The only real threat is rationalists with different utility functions (e.g. Quirrelmort).

(Disclaimer: please don't take this as a promotion of an "us/them" dichotomy.)

Comment author: wedrifid 04 September 2011 10:17:43AM 3 points [-]

We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things. (And often, even calling it "optimization" is a stretch.)

You think we optimize for what we think we want? That's a stretch in itself. ;)

(Totally agree with what you are saying!)

Comment author: KPier 21 September 2011 03:51:25AM *  4 points [-]

I've been debating the validity of reductionism with a friend for a while, and today he presented me with an article (won't link it, it's a waste of your time) arguing that the consciousness-causes-collapse interpretation of QM proves that consciousness is ontologically fundamental/epiphenomenal/etc.

To which I responded: "Yeah, but consciousness-causes-collapse is wrong."

And then realized that the reasons I have rejected it are all reductionist in nature. So he pointed out, fairly, that I was begging the question. And unfortunately, I'm not sufficiently familiar with the literature on QM to point him to an explanation. Does anyone know an explanation of reasons to reject consciousness-causes-collapse that isn't explicitly predicated on reductionism?

Comment author: Vladimir_Nesov 21 September 2011 10:58:21AM *  1 point [-]

I've been debating the validity of reductionism with a friend for a while [...] Does anyone know an explanation of reasons to reject consciousness-causes-collapse that isn't explicitly predicated on reductionism?

This quite possibly can't be done. If you handicap yourself by refusing to use an idea while examining its merits, you may well draw inferior conclusions about it, and modify it in a way that makes it worse. You should use your whole mind to reflect on itself (unless you conclude some of its parts are not to be trusted). See these posts in particular:

Comment author: Kingreaper 22 September 2011 12:25:42AM 3 points [-]

You don't need to reject CCC without reductionism to defeat his argument. His argument is "If CCC is true, then reductionism is false."

That's not a reason to reject reductionism, unless you have better reason to hold to CCC than to reductionism.

Comment author: Owen 21 September 2011 04:09:10AM 3 points [-]

Perhaps point out that extremely simple systems, which no one would consider conscious, can also "cause collapse"? It doesn't take much: just entangle the superposed state with another particle; then when you measure, cancellation can't occur and you perceive a randomly collapsed wavefunction. The important thing is the entangling, not the fact that you're conscious: measuring a superposed state (i.e. entangling your mind with it) will do the trick, but consciousness itself is entirely unnecessary.

I used to believe the consciousness-causes-collapse idea, and it was quite a relief when I realized it doesn't work like that.
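The cancellation Owen describes can be sketched in a few lines of linear algebra. This is a toy model (a single qubit and a Hadamard gate, not anyone's specific experiment): applying the gate to an isolated superposition lets the branches interfere and cancel, while entangling the qubit with just one extra particle first destroys that cancellation and leaves a 50/50 "collapsed-looking" mixture.

```python
import numpy as np

# Hadamard gate: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Case 1: an isolated superposition (|0>+|1>)/sqrt(2). Applying H lets
# the two branches interfere: the |1> amplitudes cancel exactly.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
probs = np.abs(H @ psi) ** 2
print(probs)  # [1, 0] up to rounding: a deterministic outcome

# Case 2: the same qubit, but first entangled with one extra particle,
# giving the state (|00> + |11>)/sqrt(2) over the basis 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
H_on_first = np.kron(H, np.eye(2))  # apply H to the first qubit only
out = H_on_first @ bell
# Marginal probabilities for a measurement of the first qubit:
p0 = np.abs(out[0]) ** 2 + np.abs(out[1]) ** 2  # first qubit = 0
p1 = np.abs(out[2]) ** 2 + np.abs(out[3]) ** 2  # first qubit = 1
print(p0, p1)  # both ~0.5: the branches can no longer cancel
```

No consciousness appears anywhere in the second case; one unobserved extra particle is enough to make the interference vanish.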

Comment author: JoshuaZ 21 September 2011 04:11:57AM 1 point [-]

Some of the consciousness-causes-collapse people would claim that you intended to cause that entanglement. (If you are thinking this sounds like an attempt to make their claims unfalsifiable, I'd be inclined to agree.)

Comment author: Owen 21 September 2011 04:21:11AM 1 point [-]

I can intentionally do lots of things, some of which cause entanglement and "collapse", and some of which don't. I'd say to them that it still seems like the conscious intent isn't what's important.

If you'd like to substitute a better picture for the layperson, I'd go with "disturbing the system causes collapse". (Where "disturb" is really just a nontechnical way of saying "entangle with the environment.") Then it's clear that conscious observation (which involves disturbing the system somehow to get your measurement) will cause (apparent) collapse, but doesn't do so in a special depends-on-consciousness way. And if they want a precise definition of "disturb", you can get into the not-too-difficult math of superposition and entanglement.

Comment author: JoshuaZ 21 September 2011 04:28:17AM *  1 point [-]

And if they want a precise definition of "disturb", you can get into the not-too-difficult math of superposition and entanglement.

I'm a math grad student and I consider the math of entanglement and the like to be not easy. There are two types of consciousness-causes-collapse proponents. The first type, who don't know much physics, will find entanglement pretty difficult (they need to already understand complex numbers and basic linear algebra to get the structure of what is going on). Even a genuinely curious individual will likely have trouble following it unless they are mathematically inclined. The second, much smaller group are people who already understand entanglement but still buy into consciousness-causes-collapse. They seem to have developed very complicated and sometimes subtle notions of what it means for things to be conscious or to have intent (almost akin to theologians). So in either case this avenue of attack seems unlikely to be successful.

If one is more concerned with convincing bystanders (as is often more relevant on the internet: people rarely change their own minds, but people reading along might), then this could actually do a good job against the first category by making it clear that one knows a lot more about the subject than they do. This seems to work empirically in real life as well, as one can see in various discussions. See for example the cases where Deepak Chopra has tried to invoke a connection between QM and consciousness and gets shot down pretty bluntly whenever anyone with a bit of math or physics background is present.

Comment author: Owen 21 September 2011 01:49:24PM 1 point [-]

You're right; maybe I'm overestimating my ability to explain things so that laypeople will understand. But there are some concessions you can make to get the idea across without the full background of complex linear algebra - often I use polarizers as an example, because most people have some experience with them (from sunglasses or 3D movies), and from there it's only a hop, skip, and a jump to entangled photons.

I do try to explain so that people feel like the explanation is totally natural, but then I often run into the problem of people trying to reason about quantum mechanics "in English", so to speak, instead of going to the underlying math to learn more. Any suggestions?

Comment author: JoshuaZ 21 September 2011 01:55:58PM *  2 points [-]

It seems to me that it is easier to get people to realize just that they can't use their regular language to understand what is going on than to actually explain it. People seem to have issues with understanding this primarily because of Dunning-Kruger and because of the large number of popularizations of difficult science that just uses vague analogies.

I'd ask "OK, this is going to take some math. Did you ever take linear algebra?" If yes, then I just explain things. When they answer no (the vast majority of the time), I then say "OK, do you remember how matrix multiplication works?" They will generally not, or have only a vague memory. At that point I tell them that I could spend a few hours or so developing the necessary tools, but that they really don't have the background without a lot of work. This generally results in annoyance and blustering on their part. At this point one tells them the story of Oresme and how he came up with the idea of gravity in the 1300s, but since he didn't have a mathematical framework it was absolutely useless. This gets the point across sometimes.

Edit: Your idea of using polarization as an example is an interesting one and I may try that in the future.

Comment author: Owen 21 September 2011 05:30:16PM 1 point [-]

Upvoted; thanks for providing the name "Dunning-Kruger" and the Oresme example!

Comment author: Mitchell_Porter 21 September 2011 05:01:58AM 4 points [-]

From the perspective of the Copenhagen interpretation, this is like a debate about whether 'consciousness updates the prior', in which 'the prior' is treated as a physical entity which exists independently of observers and their ignorance.

In the Copenhagen interpretation - at least as originally intended! - a wavefunction is not a physical state. It is instead like a probability distribution.

From this perspective, the mystery of quantum mechanics is not, why do wavefunctions collapse? It is, why do wavefunctions work, and what is the physical reality behind them?

The reification of wavefunctions has apparently become an invisible background assumption to a lot of people. But in the Copenhagen interpretation, wavefunctions do not exist, only "observables" exist: the quantities whose behavior the wavefunction helps you to predict.

Examples of observables are: the position of an electron; the rate of change of a field; the spin of a photon. In the Copenhagen interpretation, these are what exists.

Some examples of things which are not observables and which do not exist: An electron wavefunction with a peak here and a peak there; a photon in a superposition of spin states; in fact, any superposition.

Because quantum mechanics does not offer a nonprobabilistic deeper level of description, it is very easy for people to speak and think as if the wavefunctions are the physical realities, but that is not how Copenhagen is supposed to work.

To reiterate: "consciousness collapses the wavefunction" in exactly the same sense that "consciousness updates the prior". You are free to invent subquantum physical theories in which wavefunctions are real, in an attempt to explain why quantum mechanics works, and maybe in those theories you want to have something "collapsing" wavefunctions, but you probably wouldn't want that to be "consciousness".

Comment author: Manfred 21 September 2011 11:48:45AM 1 point [-]

I wouldn't call Occam's razor an explicit part of reductionism. It's basically equivalent to saying you can't just make up information.

Comment author: JoshuaZ 22 September 2011 12:22:38AM 1 point [-]

I wouldn't call Occam's razor an explicit part of reductionism. It's basically equivalent to saying you can't just make up information.

I don't think so. This may be the case when your hypotheses are something like "A" and "A v B", but if the hypotheses you are comparing are "A" and "C ^ D ^ E", this sort of summary of Occam's razor seems insufficient.
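The point can be illustrated with made-up numbers: a conjunction pays a multiplicative prior penalty even when each conjunct is individually plausible, which is a genuine complexity cost that a bare "don't make up information" reading of the razor doesn't capture on its own. (All probabilities below are illustrative assumptions.)

```python
# Toy numbers, assuming the conjuncts are roughly independent.
p_A = 0.3                      # prior for the simple hypothesis "A"
p_C, p_D, p_E = 0.7, 0.7, 0.7  # each conjunct is individually likely
p_conj = p_C * p_D * p_E       # prior for the conjunction "C ^ D ^ E"
print(p_A, round(p_conj, 3))   # each extra conjunct shrinks the prior
```

With three 70%-likely conjuncts the conjunction's prior is already down to about 0.34; add a few more and it falls below any single modest hypothesis.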

Comment author: JoshuaZ 21 September 2011 04:04:48AM *  1 point [-]

There are a variety of different issues.

First, it assumes that consciousness exists as an ontological unit. This isn't just a problem with reductionism but is a problem with Occam's razor. What precisely one means by reductionism can be complicated and subtle with some versions more definite or plausible than others. But regardless, there's no good evidence that consciousness is an irreducible.

Second, it raises serious questions about what things were like before there were conscious entities. If no collapse occurred prior to conscious entities what does that say about the early universe and how it functioned? Note that this actually raises potentially testable claims if one can use telescopes to look back before the dawn of life. Unfortunately, I've never seen any consciousness causes collapse proponent either explain why this doesn't lead to any observable difference or make any plausible claim about what differences one would observe.

Third, it violates a general metapattern of history. As things have progressed the pattern has consistently been that minds don't interact with the laws of physics in any fundamental way and that more and more ideas about how minds might interact have been thrown out (ETA: There are a few notable exceptions such as some of the stuff involving the placebo effect.). We've spent much of the last few hundred years establishing stronger and stronger versions of this claim. Thus, as a simple matter of induction, one would expect that trend if anything to continue. (I don't know how much inducting on the pattern of discoveries is justified.)

Fourth, it is ill-defined. What constitutes a conscious mind? Presumably people are conscious. Are severely mentally challenged people conscious? Are the non-human great apes conscious? Are ravens and other corvids conscious? Are dogs or cats conscious? Are mice conscious? Etc. down to single celled organisms and viruses.

Fifth, consciousness causes collapse is a hypothesis that is easily supported by standard human biases. This raises two issues one of which is not that relevant but is worth mentioning and the other which is very relevant. The first, less relevant issue, is that this means we should probably assume that we are likely to overestimate our chance that the hypothesis is correct. This is not however an argument against the hypothesis. But there's a similar claim that is a sort of meta-argument against the hypothesis. Since this hypothesis is one which is supported by human biases one would expect a lot of motivated cognition for evidence and arguments for the hypothesis. So if there are any really good arguments one should consider it more likely that they would have been hit on. The fact that they have not suggests that there really aren't any good arguments for it.

Comment author: gwern 20 September 2011 09:40:41PM 3 points [-]

I tried writing an essay arguing that popular distaste for politicians is due largely to base rate neglect leading people to think they are worse than they are: http://www.gwern.net/Notes#politicians-are-not-unethical (I don't think it works, though.)

Comment author: byrnema 12 September 2011 03:11:36PM *  4 points [-]

I noticed a bias about purchasing organic milk this morning, that is perhaps a combination of the sunk cost fallacy, ugh fields and compartmentalization.

My mother is sending me information this morning that I should be giving my children organic milk (to avoid hormones, etc). I don't disagree with her, but I'm probably not going to start buying organic milk. This makes me feel a little sorry for my mother, that she is going to some effort to convince me I ought to take this precaution, and I'm going to nod and agree, and then finally not change my behavior.

The twinge of guilt makes me examine the 'why', and I believe the reason I won't buy organic is because my children already drink much less milk than they used to. If there was one year I should have bought organic, it should have been during their first year of drinking cow milk when they drank several bottles a day and it was a major source of their nutrition. Now they only drink a couple glasses a day, and this milk is mixed with many other food sources.

I'm sure the logic is still opaque... Even if they don't drink as much milk as they used to, the milk drinking continues over the rest of their lives and switching to organic now would make a difference. If one of the main objections is the cost of organic milk (and at first I would claim that it was) then this fact means that by switching to organic milk now, I can pay less per day to completely free them of any contaminants normal milk would expose them to. For a few extra dollars a week, my children could be rBGH-free the rest of their lives.

What is my true objection? My true objection, perhaps, is that some part of my brain is already computing what it would feel like to purchase organic milk next time in the store. I'm paying a significant amount more, so I should be feeling good about the purchase, that I am making such-and-such good choices for my family. However, I know I will only feel badly! If the marginal price of organic milk is justified now, I should have been buying it before -- when my kids were small -- and so every single time that I purchase organic milk I will feel a dissonance that I wasn't purchasing it before. Either organic milk is important or it isn't, and in deciding to ignore my mother and continue to buy regular milk, I am making a choice to behave consistently with past choices.

Some compartmentalization is at work here, because I realize all this quite consciously, and it doesn't matter. I still feel like going to the milk aisle and glibly throwing in the carton that costs $3.49 rather than $5.50 is a viable option that I choose. I can even resolve to look at the label and chant "I am buying this rather than something else that I know is better because I don't want to have to renounce past decisions", and it doesn't matter.

A factor in this locus of irrationality is that I don't feel strongly that organic milk is better, and the extra cost is a weighing factor. Thus, the desire to avoid negative feelings is operating in a landscape that is nearly even. I trust that if I deemed it was more important to go with organic milk, I would do so. On the other hand, this is a reminder that such psychological tensions can affect more important decisions, if the need to avoid negative feelings is stronger, and I should continue to be honest with myself and be aware of them.

Comment author: AdeleneDawner 12 September 2011 04:27:38PM 11 points [-]

Past-you, using the evidence that past-you had, came to a particular conclusion. Present-you, using more evidence, may come to a different conclusion. Future-you, using still more evidence, may come to yet another conclusion. This is as it should be; that's what evidence is for.

Comment author: ataftoti 08 September 2011 05:40:26AM *  2 points [-]

Has anyone been able to play Mafia using Bayesian methods? I have tried and failed, due to encountering situations that eluded my attempts to model them mathematically. But since I am not strong at math, I'm hoping others have had success?

And the related question: any mafiascum.net players here?

Edit: I mean specifically using Bayesian methods for online forum-based Mafia games. These seem to me to give the player enough time to do conscious calculations.

Comment author: katydee 13 September 2011 05:22:34AM 1 point [-]

I play online Mafia but haven't attempted to use explicit Bayesian reasoning to do so.

Comment author: Jack 12 September 2011 04:39:53AM 1 point [-]

I wonder if there are any group rationality games that don't seriously undermine group morale and cohesion. The last time I played Mafia, people ended up crying, and my relationship with my brother and cousin went through traumatic upheaval. Diplomacy is not a better option.

Comment author: katydee 13 September 2011 05:24:24AM 2 points [-]

This seems like an unusual experience to have. I have played Mafia with 3+ non-overlapping groups in person and 4+ non-overlapping groups online, and have yet to encounter any trouble; in fact, in two of the cases we were explicitly playing as a bonding exercise to improve group morale and cohesion, and it seems to have worked both times.

Comment author: ataftoti 13 September 2011 04:27:40AM *  1 point [-]

The last time I played Mafia people ended up crying

And what about the times before that?

Playing mafia has never undermined real social relationships in my experience, and I've introduced this game to perhaps 20 people in real life, with at least 2 completely non-overlapping groups.

Also, I doubt face-to-face mafia should be considered a game that especially exercises rationality. It seems to me that you get thrown a huge fuckton of cognitive biases with no time to combat them.

(again, my original question should specify "forum based mafia games"...let me edit that now...)

Comment author: Jack 13 September 2011 06:04:07AM 1 point [-]

On reflection, I think the problems came from the people in the group being too close. I have certainly had fun before. We may have also taken the game too seriously.

Comment author: Will_Sawin 13 September 2011 05:39:37AM 1 point [-]

It's more like it teaches a sort of mini-rationality: "You're swimming in cognitive biases, but your intuitions can also be helpful. Empirically develop a few techniques to separate good intuitions from bad with decent error probability."

Comment author: shokwave 12 September 2011 12:03:28PM 1 point [-]

I can report that playing Mafia at a meetup markedly improved group interaction. What impact this has on your position is unknown.

Comment author: Oscar_Cunningham 08 September 2011 09:16:33AM 1 point [-]

Trying to update even on just the well-defined data looks impossible for humans; trying to update on what other people are saying would be difficult even with a computer. Also, it seems like there might be certain disadvantages if you turn out to be Mafia.

Comment author: ataftoti 12 September 2011 04:22:04AM 1 point [-]

Allow me to specify: I am referring to online forum mafia games.

These games are slow enough that one can do some calculations, if one can find the numbers (and that seems to be the hard part, along with deciding how they should be calculated).

I've thought, and still think, that the fact that I've never heard of Bayesian methods being used in Mafia is simply an observation about the failures of players, not evidence that it inherently cannot be done with available tools.

Frankly, I'm surprised Mafia does not attract more attention from the demographic concerned with rationality. If some set of methods were developed that consistently worked and cut through the jungle of biases that is the nature of the game, that would be an achievement for the progress of rationality, would it not? I think many such methods would easily transfer to other uses as well.
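For what it's worth, the core calculation a forum player could do consciously is just an odds-ratio update on suspicion. A minimal sketch, with entirely made-up numbers (the player counts and likelihoods are illustrative assumptions, not tested estimates):

```python
# All numbers here are illustrative assumptions, not empirical estimates.
prior = 3 / 12        # e.g. 3 mafia members in a 12-player game
lr = 0.40 / 0.10      # guess: mafia show this tell 40% of the time,
                      # townspeople only 10% of the time
posterior_odds = (prior / (1 - prior)) * lr
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))  # one observation roughly doubles suspicion
```

The hard part, as the parent says, is not the arithmetic but estimating the likelihood ratios for fuzzy behavioral evidence and keeping the observations even approximately independent.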

Comment author: Bill_McGrath 07 September 2011 11:50:01PM 4 points [-]

I have been wondering recently about how to rationally approach topics that are naturally subjective. Specifically, this came up in conversation about history and historiography. Historic events are objective of course, but a lot of historical scholarship concerns itself with not just describing events, but speculating as to their causes and results. This is naturally going to be influenced by the historian's own cultural context and existing biases.

How can rationalists engage with this inherently subjective topic, and apply rationality techniques? We can try to take account of the historian's biases, but in many cases that will require us to do some historical research - it is probably not possible to get an accurate, objective account.

This applies to a certain extent to other fields I am sure, but history and historiography are perhaps the most scholarly ones I can bring to mind.

Comment author: Bill_McGrath 08 September 2011 10:09:32PM 1 point [-]

Hmm. I was a little tired and rushed when I wrote this. There are a few thoughts I'd like to add concerning historiography.

As I said above, history, because of its subjective nature, is always influenced by the historian's bias. Historiography could maybe be called the study of these biases, but is in itself subject to the same flaws.

No historian's viewpoint on a historical event will be fully objective. But just because no approach can be perfect does not mean that all approaches are equally imperfect. My question isn't so much about how to be a rational historian, but more: is there a rational way to evaluate the relative worth of different historical viewpoints?

Comment author: ahartell 07 September 2011 07:51:10PM 6 points [-]

Would it be really stupid to use Harry James Potter-Evans-Verres as the fictional character that had an impact on me for my CommonApp essay? On one hand it seems right since he introduced me to lesswrong which has certainly had a big effect but on the other hand... it's... you know... fanfiction.

Comment author: thomblake 07 September 2011 09:27:38PM 1 point [-]

In general, honesty is the best policy. If you really were influenced to great things by HJPEV, explain it well and it should go over well. If the admissions folks are going to say "This well-written and inspiring essay is about fanfiction" and thus throw it in the garbage, it could just as well have been thrown away for the room's lighting or what they had for breakfast.

Comment author: [deleted] 07 September 2011 09:49:53PM 2 points [-]

The other way to look at the situation is that the admissions folks are looking for a very specific essay. That essay requires you to identify yourself with a character from some postmodern South American novel (or possibly Elie Wiesel in "Night") and certainly has no place in it for fan fiction.

Comment author: [deleted] 08 September 2011 01:57:23AM 0 points [-]

I think if you were to choose a character from a conventionally literary work, it should be something generally well-regarded in English departments, but which is very rarely assigned reading in high school. Maybe Middlemarch?

Comment author: Kevin 09 September 2011 03:54:59AM 6 points [-]

Nope. Admissions folks are looking to be entertained.

Comment author: gwern 08 September 2011 12:09:02AM *  7 points [-]

If you really were influenced to great things by HJPEV, explain it well and it should go over well.

This is important. Deliberately choosing to write about fanfiction is a high-risk move, and so is high-status if you pull it off well! But you might just face-plant. (You don't try out unpracticed tricks in front of a girl you want to impress.)

Or to put it another way:

  1. a high-status fictional character like Hamlet treated mediocrely is a mainstream submission
  2. a low-status fictional character like Bella Swan treated mediocrely is a contrarian submission, and penalized accordingly - the intellectual equivalent of misspelling "it's/its"
  3. a high-status fictional character like Ahab treated well is a conspicuous mainstream signal
  4. a low-status fictional character like MoR!Harry treated well is a meta-contrarian submission, and thus is a conspicuous contrarian signal

All else equal, 3<4.

Comment author: Will_Newsome 10 September 2011 05:14:38PM 1 point [-]

Has anyone done a thorough social psychological game theoretic analysis of college admissions? Seems right up your alley, gwern.

Comment author: gwern 10 September 2011 06:31:03PM 5 points [-]

I only play a deep thinker online, I don't think I could write such a thing in a way that isn't merely extensive plagiarism of, say, Steve Sailer.

(That said, reading over my comment, I missed an opportunity: I should have pointed out that the reason 4>3 is that it is an expensive signal, in the sense that attempting #4 but only achieving a #2 exposes one to considerable punishment, whereas one runs no such risk with #1 and #3 - and expensive signals are, of course, the most credible signals.)

Comment author: thomblake 09 September 2011 02:06:06PM 2 points [-]

I stand by my statement.

If the essay asked about "the fictional character that had the greatest impact on you" or something to that effect, and that character is HJPEV, then that's what you should write about. Otherwise, you'd be lying, and apart from the general wrongness of lying, you're going to write better about something that's true.

Comment author: gwern 09 September 2011 02:12:50PM 1 point [-]

I stand by my statement.

I didn't disagree.

Comment author: ahartell 09 September 2011 07:37:34PM 2 points [-]

Thank you by the way. Your post convinced me to write about him and illuminated the best way to handle it.

Comment author: gwern 09 September 2011 07:52:01PM 3 points [-]

If it's not too personal, I would be curious to see the final product.

Comment author: ahartell 09 September 2011 08:40:24PM 1 point [-]

If I like how it turns out and decide to stick with it I'll message it to you. I may not start for a while though.

Comment author: Normal_Anomaly 08 September 2011 12:33:28AM *  3 points [-]

As someone currently going through this process (I just wrote the same essay about Terry Pratchett's character Tiffany Aching), the impression I get is that it's very important to be unique: if your essay is the same as 200 others, it will be penalized as much as if it is poorly written. Using a rationalist fanfiction character, if you can write it well and have the guts to write it sincerely (but not too sincerely, or you'll signal naivete), is a good idea. If you don't want to deal with a fanfiction character, write about some other rationalist. Either way, don't mention lesswrong. And please don't write about Howard Roark. I enjoyed The Fountainhead, but it's worse signaling than fanfiction. You'll look like a shallow thinker who falls for propaganda, and most universities lean to the liberal end of the spectrum.

Important note: I'm applying to highly selective colleges with student bodies that think of themselves as contrarian or meta-contrarian. If you aren't, this advice may not apply.

Comment author: shokwave 08 September 2011 12:26:42AM 9 points [-]

Also, recognising a low-status character as a low-status character is an important part of 4. Trying to pretend it's high status ("the author is an AI researcher, it is the most reviewed fanfiction ever, it's better than Rowling's Harry Potter", etc) will usually backfire.

Honestly, I'd start by baldly and confidently acknowledging that characters from fanfiction about popular books are low-status, and that you are going to do your piece on him anyway.

Comment author: [deleted] 19 September 2011 01:45:43PM 5 points [-]

You can do it. It's good countersignaling. But you have to be absurdly careful about writing quality. It's your job to convey to a skeptical audience that fanfiction can be transformative. You have to be absolutely brutal in avoiding language that signals immaturity -- or, better, find an editor who can be absolutely brutal to you.

My M.O., back in my college-essay days, was to read a New Yorker before sitting down to write. Inhale the style. Better yet, find some essays by Gene Weingarten, the modern master of long-form narrative journalism. Imagine what Gene Weingarten could do with HP:MOR. Then try to do it.

Comment author: Tripitaka 19 September 2011 02:38:09PM 0 points [-]

Comment author: imaxwell 14 September 2011 07:24:54AM 1 point [-]

Hmm... I'm not sure. I'd take the word of someone with experience on an admissions committee, if you can get it.

If you do it, I think you'd be better off talking just a little about the character and much more about the community you found. Writing to the prompt is not really important for this sort of thing. (Usually one of the prompts is pretty much "Other," confirming that.)

Comment author: Alicorn 07 September 2011 08:15:40PM 1 point [-]

What's your second choice?

Comment author: smk 07 September 2011 02:30:39PM 4 points [-]

A kind of uncomfortably funny video about turning yourself bisexual, a topic that's come up a few times here on LW. http://youtu.be/zqv-y5Ys3fg

Comment author: atucker 09 September 2011 05:23:43AM *  -1 points [-]

I don't know why I clicked on this link, but the video is pretty funny. I feel like it's a parody, mostly because everyone fits their stereotypical role so well.

...Upon reading the bottom of the page, yeah, it's a parody.

Comment author: [deleted] 06 September 2011 06:15:06AM 20 points [-]

Wondering vaguely if I'm the only person here who has attempted to sign up for cryonics coverage and been summarily rejected for a basic life insurance plan (I'm transgendered, which automatically makes it very difficult, and have a history of depression, which apparently makes it impossible to get insurance according to the broker I spoke with).

I see a lot of people make arguments (some of them suggesting a hidden true rejection) about why they don't want it, or why it would be bad. I see a lot of people here make arguments for its widespread adoption, and befuddlement at its rejection (the "Life sucks, but at least you die" post) and the difficulties this poses for spreading the message. And I see a few people argue (somewhat mendaciously in my opinion) for its exclusivity or scarcity, arguing that it's otherwise of little to no value if just anyone can get signed up.

What I don't see is a lot of people who'd like to and can't, particularly for reasons of discrimination. For me, my biggest rejection for a long time was the perception that it was just out of reach of anyone who wasn't very wealthy, and once I learned otherwise, that obstacle dissipated. Now I'm kind of back to feeling like it's that way in practice -- if you're not one of the comparatively small number of people who can pay for it out of hand, or a member of any group who's already statistically screwed by the status quo, then it may as well be out of reach for you.

I doubt the average person who has heard of, and rejected cryonics has gone through this specifically, but it certainly suggests some reasons why it might be a tough sell outside the "core communities" who're already well-represented in cryonics. Even if we want it, we can't get it, and the more widely-known that is, the more difficult PR's going to be among people who've already had their opportunities and futures scuppered by the system as it stands.

I'm not saying it's rational, but from where I stand it's very hard to blame someone for cynically dismissing the prospect out of hand, or actively opposing it. IMO, the cryonics boosters either need to acknowledge the role that stuff like this plays in people's relationship to Shiny New Ideas Proposed By Well Educated Financially-Comfortable White Guys From The Bay Area, or just concede that, barring massive systematic reforms in other sectors of society, this will not be an egalitarian technology.

Comment author: Dennis 07 September 2011 10:04:57AM 1 point [-]

If you don't mind me asking - how old are you and how much money do you typically save a year?

Comment author: [deleted] 07 September 2011 03:37:32PM 3 points [-]

Bad assumption, but I'll answer.

I am 28, long-term unemployed, and cannot get a bank account due to issues years ago, living on disability payments and now with the support of my domestic partner (which is the main reason my situation isn't actually desperate any longer). We have to keep our finances pretty separate or my income (~$7k a year, wholly inadequate to live on by myself anyplace where I could actually do so) goes away.

I keep a budget, I'm pragmatic and savvy enough to make sure our separate finances on paper don't unduly restrict us from living our lives as necessary, but I can't remember the last time I made it to the end of the month with money left over from my benefits check. Sometimes if I'm having a very good month, I'll not need to use my food stamps balance for that cycle, meaning it's there when I need extra later.

Comment author: [deleted] 07 September 2011 03:41:22PM 6 points [-]

And to stave off questions about how I could afford cryonics on this level of income: life insurance can fall within a nice little window of 50 dollars or less, which could plausibly be taken out of my leisure and clothing budgets (it doesn't consume all of them, but those are the only places in the budget with much wiggle room). Maintaining a membership with the Cryonics Institute that depends on a beneficiary payout of that insurance is something like 120 dollars a year - even I can find a way to set that aside.

Comment author: handoflixue 07 September 2011 05:59:51AM 4 points [-]

I'm both transgendered and diagnosed with depression, and I've had good luck getting insured via Rudi Hoffman. I don't recall what the name of the insurance company was, and I haven't heard the final OK since the medical examination, but I don't foresee any difficulties. I was warned they'll most likely put me down on male rates (feh) despite being legally female, but I can deal with that even if I don't like it.

Comment author: [deleted] 07 September 2011 06:01:29AM 2 points [-]

Same broker. Did you mention the depression to him explicitly?

Comment author: handoflixue 07 September 2011 07:03:48PM 2 points [-]

Yes. I'm not taking any medication for it, which might have affected it.

Comment author: [deleted] 07 September 2011 07:46:23PM 2 points [-]

That question never came up in my conversation with him, oddly. So I'm left wondering what the decisive difference is. shrug

Comment author: lsparrish 07 September 2011 05:24:15AM 7 points [-]

I hope you don't mind, I've copied your message to the New Cryonet mailing list. This is an important issue for the cryonics community to discuss. I think there needs to be a system in place for collecting donations and/or interest to pay for equal access for those who can't get life insurance. There are a couple of cases I'm aware of where the community raised enough donations to cover uninsurable individuals for CI suspensions.

Comment author: [deleted] 07 September 2011 05:31:28AM 2 points [-]

I don't mind.

While my personal case is obviously important to me (it is my life after all), it's important to me in a more general sense -- a lot of people are talking on this site about various ways to fix the world or make it better, yet they're often not members of the groups who've had to pay the costs (through exploitation, marginalization or just by being subject to some society-wide bias against them) to get it to where it is now.