Open Thread: September 2011
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.
I keep running into problems with various versions of what I internally refer to as the "placebo paradox", and can't find a solution that doesn't lead to Regret Of Rationality. Simple example follows:
You have an illness from which you'll either get better, or die. The probability of recovering is exactly half of what you estimate it to be, due to the placebo effect/positive thinking. Before learning this you have 80% confidence in your recovery. Since you estimate 80%, your actual chance is 40%, so you update to this. Since the estimate is now 40%, the actual chance is 20%, so you update to this. Then it's 10%, so you update to that, and so on, until both your estimated and actual chance of recovery are 0. Then you die.
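The update spiral above can be sketched as a toy iteration (a hypothetical illustration of the simplified rule, not a model of any real placebo effect):

```python
# Toy rule: actual chance = estimate / 2, and a rational agent
# updates the estimate to the actual chance each round.
estimate = 0.8
for _ in range(10):
    estimate = estimate / 2  # actual chance, adopted as the new estimate

print(estimate)  # -> 0.00078125, i.e. 0.8 / 2**10, heading toward 0
```

Any rule of the form "actual = c * estimate" with c < 1 collapses to 0 this way; the only consistent estimate is the fixed point at 0.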
An irrational agent, on the other hand, could upon learning this self-delude to 100% certainty of recovery, and have a 50% chance of actually recovering.
This is actually causing me real world problems, such as inability to use techniques based on positive thinking, and a lot of cognitive dissonance.
Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of Dementors.
And to show this isn't JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
For actual humans, I'd look into ways of possibly activating the placebo effect without explicit degrees of belief, such as intense visualization of the desired outcome.
Any data on whether this is actually possible, and if so, how to do it? Does it work for other things such as social confidence, positive thinking, etc.?
It certainly SEEMS like it's the declarative belief itself, not visualizations of outcomes, that causes the effects. And the fact that so many attempts at perfect deception have failed seems to indicate it's not possible to disentangle [your best rational beliefs] from what your "brain thinks" you believe.
(... I really need some better notation for talking about these kinds of things unambiguously.)
Taboo "declarative". To me, it sounds like you're talking about a verbal statement ("declared"), in which case it's pretty obviously false. AFAIK, priming effects work just fine without words.
Yeah, bad choice of words. Maybe "explicit", "direct", or "first order" would work better?
I'm skeptical as to how common it is for your beliefs to influence anything outside of your head, except through your actions. If your belief X makes Y happen because of method Z, then in order to get Y you only need to know about Z, and that it works. Then you can do Z regardless of X, because what you do mostly screens off what you think.
If you can't get yourself to do something because of a particular belief, that's another issue.
No, in humans this is not the case, unless you have a much broader definition of "action" than is useful. For example, other humans can read your intentions and beliefs from your posture and facial expression, the body reacts autonomously to beliefs with stuff like producing drugs and shunting around blood flow, and some entire classes of problems such as mental illness or subjective well being reside entirely in your brain.
Sorry about my last sentence in the previous post sounding dismissive, that was sloppy, and not representative of my views.
I guess my real issue with this is that I don't think that there's a 50% placebo, and disagree that the "declarative belief" does things directly. My anticipation of success or failure has an influence on my actions, but a 50% placebo I would imagine would work in real life based on hidden, unanticipated factors to the point that someone with accurate beliefs could say that "my anticipation contributes this much, X contributes this much, Y contributes this much, Z contributes this much, and given my x,y,z I anticipate this" and be pretty much correct.
In the least convenient possible universe, there seems to be enough hacks that rationality enables that I would reject the 50% placebo, and still net a win. I don't think we live in a universe where the majority of utility is behind 50% placebos.
Why does everyone get stuck on that highly simplified example that I just made like that so that the math would be easy to follow?
Or are you simply saying that placebos and the like are an unavoidable cost of being a rationalist and we just have to deal with it and it's not that big a cost anyway?
More the latter, with the added caveat that I think that there are fewer things falling under the category of "and the like" than you think there are.
I used to think that my social skills were being damaged by rationality, but then, through a combination of "fake it till you make it", learning a few skills, and dissolving a few false dilemmas, they're now better than they were pre-rationality.
If you want to go into more personal detail, feel free to PM.
This is an interesting idea but I'm skeptical that this would actually work. There are studies which I don't have the citations for (they are cited in Richard Wiseman's "59 Seconds") which strongly suggest that positive thinking in many forms doesn't actually work. In particular, having people visualize extreme possibilities of success (e.g. how strong they'll be after they've worked out, or how much better looking they will be when they lose weight, etc.) make people less likely to actually succeed (possibly because they spend more time simply thinking about it rather than actually doing it.). This is not strong evidence but it is suggestive evidence that visualization is not sufficient to do that much. These studies didn't look at medical issues where placebos are more relevant.
http://articles.latimes.com/2010/dec/22/health/la-he-placebo-effect-20101223
The human brain is a weird thing. Also, see the entire body of self-hypnosis literature.
Another method to try is affirmations.
An AI can presumably self-modify. For a sufficient reward from Omega, it is worth degrading the accuracy of one's beliefs, especially if the reward will immediately allow one to make up for the degradation by acquiring new information/engaging in additional processing.
(A hypothetical: Omega offers me 1000 doses of modafinil, if I will lie on one PredictionBook.com entry and say -10% what I truly believe. I take the deal and chuckle every few minutes the first night, when I register a few hundred predictions to make up for the falsified one.)
This entirely misses the point. Yes, you could self modify, but it's a self modification away from rationality and that gives rise to all sorts of trouble as has been elaborated many times in the sequences. For example: http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
I was trying to apply the principle of charity and interpret your post as anything but begging the question: 'assume rational agents are penalized. How do they do better than irrational agents explicitly favored by the rules/Omega?'
Question begging is boring, and if that's really what you were asking - 'assume rational agents lose. How do they not lose?' - then this thread is deserving only of downvotes.
And Eliezer was talking about humans, not the finer points of AI design in a hugely arbitrary setup. It may be a bad idea for LWers to choose to be biased, but a perfectly good idea for AIXI stuck in a particularly annoying computable universe.
Since I'm not an AI with direct access to my beliefs in storage on a substrate, I was using an analogy to get as close as I can.
Sorry, I was hoping that there was some kind of difference between "penalize this specific belief in this specific way" and "penalize rationality as such in general", some kind of trick to work around the problem, that I hadn't noticed and which resolved the dilemma.
And your analogy didn't work for me, is all I'm saying.
Your model assumes a constant effect in each iteration. Is this justified?
I would envisage a constant chance of recovery and an asymptotically declining estimate of recovery. It seems more realistic, but maybe it's just me?
It's a toy case, in reality the chance of recovery might be "0.2+0.3*estimate", but the same general reasoning applies and the end result is still regret of rationality.
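For what it's worth, the linear variant no longer spirals to zero, but the regret remains: a consistent estimate settles at the fixed point 0.2/0.7 ≈ 0.286, while an agent self-deluded to 100% confidence would face an actual chance of 0.5. A quick sketch of this hypothetical rule:

```python
# Hypothetical rule: actual chance = 0.2 + 0.3 * estimate.
# Repeated updating converges to the fixed point x = 0.2 / 0.7.
p = 0.8
for _ in range(100):
    p = 0.2 + 0.3 * p

deluded = 0.2 + 0.3 * 1.0  # agent self-deluded to 100% confidence

print(round(p, 6), deluded)  # -> 0.285714 0.5
```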
Actually, you can solve this problem just by snapping your fingers, and this will give you all the same benefits as the placebo effect! Try it - it's guaranteed to work!
... Even YOU miss the point? Guess I utterly failed at explaining it, then.
IF I could solve the problem I'm stating in the first post, then this would indeed be almost true. It might be true in 99% of cases, but 0.99^infinity is still ~0, so that is the only probability I can consistently assign to it. I MIGHT be able to self-modify to be able to hold inconsistent beliefs, but that's doublethink, which you have explicitly, loudly, and repeatedly warned against and condemned.
I'm baffled at how I seem unable to point at/communicate the concept. I even tried pointing at a specific instance of you using something very similar in MoR.
Eliezer is not "the most capable of understanding (or repairing to an understandable position) commentor on LessWrong". He is "the most capable of presenting ideas in a readable format" AND "the person with the most rational concepts" on LessWrong. Please stop assuming these qualities are good proxies for, well, EVERYTHING.
Agree. I wouldn't go as far as to say he was worse than average at understanding others but it certainly isn't what he is renowned for!
I thought it was all just g factor + understanding of language.
Not quite. Having the right priors about other people's likely beliefs, patience and humility are all rather important.
There are some people who I consider incredibly intelligent and who clearly understand the language that I basically expect to be replying to a straw man whenever they make a reply, all else being equal. (Not Eliezer.)
Eliezer has always come off as having plenty of those as well.
What does this mean?
Each one of his sequence posts represents a concept in rationality - so he has many more of these concepts than anyone else here on LW.
(I just noticed there's some ambiguity - it's the largest amount of rational concepts, not concepts of the highest standard of rational. [most] [rational concepts], not [most rational] [concepts].)
The supposed equivalent version in HP:MOR... (I do not wish to speak for anyone else - feel free to chime in yourselves)
That scene was a clear example - to me - of TDT being successful outside of the prisoner's dilemma scheme. In a case where apparently only ignorance would help, TDT can transcend and provide (almost) the same power.
Huh? Maybe we're thinking of different scenes.
It would take an artificially bad situation for this to be the case. In the real world, the placebo effect still works, even if you know it's a placebo--although with diminished efficacy.
But that's beside the point. More on-point is that intentional self-delusion, if possible, is at best a crapshoot. It's not systematic; it relies on luck, and it's prone to Martingale-type failures.
The HPMOR and placebo examples appear, to me, to share another confounding factor: The active ingredient isn't exactly belief. It's confidence, or affect, or some other mental condition closely associated with belief. If it weren't, there'd be no way Harry could monitor his level of belief that the dementors would do what he wanted them to, while simultaneously trying to increase it. Anecdotally, my own attempts at inducing placebo effects feel similar.
Updating on the evidence of yourself updating is almost as much of a problem as updating on the evidence of "I updated on the evidence of myself updating". Tongue-in-cheek!
That is to say, the decision theory you are currently running is not equipped to handle the class of problems where your response to a problem is evidence that changes the nature of the very problem you are responding to - in the same way that arithmetic is not equipped to handle problems requiring calculus or CDT is not equipped to handle Omega's two-box problem.
(If it helps your current situation, placebo effects are almost always static modifiers on your scientific/medical chances of recovery)
Do you have a suggestion for a better decision theory, or a suggestion on how exactly I have misinterpreted TDT to cause my current problems?
Knowing that MIGHT help, but probably not in practice. Specifically, for every given instance of the problem I'd need to know a probability to assign such that, once assigned, it is also the actual chance.
The scenario you propose does seem inevitably to cause a rational agent to lose. However, it is not realistic, and I can't think of any situations in real life that are like this-- your fate is not magically entangled with your beliefs. Though real placebo effects are still not fully understood, they don't seem to work this way: they may make you feel better, but they don't actually make you better. Merely feeling better could actually be dangerous if, say, you think your asthma is cured and decide to hike down into the Grand Canyon.
Maybe there are situations I haven't thought of where this is a problem, though. Can you give a detailed example of how this paradox obtrudes on your life? I think you might get more useful feedback that way.
MAYBE asthma is an exception (I doubt it), but generally, in humans the scenario actually IS realistic, exactly because outcomes are entangled with your beliefs in a great many powerful ways that influence you every day. It's why you can detect lies, why positive thinking and placebos work, etc.
Edit: realized this might come off as more hostile than I intended, but too lazy to come up with something better.
I was really hoping for a detailed example. As I said, the evidence, though not unequivocal, does not indicate that placebos improve outcomes in any objective way.
I think one way to avoid having to call this regret of rationality would be to see optimism as deceiving, not yourself, but your immune system. The fact that the human body acts differently depending on the person's beliefs is a problem with human biology, which should be fixed. If Omega does the same thing to an AI, Omega is harming that AI, and the AI should try to make Omega stop it.
Well, deceiving something else by means of deceiving yourself still involves doublethink. It's the same as saying humans should not try to be rational.
It's saying that it may be worth sacrificing accuracy (after first knowing the truth so you know whether to deceive yourself!) in order to deceive another agent: your immune system. It's still important to be rational in order to decide when to be irrational: all the truth still has to pass through your mind at some point in order to behave optimally.
On another note, you may benefit from reciting the Litany of Tarski:
If lying to myself can sometimes be useful, I want to believe that lying to myself can sometimes be useful.
If lying to myself cannot be useful, I want to believe that lying to myself cannot be useful.
Let me not become attached to beliefs I may not want.
I know my brain is a massively parallel neural network with only smooth fitness curves, and certainly isn't running an outdated version of Microsoft Windows, but from how it's behaving in response to this, you couldn't tell. I'm a sucky rationalist. :(
I think that humans can mentally self-modify to some extent, especially if it really really matters. If you really needed to be optimistic, you might be able to modify yourself to be such by significantly participating in certain types of organized religion. (This is a rather extreme example -- a couple minutes of brainstorming would probably yield ideas with (much?) lower cost and similar results, but it illustrates the possibility.)
Expected utility maximizers are not necessarily served by updating their map to accurately reflect the territory -- there are cases such as the above when one might make an effort to willingly make one's map reflect the territory less accurately. The reason why expected utility maximizers often do try to update their map to accurately reflect the territory is that it usually yields greater utility in comparison to alternative strategies -- having an accurate map is (I would guess) not much of a source of terminal utility for most.
ETA: Missing words. >.<
I might theoretically be able to do this, but it would involve rejecting the entirety of rationality and becoming a solipsist or something, so after recovery the thing my body would have become would not undo the modification and would instead go intentionally create UFAI as an artistic statement or something.
Ok, a slight exaggeration, but far less slight than I'm comfortable with.
Since you're likely the one who would benefit from it, hopefully you brainstormed for a few minutes before you decided that my "religion" approach was really the most effective one -- I just typed the first idea that popped in my head and seemed to work.
Huh? Not only was it just an example, but solipsism is incompatible with every religion I know of.
Anyway, I didn't brainstorm it for roughly the same reason I don't brainstorm specific ways to build a perpetuum mobile. The way my brain is set up, I can't reject rationality in any single situation like that without rejecting the entire concept of rationality, and without that my entire belief structure disintegrates into postmodern relativist solipsism. Similar but more temporary things have happened before, and the consequences are truly catastrophic.
And yeah, this obviously isn't how it's supposed to work, but I've not been able to fix it, or even figure out what would be needed to do so.
If the placebo effect actually worked exactly like that, then yes, you would die while the self-deluded person would do better. However, from personal experience, I highly suspect it doesn't (I have never had anything that I was told I'd be likely to die from, but I believe even minor illnesses give you some nonzero chance of dying). Here is how I would reason in the world you describe:
There is some probability I will get better from this illness, and some probability I will die.
The placebo effect isn't magic, it is a real part of the way the mind interacts with the body. It will also decrease my chances of dying.
I don't want to die.
Therefore I will activate the effect.
To activate the effect for maximum efficiency, I must believe that I will certainly recover.
I have activated the placebo effect. I will recover (Probability: 100%). Max placebo effect achieved!
The world I live in is weird.
In the real world, the above mental gymnastics are not necessary. Think about the things that would make you, personally, feel better during your illness. What makes you feel more comfortable, and less unhappy, when you are ill? For me, the answer is generally a tasty herbal tea, being warm (or cooled down if I'm overheated), and sleeping. If I am not feeling too horrible, I might be up to enjoying a good novel. What would make you feel most comfortable may differ. However, since both of us enjoy thinking rationally, I doubt spouting platitudes like "I have 100% chances of recovery! Yay!" is going to make you personally feel better. Get the benefits of pain reduction and possibly better immune response of the placebo effect by making yourself more physically and mentally comfortable. When I do these things, I don't think they help me get better because they have some magical ability in and of themselves. I think they will help me get better because of the positive associations I have for them. Hope that helps you in some way.
Well, yeah, obviously it's a simplified model to make the math easier, but the end result is the same. The real formula might, for example, look more like P=0.2+(expectation^2)/3 than P=expectation/2. In that case, the end result is both a real probability and expectation equal to 0.215477 (source: http://www.wolframalpha.com/input/?i=X%3D0.2%2B%28X^2%29%2F3 )
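That fixed point can also be checked numerically, by iterating the rule from the 80% starting estimate and comparing against the closed-form root of the quadratic:

```python
from math import sqrt

# Rule: P = 0.2 + P**2 / 3. Iterate from an 80% starting estimate.
p = 0.8
for _ in range(100):
    p = 0.2 + p**2 / 3

# Closed form: rearrange to P**2 - 3*P + 0.6 = 0 and take
# the root lying in [0, 1].
root = (3 - sqrt(9 - 2.4)) / 2
print(round(p, 6), round(root, 6))  # -> 0.215477 0.215477
```

The iteration converges because the derivative of the update rule, 2P/3, is well below 1 near the root.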
Also, while I used the placebo effect as a dramatic and well-known example, it crops up in a myriad of other places. I am uncomfortable revealing too much detail, but it has an extremely real and devastating effect on my daily life, which means I'm kind of desperate to resolve this, and I get pissed when people say the problem doesn't exist without showing mathematically how.
You're asking too general a question. I'll attempt to guess at your real question and answer it, but that's notoriously hard. If you want actual help you may have to ask a more concrete question so we can skip the mistaken assumptions on both sides of the conversation. If it's real and devastating and you're desperate and the general question goes nowhere, I suggest contacting someone personally or trying to find an impersonal but real example instead of the hypothetical, misleading placebo example (the placebo response doesn't track calculated probabilities, and it usually only affects subjective perception).
Is the problem you're having that you want to match your emotional anticipation of success to your calculated probability of success, but you've noticed that on some problems your calculated probability of success goes down as your emotional anticipation of success goes down?
If so, my guess is that you're inaccurately treating several outcomes as necessarily having the same emotional anticipation of success.
Here's an example: I have often seen people (who otherwise play very well) despair of winning a board game when their position becomes bad, and subsequently make moves that turn their 90% losing position into a 99% losing position. Instead of that, I will reframe my game as finding the best move in the poor circumstances I find myself. Though I have low calculated probability of overall success (10%), I can have quite high emotional anticipation of task success (>80%) and can even be right about that anticipation, retaining my 10% chance rather than throwing 9% of it away due to self-induced despair.
Sounds like we're finally getting somewhere. Maybe.
I have no way to store calculated probabilities other than as emotional anticipations. Not even the logistical nightmare of writing them down, since they are not introspectively available as numbers and I also have trouble with expressing myself linearly.
I can see how reframing could work for the particular example of game like tasks, however I can't find similar workaround for the problems I'm facing and even if I could I don't have the skill to reframe and self modify with sufficient reliability.
One thing that seems like it's relevant here is that I seem to mainly practice rationality indirectly, by changing the general heuristics, and usually don't have direct access to the data I'm operating on nor the ability to practice rationality in realtime.
... that last paragraph somehow became more of an analogy because I can't explain it well. Whatever, just don't take it too literally.
I asked a girl out today shortly after having a conversation with her. She said no and I was crushed. Within five seconds I had reframed as "Woo, I made a move! In daytime in a non-pub environment! Progress on flirting!"
My apologies if the response is flip but I suggest going from "I did the right thing, woo!" to "I made the optimal action given my knowledge, that's kinda awesome, innit?"
that's still the same class of problem: "screwed over by circumstances beyond reasonable control". Stretching it to full generality, "I made the optimal decision given my knowledge, intelligence, rationality, willpower, state of mind, and character flaws", only makes the framing WORSE because you remember how many things you suck at.
I don't think it's a paradox, it's just that the perfect is sometimes the enemy of the good. Your brain has a lot of different components. With a lot of effort, you can change the way some of them think. Some of them will always be irrational no matter what either because they are impossible to change much or because there just isn't enough time in your life to do it.
Given that some components are irretrievably irrational, you may be better off in terms of accomplishing your goals if other components -- which you might be able to change -- stay somewhat irrational.
Thing is, I can't consciously choose to be irrational. I'd first have to entirely reject a huge network of ideals that are the only thing making me even attempt to be slightly rational, ever.
Can you see what an absurdly implausible scenario you must use as a ladder to demonstrate rationality as a liability? Rather than being a strike against strict adherence to reality. The fact that we have to stretch so hard to paint it this way, further legitimizes the pursuit of rationality.
Except I happen to, as far as I can tell, be in that "implausible" scenario IRL, or at least an isomorphic one.
I mean no disrespect for your situation whatever it may be. I gave this some additional thought. You are saying that you have an illness in which the rate of recovery is increased by fifty percent due to a positive outlook and the placebo effect this mindset produces. Or that an embrace of the facts of your condition lead to an exponential decline at the rate of fifty percent. Is it depression, or some other form of mental illness? If it is, then the cause of death would likely be suicide. I am forced to speculate because you were purposefully vague.
For the sake of argument I will go with my speculative scenario. It is very common for those with bipolar disorder and clinical depression to create a negative feedback loop which worsens their situation in the way you have highlighted. But it wouldn't carry the exacting percentages of taper (indeed, no illness would carry that exact level of decline based merely on the thoughts in the patient's head). But given your claim that the illness declines exponentially, wouldn't the solution be knowledge of this reality? It seems that the delusion has come in the form of accepting that an illness can be treated with positive thinking alone. The illness is made worse by an acceptance not of rationality, but of this unsupported data, which by my understanding is irrational.
I am very skeptical of your scenario, merely because I do not know of any illnesses which carry this level of health decline due to the absence of a placebo. If you have it please tell me what it is as I would like to begin research now.
It's not depression or bipolarity, probably, but for the purposes of this discussion the difference is probably irrelevant.
I never claimed the 50% thing was ever anything other than a gross simplification to make the math easier. Obviously it's much more complicated than that with other factors, less extreme numbers, and so on, but the end result is still isomorphic to it. Maybe it's even polynomial rather than exponential, but it's still a huge problem.
atucker wrote a Discussion post about this.
Thanks! Finally something relevant!
Now for the bad news: the parts about the solution are confusing and I can't figure out how I would apply it to my situation. Could someone please translate it to math?
Speaking of Omega setting up an isomorphic situation, the Newcomb's Box problems do a good job of expressing this.
http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/
However, I also thought of a side question. Is the person who is caught in a cycle of negative thinking, like the placebo effect you mention, engaging in confirmation bias?
I mean, if that person thinks "I am caught in a loop of updates that will inexorably lead to my certain death," and they are attempting to establish that that is true, they can't simply say "I went from 80%/40% to 40%/20% to 20%/10%, and this will continue. I'm screwed!" as evidence of its truth, because that's like offering "4,6,8", "6,8,10", "8,10,12" as guesses for the rule that you know "2,4,6" follows, and then saying "The rule is even numbers, right? Look at all this evidence!"
If a person has a hypothesis that their thoughts are leading them to an inexorable and depressing conclusion, then to test the hypothesis, the rational thing to do is for that person to try proving themselves wrong. By trying "10,8,6" and then getting "No, that is not the case." (Because the real rule is numbers in increasing order.)
I actually haven't confirmed this idea myself yet. I just thought of it now. But casting it in this light makes me feel a lot better about all the times I perform what appear at the time to be self-delusions on my brain when I'm caught in depressive thinking cycles, so I'll throw it out here and see if anyone can contradict it.
Edit: Original version moved to karma sink to hide it away and leave it available for reference. New version:
Is what we refer to as "status" always best thought of as relative? Is a person's status like shares in a corporation or money in an economy, where the production of more diminishes what they have and does not create wealth? Is it an ability to compel others and resist compulsion? Or is it more like widgets, where if I happen to lose out from you getting more widgets, it is only because of secondary effects like your ability to out-compete me with your widgets?
I am not trying to find a really true definition of "status". To some, it seems right to answer the question "Is status all relative or is status not all relative?" with "It depends on which reasonable meaning of status you mean." Everyone (?) agrees that a valid way of discussing status is to talk about something like what portion of the total (subcategory of) status a person has.
Not everyone agrees that there is a reasonable meaning by which one might speak of non-relative status, other than the one that is shorthand for ignoring small or infinitesimal losses by others. In the same way we may say "The government printed one million dollars and gave it to an agency, no one else lost or gained anything." It's fine to say that, but only because: a) the inflation caused by printing a million dollars is miniscule, b) we can count on the listener to infer that increasing money does not increase wealth in that way.
So if one's answer is "It depends," then one thinks it is more than just linguistically valid to think about status in terms of an absolute that can be increased or decreased, but literally, logically, true. Not everyone agrees with that, and the poll is to get a general feel for how many here think each way.
So, as a hypothetical: A person in a room magically becomes awesome - say a guy has knowing kung fu downloaded into his brain, and he tells everyone, and they believe him. Does it make any sense at all to say that the status of others has not changed, other than in a way susceptible to a money/inflation/wealth (simple truth sheep/rock) metaphor?
Poll:
Status is all relative
Status is not all relative
My intuition is that status is meaningful relative to other people's, so this is similar to the inflation of a currency. In all the ways that status can be used to get people to do things, there isn't any more or less of it.
Whether or not the others help em depends on the temperature of the island. Like I said before, my intuition is that status is relative. If they do help em, ey gains some amount of status relative to them. If they don't, ey loses a similar amount of status.
EDIT: The following is based on a misinterpretation of lessdazed.
Assuming you mean third island: The other people help em, and ey gains a bit of status in the process. Ey now has slightly more status than the others. The reverse happens on the fourth island.
I clarified the scenarios, they weren't typos.
Status isn't the only variable in these scenarios. One can feel more or less bonded to someone independent of status, for example.
Or one person could have a firearm, or a conch shell.
Assume variables not mentioned are constant.
But they wouldn't be constant given what you describe which makes me skeptical of the intuitions provoked. The fire is probably more likely to get built cooperatively on the island where the jokes got laughs-- but that has to do with bonding and mood, not status.
Good point. What example of status changing can I use to best clarify I'm talking about just one variable?
I will try mentioning varying ways of gaining status, each with side effects, and specify that only one variable is considered. Hopefully someone can think of a single good scenario.
I can't really think of a scenario where total status could be raised or lowered-- because I think status is (obviously) always relative. Independent of coming up with intuition pumps I'd like to know if there are people who disagree with this-- it is a shame your poll was ruined.
I finally got some criticism and I tried to correct the problems. Do you think the spirit of the post was preserved?
Two grim-trigger strategies are playing the iterated prisoner's dilemma over a noisy telephone line. One mishears the other as saying "defect", and they switch from both always cooperating to both always defecting.
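A minimal sketch of this failure mode (my own toy construction; the payoffs are irrelevant, only the move histories matter). Each grim-trigger player cooperates until it believes it has ever heard a defection, and the "noisy telephone line" flips each heard move with some probability:

```python
import random

def grim_trigger_with_noise(rounds=50, noise=0.05, seed=0):
    """Two grim-trigger players on a noisy line: each round, each player
    hears the other's move, but with probability `noise` hears the
    opposite of what was actually played."""
    rng = random.Random(seed)

    def flip(move):
        return 'C' if move == 'D' else 'D'

    a_triggered = b_triggered = False  # grim trigger: defect forever once tripped
    history = []
    for _ in range(rounds):
        a_move = 'D' if a_triggered else 'C'
        b_move = 'D' if b_triggered else 'C'
        history.append((a_move, b_move))
        a_hears = flip(b_move) if rng.random() < noise else b_move
        b_hears = flip(a_move) if rng.random() < noise else a_move
        a_triggered = a_triggered or a_hears == 'D'
        b_triggered = b_triggered or b_hears == 'D'
    return history

# On a perfect line the players cooperate forever; a single mishearing of
# "cooperate" as "defect" locks both into mutual defection for good.
```

Since being triggered is irreversible, even one mishearing ratchets the pair into permanent mutual defection, which is the point of the analogy.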
I'm missing the connection.
I define status to be "your ability to be treated favorably, all else being equal." I regard bonding as a form of status - members of the in-group have more status than the out-group. A group of three strangers on an island has collectively less status than the same group after they've bonded. Once they've bonded, they are all willing to do each other favors and treat each other nicely in ways that they weren't willing to before. In my mind, this is the entire point of status.
You can define status as "how much more ability to be treated favorably you have compared to other people," but I don't think that's a useful definition. The word "status" has gained popularity largely because it flexibly describes a wide array of social interactions.
Status SYMBOLS are often zero-sum (buying a big TV makes people want to come over to your house more often to watch football games, and this only works if your TV is bigger than other people's). But those are only one form of status-gain.
(I spoke to less dazed in real life about this. Our conversation was the impetus for this thread)
Beware 'defining' things too early.
I think that after multiple years of discussing the word status on this site, we had BETTER start actually defining it. If there are disagreements as to the definition, we need to get them out into the open, so that at the very least we can start mentally translating "Raemon-Status" and "Jack-Status".
Let's say that humans have special circuits for figuring out whether a person is more like the band leader or more like the band outcast. Human minds use these circuits to change their behavior towards that person. It seems plausible that those circuits can be 'gamed': say people get into the habit of speaking badly about people who don't exist; then perhaps everyone actually existing will seem high status.
Status is not all relative
Status is all relative
Could someone please explain the response to this comment? What I'm most curious about are the responses to the attached poll replies. Multiple people have downvoted each entry in the poll without comment. This ruins the poll for the participants, as one can no longer tell how many people have voted for each option. Do not do this on polls until either LW shows more than net votes, or there is a better way to poll.
I also don't understand downvoting this comment without criticizing it and helping me fix its problems. I have discussed this topic with several LW participants and have gotten each of the two types of responses multiple times, and I think a previously undiscussed issue that gets divergent intuitions from people who theretofore have believed themselves to have very similar philosophies is potentially interesting. If I am not criticized, I do not know how to improve. It is currently sitting at -2, but it has been upvoted several times as well, so five or more people have downvoted without comment.
I'm not shy about posting things in discussion if I think they merit it, but I didn't think this topic did, so I posted it in the open thread. If this issue is not appropriate for an open thread, where would it be appropriate?
One user upvoted "Status is not all relative", two users upvoted "Status is all relative", those three users downvoted the karma sink, and three other users downvoted all three comments.
Thank you very much!
I've not downvoted you, nor participated in the poll, but...
...your question about how relative 'status' is, reminds me of debates about whether a tree falling in the forest makes a sound. Depends how one defines the word. You don't seem to have an option in your poll for "Depends how one defines 'status' ".
...also, you seem to first pose a detailed specific scenario with a concrete question about what happens with the fires on the first and second islands -- but then the polls don't offer that specific, concrete question; they offer the vague "status is relative/not all relative" questions instead. It seems as if you want to jumble different questions together, or make people seem to support one thing by answering another. Or something.
In short it all seems a bit muddled. Mind you, as I said, I wasn't among the people downvoting this, so I don't know their own reasoning behind their votes.
Thank you for your feedback!
I am not used to making up intuition pumps. I will try to become better at writing them.
This is a legitimate response, and I certainly didn't intend to debate or try and discover the true meaning of a word. However, it consists of the claim that for somewhat reasonable definitions of "status", "status is all relative" is true, and for others, "status is not all relative" is true. I consider that equivalent to "status is not all relative" - something I will make clear. By "status is all relative" I mean something like: "for no reasonable (to me, though this is something I expect others can guess at with good accuracy) definition of status is status anything but relative".
Part of the difficulty of expressing this is why I resorted to examples, and I do take to heart that difficulty expressing an idea is often a sign it isn't coherent.
I edited the post to try again.
I'm getting increasingly pessimistic about technology.
If we don't get an AI wiping us out or some form of unpleasant brain upload evolution, we'll get hooked by superstimuli and stuff. We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things. (And often, even calling it "optimization" is a stretch.)
You think we optimize for what we think we want? That's a stretch in itself. ;)
(Totally agree with what you are saying!)
Natural selection does not cease operation. Say, for example, that someone invents a box that fully reproduces in every respect the subjective experience of eating and of having eaten by directly stimulating the brain. Dieters would love this device. Here's a device that implements in extreme form the very danger that you fear. In this case, the specific danger is that you will stop eating and die.
So the question is, will the device wipe out the human race? Almost certainly it will not wipe out the entire human race, simply because there are enough people around who would nevertheless choose to eat despite the availability of the device, possibly because they make a conscious decision to do so. These people will be the survivors, and they will reproduce, and their children will have both their values (transmitted culturally) and their genes, and so will probably be particularly resistant to the device.
That's an extreme case. In the actual case, there are doubtless many people who are not adapting well to technological change. They will tend to die out disproportionately, and to reproduce disproportionately less.
We have a model of this future in today's addictive drugs. Some people are more resistant to the lure of addictive drugs than others. Some people's lives are destroyed as they pursue the unnatural bliss of drugs, but many people manage to avoid their fate.
Many people have so far managed the trick of pursuing super stimuli without destroying their lives in the process.
Sure, I don't think humanity is in any danger of being destroyed by conventional technologies, and I'm pretty sure the Singularity will happen - in one form or another - way before then. But there may very well be a lot of suffering on the way.
Are you suggesting to leave everything to natural selection? Doesn't strike me as the rationalists' way.
It is not at all clear that the people resistant to addictive drugs are reproducing at a higher rate than those who aren't.
Keep in mind, it's possible to evolve to extinction.
I wish I could upvote that more than once.
The post or the comment? If the former then you just prompted me to vote it up for you. :)
Are there particular technologies (or uses of) that have especially earned your pessimism?
Lots of things, but some off the top of my head:
Communication technologies probably top the list. Sure, the Internet has given birth to lots of great communities, like the one where I'm typing this comment. But it has also created a hugely polarized environment. (See the picture on page 4 of this study.) It's ever easier to follow your biases and only read the opinions of people who agree with you, and to think that anyone who disagrees is stupid or evil or both. On one hand, it's great that people can withdraw to their own subcultures where they feel comfortable, but the groupthink that this allows...
"Television is the first truly democratic culture - the first culture available to everybody and entirely governed by what the people want. The most terrifying thing is what people do want." -- Clive Barnes. That's even more true for the Internet.
Also, it's getting easier and easier to work, study and live for weeks without talking to anyone else than the grocery store clerk. I don't think that's a particularly good thing from a mental health perspective.
Talking face-to-face, or talking in general? Because it's not clear to me that talking online is significantly worse than talking in person at sustaining mental health. I suspect getting a girlfriend/boyfriend will do more for your mental health and social satisfaction than interacting with people face-to-face more.
I've been working from home for a year now. I don't get out and see people often, and my family live far away, so I don't have many opportunities to see people in person. The exception is that my brother is staying with me while he studies at University. There have been a few periods, however, where he's been away with our parents, or off at a different university in a different state. I have a few friends I talk with regularly online through IM, and it helps, but the periods when my brother was away were still very difficult; I was getting very stressed towards the end, even though we don't interact all that much on a day-to-day basis, and even though I've always been much more tolerant of loneliness - even thriving on it - than most people I know.
Maybe video chatting with people would be an adequate substitute? I haven't tried that, but my anecdote is that IM / talking online alleviates some of the stress, but goes nowhere near to mitigating it.
Personally I find that if I don't hang out with people in real life every 2-4 days I will get increasingly lethargic and incapable of getting anything done. To what degree this generalizes is another matter.
Very much the same way. The internet has been a mixed blessing -- it allowed me to have the life I have at all, way back when, but now it's also a massive hook for akrasia and encourages sub-optimal use of free time. I'm still trying to get that under control.
I find the same thing as Kaj. I've started literally perceiving myself as having that set of "needs" bars in the Sims. Bladder bar gets empty, and I need to use the toilet or I'll be uncomfortable. Sleep bar gets low, and I'll be tired until I get enough. Social bar (face to face time) gets low, and I'll feel bleah until I get some face to face time.
The good news is that I've noticed this, become able to distinguish between "not enough facetime Bleah" and other types of Bleah, and then make sure to get face-to-face time when I need it.
It's spooky how similar I am in this regard.
What's the bad news?
That up until recently the internet (and a wide array of other neural-reward-generating things) made it very easy to NOT notice this and distinguish between various types of mental lethargy.
If you mean a face-to-face bf/gf, you're not actually disagreeing with Kaj. Also, I concur with his points about social deprivation leading to lethargy, based on personal experience.
I gain great confidence from the principle that rational people win, on average. It is rational people that make the world, and if it gets to be something we don't want, we change it. The only real threat is rationalists with different utility functions (e.g. Quirrelmort).
(Disclaimer: please don't take this as a promotion of an "us/them" dichotomy.)
I, on the contrary, remain a techno optimist, even more so.
It's kind of sad that so many clever people here are losing their confidence in techno progress. Well, maybe not sad, but it certainly means that they are not onto something big themselves.
What's this SingInst House I've heard about, which people go to and find exciting?
What's the Visiting Fellows program?
Is there some public list of people who've been on it, for verification purposes?
http://singinst.org/aboutus/visitingfellows
http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/
http://lesswrong.com/lw/29c/be_a_visiting_fellow_at_the_singularity_institute/
http://lesswrong.com/lw/3fk/where_in_the_world_is_the_siai_house/
While the SIAI page says that "We are currently accepting applications for new Visiting Fellows", I'm under the impression that the program's no longer running.
I'm confused about Kolmogorov complexity. From what I understand, it is usually expressed in terms of Universal Turing Machines, but can be expressed in any Turing-complete language, with no difference in the resulting ordering of programs. Why is this? Surely a language that had, say, natural language parsing as a primitive operation would have a very different complexity ordering than a Universal Turing Machine?
The Kolmogorov complexity changes by an amount bounded by a constant when you change languages, but the order of the programs is very much allowed to change. Where did you get that it wasn't?
I knew Kolmogorov complexity was used in Solomonoff induction, and I was under the impression that using Universal Turing Machines was an arbitrary choice.
Solomonoff induction is only optimal up to a constant, and the constant will change depending on the language.
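Kolmogorov complexity itself is uncomputable, so none of this can be demonstrated directly, but a toy sketch (my own construction, using two off-the-shelf compressors as crude stand-ins for two description languages) illustrates the point under discussion: two "languages" can rank the same strings differently, even when their description lengths track each other closely.

```python
import bz2
import zlib

# Neither compressor computes true Kolmogorov complexity (which is
# uncomputable); they are crude stand-ins for two description languages.
strings = [
    b'a' * 1000,                   # highly regular
    b'the quick brown fox ' * 50,  # regular, larger alphabet
    bytes(range(256)) * 4,         # structured but byte-diverse
]

zlib_lengths = [len(zlib.compress(s)) for s in strings]
bz2_lengths = [len(bz2.compress(s)) for s in strings]

# Each "language" induces its own ordering of the strings by description
# length.  The invariance theorem only bounds |K_U(x) - K_V(x)| by a
# constant independent of x; it does not promise the orderings agree.
zlib_order = sorted(range(len(strings)), key=zlib_lengths.__getitem__)
bz2_order = sorted(range(len(strings)), key=bz2_lengths.__getitem__)
```

The two orderings may or may not coincide for these particular strings; the theorem simply gives no guarantee either way, which is why the choice of universal machine only matters "up to a constant."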
The bitcoin market seems to be experiencing well-funded deliberate market manipulation. Someone who's good at economics should pick up some of that free money.
I don't recall any discussion on LW -- and couldn't find any with a quick search -- about the "Great Rationality Debate", which Stanovich summarizes as:
Stanovich, K. E., & West, R. F. (2003). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In D. E. Over (Ed.), Evolution and the psychology of thinking: The debate, Psychological Press. [Series on Current Issues in Thinking and Reasoning]
The lack of discussion seems like a curious gap, given the strong support for both schools of thought - Cosmides/Tooby/etc. on the one hand and Kahneman/Tversky/etc. on the other - and the fact that they are in radical opposition on the nature of human rationality and purported deviations from it, both of which are central subjects of this site.
I don't expect to find much support here for the Tooby/Cosmides position on the issue, but I'm surprised that there doesn't seem to have been any discussion of the issue. Maybe I've missed discussions or posts though.
I don't understand the basis for the Cosmides and Tooby claim. In their first study, Cosmides and Tooby (1996) solved the difficult part of a Bayesian problem so that the solution could be found by a "cut and paste" approach. The second study was about the same, with some unnecessary percentages deleted (they were not needed for the cut-and-paste solution--yet the authors were surprised when performance improved). Study 3 = Study 2. Study 4 has the respondents literally fill in the blanks of a diagram based on the numbers written in the question; 92% of the students answered that one correctly. Studies 5 & 6 brought the percentages back, and the students made many errors.
Instead of showing innate, perfect reasoning, the study tells me that students at Yale have trouble with Bayesian reasoning when the question is framed in terms of percentages. The easy versions do not seem to demonstrate the type of complex reasoning that is needed to see the problem and frame it without somebody framing it for you. Perhaps Cosmides and Tooby are correct when they show that there is some evidence that people use a "calculus of probability" but their study showed that people cannot frame the problems without overwhelming amounts of help from somebody who knows the correct answer.
Reference
Cosmides, L. & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition 58, 1–73, DOI: 10.1016/0010-0277(95)00664-8
I agree. I was hoping somebody could make a coherent and plausible sounding argument for their position, which seems ridiculous to me. The paper you referenced shows that if you present an extremely simple problem of probability and ask for the answer in terms of a frequency (and not as a single event), AND you present the data in terms of frequencies, AND you also help subjects to construct concrete, visual representations of the frequencies involved by essentially spoon-feeding them the answers with leading questions, THEN most of them will get the correct answer. From this they conclude that people are good intuitive statisticians after all, and they cast doubt on the entire heuristics and biases literature because experimenters like Kahneman and Tversky don't go to equally absurd lengths to present every experimental problem in ways that would be most intuitive to our paleolithic ancestors. The implication seems to be that rationality cannot (or should not) mean anything other than what the human brain actually does, and the only valid questions and problems for testing rationality are those that would make sense to our ancestors in the EEA.
Wondering vaguely if I'm the only person here who has attempted to sign up for cryonics coverage and been summarily rejected for a basic life insurance plan (I'm transgendered, which automatically makes it very difficult, and have a history of depression, which apparently makes it impossible to get insurance according to the broker I spoke with).
I see a lot of people make arguments (some of them suggesting a hidden true rejection) about why they don't want it, or why it would be bad. I see a lot of people here make arguments for its widespread adoption, and befuddlement at its rejection (the "Life sucks, but at least you die" post) and the difficulties this poses for spreading the message. And I see a few people argue (somewhat mendaciously in my opinion) for its exclusivity or scarcity, arguing that it's otherwise of little to no value if just anyone can get signed up.
What I don't see is a lot of people who'd like to and can't, particularly for reasons of discrimination. For me, my biggest rejection for a long time was the perception that it was just out of reach of anyone who wasn't very wealthy, and once I learned otherwise, that obstacle dissipated. Now I'm kind of back to feeling like it's that way in practice -- if you're not one of the comparatively small number of people who can pay for it out of hand, or a member of any group who's already statistically screwed by the status quo, then it may as well be out of reach for you.
I doubt the average person who has heard of, and rejected cryonics has gone through this specifically, but it certainly suggests some reasons why it might be a tough sell outside the "core communities" who're already well-represented in cryonics. Even if we want it, we can't get it, and the more widely-known that is, the more difficult PR's going to be among people who've already had their opportunities and futures scuppered by the system as it stands.
I'm not saying it's rational, but from where I stand it's very hard to blame someone for cynically dismissing the prospect out of hand, or actively opposing it. IMO, the cryonics boosters either need to acknowledge the role that stuff like this plays in people's relationship to Shiny New Ideas Proposed By Well Educated Financially-Comfortable White Guys From The Bay Area, or just concede that, barring massive systematic reforms in other sectors of society, this will not be an egalitarian technology.
I think they don't have any deep understanding of it at all -- the statistics tell the story the insurance adjusters need to decide on an investment (well, sort of -- there actually is no really good data about our long-term health outcomes apart from our rates of violent murder, and it's hard to tell what would even constitute a reasonable null hypothesis to default to when so many complicated variables are churned up by the medical procedures we often seek), but that decision and those statistics are not truly value-neutral.
Show me a trans person who hasn't dealt with depression. I'm sure they exist, but it does not appear to be common. Depression is such a common symptom for us because we're a mostly-despised minority in the wider world, and just being coerced into our birth-assigned gender roles is often painful and stressful for us (and it only gets worse as we grow up).
Transgendered people in the US face one-in-eight to one-in-twelve murder rates depending on race and geographic location [edit: this claim is unsourced and should be considered retracted; investigation recorded further downthread attempts to pin down the rate more precisely-Jandila]; we're also something like four times more likely than the national average to be unemployed. From an actuarial perspective, this is clear-cut: bad investment prospect, and that is the purpose of insurance after all.
It's not neutral to the person affected by it though, because those conditions stem from discrimination against trans people -- we aren't murdered at such high rates because of some evopsychological predisposition in cis people to murder us, or because we're inherently less capable of fitting into society and/or being value-creating agents in some hypothetical free market. We aren't unemployed at such vastly high rates because we tend not to have skills or education as a population -- and for many of us that don't, it's not because we couldn't cut it in school or the work we were doing before we transitioned.
But instead of, say, considering me on the basis of my actual health (which according to my practicing physician is excellent), it's a look at the tables. Context is irrelevant in the decision.
Because I'm trans and have a medical history of depression, I am rendered unable to acquire the otherwise-affordable means of obtaining at least some chance of ensuring my future existence, past the limits of my body as it stands.
It may be legal, it may be justifiable with recourse to a profit motive, it may not be willfully directed at my person in order to cause me ill -- but it is discrimination. Our heightened rates of murder and unemployment aren't typically personally-directed either (we're targeted for being what we are, not who we are).
It's also still legal to fire me from a job in most jurisdictions for being transgendered, without even having to hide the fact. Does that tacit authorization in any way cast doubt on whether or not such behavior is discriminatory?
I want to live as much as anybody does. I even want to live an arbitrarily long time, and see the world grow into a better place, as much as any other cryonics booster on this site. I don't take comfort in beliefs of a spiritual afterlife when faced with the seeming inevitability of death, I don't consider the fact that dying would hardly make me unique or rarely-disadvantaged among humanity to be any negative influence on seeking to avoid it by whatever plausible means. I don't think immortality will inherently lead to stagnation or regression in society.
And I don't get the choice. There is a choice available, but not to me, because the only available means (like most people in the world, I am not arbitrarily able to afford setting aside 30k or so) is denied out of hand, no further fact-finding necessary. That shiny future we cryo-types are hoping to see, but that will likely take longer than our natural lifespans to reach? Is closed off to me.
There's a whole lot of people like me in the world, who for whatever reason don't have the financial and social access that makes one rationally able to choose cryo in the first place. I daresay most of them would also reject cryonics because they don't have a rationalist's understanding of death, its implications and what they could do about it -- but rationality training will only solve one of those problems.
I've seen this claim before, but I've never seen it attached to a reliable source. Do you have a citation for it? The HRC estimates that there are about 15-30 murders of transgendered people each year. Suppose we underestimate the percentage of the American population that is trans and use a lower-bound estimate of 1 in every 3000 people being transgendered (here I'm using the cited Conway study, which gives a lower bound of 1 in 2500, and underestimating a bit more both to make the math easier and to make very sure we're not overcounting; note that Conway's upper bound is 1 in 500). With a US population of around three hundred million, that gives a total of about 100,000 trans people in the US. Now assume all those trans murders are evenly distributed (which seems really unlikely), and that each person has around 60 years of time in which to get murdered, with a 30/100,000 chance each year (60 comes from assuming that they know they are transgendered around age 12 and then have 60 years of time to get murdered). That gives a lifetime chance of 1-(1-(30/100,000))^60 of getting murdered, which is around 1.8%. That's really high, but nowhere near 1 in 12 (8.3%), which is more than four times that. And this is with 1/12 as the claimed lower bound, a generously large yearly murder count, and a generously small transgender population -- we are still off by a factor of more than four.
Note that if one uses, for example, a population estimate based on the middle of Conway's range (1 in 1000 being transgendered), then one gets a result of around 0.6%, which is about 50% more likely than for the entire US population but is even farther from the claimed numbers.
Edit: Ok. The HRC also, on the same webpage but with minimal arithmetic, claims that 1 in every 1000 murder victims might be a transgendered person. If we use this estimate and assume that there are then around 140 transgendered murders yearly, and use the reasonable estimate of 1 in every 1000 people being transgendered (so a total population of around 300,000), then one gets (1-(1-140/300,000)^60), which is around a 3% chance.
Edit: If you use the most generous estimate for the murder total (140), and the smallest population estimate for the transgendered population then you can get 8% which is a little under 1/12. Here I'm using my underestimate of Conway's estimate. If one uses Conway's actual lower bound one gets around 7%. I don't think I need to discuss in detail why this estimate is unlikely to be accurate. It seems clear from these estimates that the murder rate of transgendered individuals is much higher than that of the general population (especially when considered as a relative rate), but it is not likely to be anywhere near 1/12th.
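All of these back-of-the-envelope estimates reduce to one formula. A quick sketch to make them reproducible (numbers as in the comment; the simplifying assumption throughout is a constant annual hazard over 60 years of exposure):

```python
def lifetime_risk(murders_per_year, population, years=60):
    """Chance of being murdered at least once, with a constant annual hazard."""
    return 1 - (1 - murders_per_year / population) ** years

# HRC's upper murder estimate against the deliberately low population count
print(lifetime_risk(30, 100_000))   # about 0.018
# mid-range population estimate (1 in 1000 of roughly 300 million)
print(lifetime_risk(30, 300_000))   # about 0.006
# HRC's "1 in 1000 murder victims" figure, roughly 140 murders per year
print(lifetime_risk(140, 300_000))  # about 0.028
# most generous murder count with the smallest population estimate
print(lifetime_risk(140, 100_000))  # about 0.081, a little under 1/12
```

The spread across these four lines is exactly the point of the comment: the answer is very sensitive to the murder-count and population estimates, but even the most generous combination stays below 1/12.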
You know, I can't find a good source for it now, and it appears to be an apocryphal claim. Wouldn't be the first time I've picked up an oft-quoted but exaggerated statistic about this issue. I'm a bit of a newb, but I'll try to strikethrough that claim. ETA: The Help guide doesn't list that particular markup. Someone throw me a bone?
A look at Carsten Balzer's 2009 study claims that a recent attempt to monitor the rate of reported murders worldwide (their criteria were basically "can be accessed by a newspaper website or some other online source during a google search, after filtering for duplicates") gave a rate of about one reported murder every three days. Source is here:
http://www.liminalis.de/2009_03/TMM/tmm-englisch/Liminalis-2009-TMM-report2008-2009-en.pdf
As far as I'm aware, strikethrough is not available through markdown as it is implemented on this site; to get the strikethrough effect you have to retract your entire post.
Thank you for the clarification.
I think the current norm on LessWrong is putting "edit to add: I no longer believe this claim to be true" in parentheses after the claim. I think your idea of strikethrough is really good, though.
The deleted comment was mine. It was deleted before anyone responded or up/down voted it.
I feared that I had completely misunderstood what Jandila had said and didn't think anyone would miss it. Now that I see that I didn't misunderstand the original comment, I regret having deleted it. Is there any way to recover a deleted comment?
No.
No, but you could just reply to Jandila's current comment with a comment explaining what you had meant in the deleted comment.
I hope you don't mind, I've copied your message to the New Cryonet mailing list. This is an important issue for the cryonics community to discuss. I think there needs to be a system in place for collecting donations and/or interest to pay for equal access for those who can't get life insurance. There are a couple of cases I'm aware of where the community raised enough donations to cover uninsurable individuals for CI suspensions.
I don't mind.
While my personal case is obviously important to me (it is my life after all), it's important to me in a more general sense -- a lot of people are talking on this site about various ways to fix the world or make it better, yet they're often not members of the groups who've had to pay the costs (through exploitation, marginalization or just by being subject to some society-wide bias against them) to get it to where it is now.
I'm both transgendered and diagnosed with depression, and I've had good luck getting insured via Rudi Hoffman. I don't recall what the name of the insurance company was, and I haven't heard the final OK since the medical examination, but I don't foresee any difficulties. I was warned they'll most likely put me down on male rates (feh) despite being legally female, but I can deal with that even if I don't like it.
Same broker. Did you mention the depression to him explicitly?
If you don't mind me asking - how old are you and how much money do you typically save a year?
Bad assumption, but I'll answer.
I am 28, long-term unemployed, cannot get a bank account due to issues years ago, and am living on disability payments, now with the support of my domestic partner (which is the main reason my situation isn't actually desperate any longer). We have to keep our finances pretty separate or my income (~7k a year, wholly inadequate to live on by myself anyplace where I could actually do so) goes away.
I keep a budget, I'm pragmatic and savvy enough to make sure our separate finances on paper don't unduly restrict us from living our lives as necessary, but I can't remember the last time I made it to the end of the month with money left over from my benefits check. Sometimes if I'm having a very good month, I'll not need to use my food stamps balance for that cycle, meaning it's there when I need extra later.
And to stave off questions about how I could afford cryonics on this level of income: life insurance can fall within a nice little window of 50 dollars or less, which could plausibly be taken out of my leisure and clothing budgets (it doesn't consume all of them, but those are the only places in the budget with much wiggle room). Maintaining a membership with the Cryonics Institute that depends on a beneficiary payout of that insurance is something like 120 dollars a year - even I can find a way to set that aside.
EDIT: this comment was made when I was in a not-too-reasonable frame of mind, and I'm over it.
Is teaching, learning, studying rationality valuable?
Not as a bridge to other disciplines, or a way to meet cool people. I mean, is the subject matter itself valuable as a discipline in your opinion? Is there enough to this? Is there anything here worth proselytizing?
I'm starting to doubt that. "Here, let me show you how to think more clearly" seems like an insult to anyone's intelligence. I don't think there's any sense teaching a competent adult how to change his or her habits of thought. Can you imagine a perfectly competent person -- say, a science student -- who hasn't heard of "rationalism" in our sense of the word, finding such instruction appealing? I really can't.
Of course I'm starting to doubt the value (to myself) of thinking clearly at all.
You're conflating two things here: whether rationality is valuable to study, and whether rationality is easy to proselytize.
My own experience is that it's been very valuable for me to study the material on Less Wrong - I've been improving my life lately in ways I'd given up on before, I'm allocating my altruistic impulses more efficiently (even the small fraction I give to VillageReach is doing more good than all of the charity I practiced before last year), and I now have a genuine understanding (from several perspectives) of why atheism isn't the end of truth/meaning/morals. These are all incredibly valuable, IMO.
As for proselytizing 'rationality' in real life, I haven't found a great way yet, so I don't do it directly. Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.
This phrase jumped out in my mind as "shiny awesome suggestion!" I guess in a way it's what I've been trying to do for a while, since I found out early, when learning how to make friends, that most people and especially most girls don't seem to like being instructed on living their life. ("Girls don't want solutions to their problems," my dad quotes from a book about the male versus the female brain, "they want empathy, and they'll get pissed off if you try to give them solutions instead.")
The main problem is that most of my social circle wouldn't find LW interesting, at least not in its current format. Including a lot of people who I thought would benefit hugely from some parts, especially Alicorn's posts on luminosity. (I know, for example, that my younger sister is absolutely fascinated by people, and loves it when I talk neuroscience with her. I would never tell her to go read a neuroscience textbook, and probably not a pop science book either. Book learning just isn't her thing.)
Depending on what you mean by 'format', you might be able to direct those people to the specific articles you think they'd benefit from, or even pick out particular snippets to talk to them about (in a 'hey, isn't this a neat thing' sense, not a 'you should learn this' sense).
"Pick out particular snippets" seems to work quite well. If something in the topic of conversation tags, in my mind, to something I read on LessWrong, I usually bring it up and add it to the conversation, and my friends usually find it neat. But except with a few select people (and I know exactly who they are) posting an article on their facebook wall and writing "this is really cool!" doesn't lead to the article actually being read. Or at least they don't tell me about reading it.
If facebook is like twitter in that regard, I mostly wouldn't expect you to get feedback about an article having been read - but I'd also not expect an especially high probability that the intended person actually read it, either. What I meant was more along the lines of emailing/IMing them individually with the relevant link. (Obviously this doesn't work too well if you know a whole lot of people who you think should read a particular article. I can't advise about that situation - my social circle is too small for me to run into it.)
I, uh, just did that, and received this reply half an hour later:
I think that counts as a success.
Where's that coming from, then?
Well, there's been some talk about organizing a meetup group in my area, and I'm not really comfortable with that.
Are you not comfortable with that happening at all, or not comfortable with being involved in one?
What are your concerns - wasting your time, being perceived as belonging to a "weird" group, being drawn into a group process that is a net negative value to you?
I realize I'm not answering your original question. I'm still thinking about that one.
I'm not comfortable with it existing. I think it's not useful.
I'm more than a little surprised to see you say this, given your past writings on the subject - if asked I would certainly have guessed that your reply to your own question would have been "yes, of course".
I'm curious to know more, if you're comfortable saying more. Not sure what to say otherwise.
People with a common interest meeting up seems natural enough. I have reservations about normativism with respect to ways of thinking, but it does seem to me that what we are learning here is worthwhile in and of itself: because it is about finding out exactly what we are, and because - just like a zebra - what we are is something rare and peculiar and fascinating.
Well, if there are other people who feel that way, they're free to meet up to share that interest.
My serious answer: I'm not sure there's a well-defined, cumulative, discipline-like body of knowledge in the LessWrong memeplex. I don't know how it could be presented to an intelligent outsider who's never heard of it. I don't know whether it could be presented in a way that makes us look good.
My not-so-serious answer: a lot of the time I just don't care any more.
It sounds to me like you might be in some kind of depression or low-enthusiasm state. I don't hear a coherent critique in these comments, so much as a general sense of "boo 'rationality'/LW".
Contrast:
and
This feels inconsistent; as if you had been caught giving a non-true rejection.
Yesterday I spoke with my doctor about skirting around the FDA's not having approved a drug that may be approved in Europe first (it may be approved in the US first). I explained that one first-world safety organization's imprimatur is good enough for me until the FDA gives a verdict, and that harm from taking a medicine is not qualitatively different from harm from not taking a medicine.
We also discussed a clinical trial of a new drug, and I had to beat him with a stick until he abandoned "I have absolutely no idea at all if it will be better for you or not". I explained that abstractly, a 50% chance of being on a placebo and a 50% chance of being on a medicine with a 50% chance of working was better than assuredly taking a medicine with a 20% chance of working, and that he was able to give a best guess about the chances of it working.
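The expected-value comparison above can be checked in a couple of lines (a sketch; the 50% and 20% figures are just the ones quoted in this comment):

```python
# Trial arm: 50% chance of placebo (no benefit),
# 50% chance of an active drug with a 50% chance of working.
p_trial = 0.5 * 0.0 + 0.5 * 0.5

# Alternative: assuredly taking an established medicine
# with a 20% chance of working.
p_established = 0.20

print(p_trial)                    # overall chance the trial arm works
print(p_trial > p_established)    # the trial is the better bet
```

So the trial gives a 25% overall chance of benefit versus 20% for the sure medicine, which is the abstract point being made to the doctor.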
In practice, there are other factors involved; in this case it's better to try the established medicine first and just see whether it works, as part of exploration before exploitation.
This is serious stuff.
Better yet, if you aren't feeling like being altruistic, you go on the trial, then test the drug you are given to see if it is the active substance. If not, you tell the trial folks that placebos are for pussies and go ahead and find either an alternate source of the drug or the next best thing you can get your hands on. It isn't your responsibility to be a control subject unless you choose to be!
Downvoted for encouraging people to screw over other people by backing out of their agreements... What would happen to tests if every trial patient tested their medicine to see if it's a placebo? Don't you believe there's value in having control groups in medical testing?
Upvoted comment and parent.
Downvoted for actively polluting the epistemic belief pool for the purpose of a shaming attempt. I here refer especially (but not only) to the rhetorical question:
I obviously believe there's a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.
My comment observes that sacrificing one's own (expected) health for the furthering of human knowledge is an act of altruism. Your comment actively and directly sabotages human knowledge for your own political ends. The latter I consider inexcusable and the former is both true and necessary if you wish to encourage people who are actually capable of strategic thinking on their own to be altruistic.
You don't persuade rationalists to conform to your will by telling them A is made of fire or by trying to fool them into believing A, B and C don't even exist. That's how you persuade suckers.
Not so; there exists altruism that is worthless or even of negative value. An all-altruistic CooperateBot is what allows DefectBots to thrive. Someone can altruistically spend all his time praying to imaginary deities for the salvation of mankind, and his prayers would still be useless. To think that altruism is about value is a map-territory confusion.
Your comment doesn't just say it's altruistic. It also tells him that if he doesn't feel like being an altruist, that he should tell people that "placebos are for pussies". Perhaps you were just joking when you effectively told him to insult altruists, and I didn't get it.
Either way, if he defected in this manner, he'd not just be partially sabotaging the experiment he signed up for, he'd probably also be sabotaging his future chances of being accepted into any other trial. I know that if I were a doctor, I would be less likely to accept you in a medical trial.
Um, what? I don't understand. What deceit do you believe I committed in my above comment?
Let me see if I can summarize this thread:
Wedrifid made a strategic observation that if a person cares more about their own health than the integrity of the trial, it makes sense to find out whether they are on placebo and, if they are, leave the trial and seek other solutions. He did this with somewhat characteristic colorful language.
You then voted him down for expressing values you disagree with. This is a use of downvoting that a lot of people here frown on, myself included (though I don't downvote people for explaining their reasons for downvoting, even if those reasons are bad). Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.
Of course, he wasn't actually recommending the sabotage of controlled trials-- though his first comment was sufficiently ambiguous that I wouldn't fault someone for not getting it. Luckily, he clarified this point for you in his reply. Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?
To me it didn't feel like an observation, it felt like a very strong recommendation, given phrases like "Better yet", "tell them placebos are for pussies", "It isn't your responsibility!", etc.
Eh, not really. It seemed shortsighted -- it doesn't really give an alternate way of procuring this medicine, it has the possibility of slightly delaying the actual medicine from going on the market (e.g. if other test subjects follow the example of seeking to learn whether they're on a placebo and also abandon the testing, thus forcing the thing to be restarted from scratch), and if a future medicine goes on trial, what doctor will accept test subjects who are known to have defected in this way?
Primarily I fail to understand what deceit he's accusing me of when he compares my own attitude to claiming that "A is made of fire" (in context effectively meaning that I said defectors would be punished, would posthumously go to hell; that I somehow lied about the repercussions of defection).
He attacks me for committing a crime against knowledge -- when of course that was what I thought he was committing, when I thought he was seeking to encourage control subjects to find out if they're on a placebo and quit the testing. Because you know -- testing = search for knowledge, sabotaging testing = crime against knowledge.
Basically I can understand how I may have misunderstood him --- but I don't understand in what way he is misunderstanding me.
OK, see, I thought this might happen. I love your first comment, much more than ArisKatsaris', but despite it having some problems ArisKatsaris is referring to, not because it is perfect. I only upvoted his comment so I could honestly declare that I had upvoted both of your comments, as I thought that might defuse the situation - to say I appreciated both replies.
Don't get me wrong - I don't really mind ArisKatsaris' comment and I don't think it's as harmful as you seem to, but I upvoted it for the honesty reason.
You just committed an escalation of the same order of magnitude that he did, or more, as his statements were phrased as questions and were far less accusatory. I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.
A very slightly harmful instance of a phenomenon that is moderately bad when done on things that matter.
Where 'this soon' means the end. There is nothing more to say, at least in this context. (As a secondary consideration, my general policy is that conversations which begin with shaming terminate with an error condition immediately.) I do, however, now have inspiration for a post on the purely practical downsides of suppressing consideration of rational alternatives in situations similar to that discussed by the post.
EDIT: No, not post. It is an open thread comment by yourself that could have been a discussion post!
I'm not unsympathetic.
Compare and contrast my (September 7th, 2011) approach to yours (September 7th, 2011), I guess.
ADBOC, it didn't have to be.
It sort of soon became one.
Lessdazed is describing quite a messy situation. Let me split out various subcases.
First is the situation with only one approval authority running randomised controlled trials on medicines. These trials are usually in three phases. Phase I on healthy volunteers to check for toxicity and metabolites. Phase II on sufferers to get an idea of the dose needed to affect the course of the illness. Phase III to prove that the therapeutic protocol established in Phase II actually works.
I have health problems of my own and have fancied joining a Phase III trial for early access to the latest drugs. Reading around, for example, it seems to be routine for drugs to fail in Phase III. Outcomes seem to be vaguely along the lines of: three in ten are harmful, six in ten are useless, one in ten is beneficial. So the odds that a new drug will help, given that it was the one out of ten that passed Phase III, are good, while the odds that a new drug will help, given that it is about to start Phase III, are bad.
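The difference between "entering Phase III" and "passed Phase III" is just a Bayes update on the base rates quoted above. A minimal sketch, where the 1-in-10 prior comes from the comment but the pass probabilities are invented purely for illustration:

```python
# Base rates quoted above: 1/10 beneficial, 9/10 useless or harmful.
p_beneficial = 0.1
p_not_beneficial = 0.9

# Hypothetical pass rates (assumptions, not from the comment):
# a beneficial drug usually passes Phase III, others usually fail.
p_pass_given_beneficial = 0.9
p_pass_given_not = 0.1

# Bayes' rule: P(beneficial | passed Phase III)
p_pass = (p_pass_given_beneficial * p_beneficial
          + p_pass_given_not * p_not_beneficial)
posterior = p_pass_given_beneficial * p_beneficial / p_pass

print(posterior)  # 0.5 under these assumed pass rates
```

Under those assumed numbers, passing Phase III raises the chance the drug helps from 10% to 50%, while a drug merely entering Phase III is still at the unfavorable 10% prior, which is the asymmetry the comment is pointing at.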
Joining a Phase III trial is a genuinely altruistic act by which the joiner accepts bad odds for himself to help discover valuable information for the greater good.
I was confused by the idea of joining a Phase III trial and unblinding it by testing the pill to see whether one had been assigned to the treatment arm of the study or the control arm. Since the drug is more likely to be harmful than to be beneficial, making sure that you get it is playing against the odds!
Second, Lessdazed seemed to be considering the situation in which EMA has approved a drug and the FDA is blocking it in America, simply as a bureaucratic measure to defend its home turf. If it were really as simple as that, I would say that cheating to get round the bureaucratic obstacles is justified.
However, the great event of my lifetime was man landing on the Moon. NASA was brilliant and later became rubbish. I attribute the change to the Russians dropping out of the space race. In the 1960s, NASA couldn't afford to take bad decisions for political reasons, for fear that the Russians would take the good decision themselves and win the race. The wider moral I have drawn is that big organisations depend on their rivals to keep them honest and functioning.
Third: split decisions with the FDA and the EMA disagreeing, followed by a treat-off to see who was right, strike me as essential. I dread the thought of a single, global medicine agency that could prohibit a drug world wide and never be shown up by approval and successful use in a different jurisdiction.
Hmm, my comment is losing focus. My main point is that joining a Phase III trial is, on average, a sacrifice for the common good.
It's in Phase III.
At some point I was that person. Weren't you?
Doesn't this non-true-believer sort of mentality make you the perfect proponent?
Then we're all doomed. Literally.
You might be reading SarahC as saying that teaching a competent adult to change his or her habits of thought is not possible (if you're not, ignore this comment), but I think she's saying that it's not worthwhile.
A little bit but it varies wildly based on who you are.
Not really.
A kind of uncomfortably funny video about turning yourself bisexual, a topic that's come up a few times here on LW. http://youtu.be/zqv-y5Ys3fg