Open Thread: March 2010
We've had these for a year; I'm sure we all know what to do by now.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (658)
A fascinating article about rationality or the lack thereof as it applied to curing scurvy, and how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm
Wonderful article, thanks. I'm fond of reminders of this type that scientific advances are very seldom as discrete, as irreversible, or as incontrovertible as the myths of science make them out to be.
When you look at the detailed stories of scientific progress you see false starts, blind alleys, half-baked theories that happen by luck to predict phenomena and mostly sound ones that unfortunately fail on key bits of evidence, and a lot of hard work going into sorting it all out (not to mention, often enough, a good dose of luck). The manglish view, if nothing else, strikes me as a good vitamin for people wanting an antidote to the scurvy of overconfidence.
ETA: The article made for a great dinnertime story to my kids. Only one of the three, the oldest (13yo), was familiar with the term "scurvy" - and with the cure as well; both from One Piece. Manga 1 - school 0.
Very interesting. And sobering.
Call for examples
When I posted my case study of an abuse of frequentist statistics, cupholder wrote:
So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.
Some googling around yielded a pdf about a controversial use of Bayes in court. The controversy seems to center around using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.
The following stuff isn't new, but I still find it fascinating:
Reverse-engineering the Seagull
The Mouse and the Rectangle
What's depressing is the vast disconnect between how well marketers understand superstimuli and how poorly everyone else does.
also this: http://www.theonion.com/content/video/new_live_poll_allows_pundits_to
Neat!
How do you introduce your friends to LessWrong?
Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?
Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.
For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of arguments; she later mentioned that she'd even told her mother about it. She also appreciated the concept of confirmation bias. She's started reading LessWrong, but she's not a native English speaker so it's going to be even more difficult than LessWrong already is.
I think of LessWrong from a really, really pragmatic viewpoint: it's like software patches for your brain to eliminate costly bugs. There was a really good illustration in the Allais mini-sequence - that is a literal example of people throwing away their money because they refused to consider how their brain might let them down.
Edit: Related to The Lens That Sees Its Flaws.
It shows you that there is really more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong, but that most people are not even wrong. It tells you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right; it teaches you how to think and to become less wrong. And to do so is in your own self-interest, because it helps you to attain your goals, it helps you to achieve what you want. Thus what you want is to read and participate on LessWrong.
I'm not sure this is what you're doing, but I'm careful not to bring up LessWrong in an actual argument. I don't want arguments for rationality to be enemy soldiers.
Instead, I bring rationalist topics up as an interesting thing I read recently, or as an influence on why I did a certain thing a certain way, or hold a particular view (in a non-argument context). That can lead to a full-fledged pitch for LessWrong, and it's there that I falter; I'm not sure I'm pitching with optimal effectiveness. I don't have a good grasp on what topics are most interesting/accessible to normal (albeit smart) people.
If rationalists were so common that I could just filter people I get close to by whether they're rationalists, I probably would. But I live in Taiwan, and I'm probably the only LessWrong reader in the country. If I want to talk to someone in person about rationality, I have to convert someone first. I like to talk about these topics, since they're frequently on my mind, and because certain conclusions and approaches are huge wins (especially cryonics and reductionism).
The main hurdle, in my experience, is getting people over the biases that cause them to think the future is going to look mostly like the present. If you can get people over this, they do a lot of the remaining work for you.
TL;DR: Help me go less crazy and I'll give you $100 after six months.
I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.
I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).
One-time tricks to do one important thing are also welcome, but I'd offer less.
I'll come out of the shadows (well not really, I'm too ashamed to post this under my normal LW username) and announce that I am, or anyway have been, in more or less the same situation as MixedNuts. Maybe not as severe (there are some important things I can do, at the moment, and I have in the past been much worse than I am now -- I would actually appear externally to be keeping up with my life at this exact moment, though that may come crashing down before too long), but generally speaking almost everything MixedNuts says rings true to me. I don't live with anyone or have any nearby family, so that adds some extra difficulty.
Right now, as I said, this is actually a relatively good moment, I've got some interesting projects to work on that are currently helping me get out of bed. But I know myself too well to assume that this will last. Plus, I'm way behind on all kinds of other things I'm supposed to be doing (or already have done).
I'm not offering any money, but I'd be interested to see if anyone is interested in conversing with me about this (whether here or by PM). Otherwise, my reason for posting this comment was to add some evidence that this may be a common problem (even afflicting people you wouldn't necessarily guess suffered from it).
I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward by other people. That's how I was able to do OB/LW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.
I have limited mental resources myself, and am sometimes busy, but I'm generally willing to (and find it enjoyable to) talk to people about this kind of thing via IM. I'm fairly easily findable on Skype (put a dot between my first and last names; text only, please), AIM (same name as here), GChat (same name at gmail dot com), and MSN (same name at hotmail dot com). The google email is the one I pay attention to, but I'm not so great at responding to email unless it has obvious questions in it for me to answer. It's also noteworthy that my sleep schedule is quite random - it is worth checking to see if I'm awake at 5am if you want to, but also don't assume that just because it's daytime I'll be awake.
Hope this doesn't turn into a free-therapy bandwagon, but I have a lot of the same issues as MixedNuts and anonymous259, so if anyone has any tips or other insights they'd like to share with me, that would be delightful.
My main problem seems to be that, if I don't find something thrilling or fascinating, and it requires much mental or physical effort, I don't do it, even if I know I need to do it, even if I really want to do it. Immediate rewards and punishments help very little (sometimes they actually make things worse, if the task requires a lot of thought or creativity). There are sometimes exceptions when the boring+mentally/physically-demanding task is to help someone, but that's only when the person is actually relying on me for something, not just imposing an artificial expectation, and it usually only works if it's someone I know and care about (except myself).
A related problem is that I rarely find anything thrilling or fascinating (enough to make me actually do it, at least) for very long. In my room I have stacks of books that I've only read a few chapters into; on my computer I have probably hundreds of unfinished (or barely started) programs and essays and designs, and countless others that only exist in my mind; on my academic transcripts are many 'W's and 'F's, not because the classes were difficult (a more self-controlled me would have breezed through them), but because I stopped being interested halfway through. So even when something starts out intrinsically motivating for me, the momentum usually doesn't last.
Like anon259, I can't offer any money — this sort of problem really gets in the way of wanting/finding/keeping a job — but drop me a PM if gratitude motivates you. :)
To some extent, the purpose of LessWrong is to fix problems with ourselves, and the distinction between errors in reasoning and errors in action is subtle enough that I would hesitate to declare this on- or off-topic.
It should be mentioned, however, that the population of LessWrongers-asking-for-advice is unlikely to be representative of the population of LessWrongers, and even less so of the population of agents-LessWrongers-care-about. This is likely to make generalizations drawn from observations here narrower in scope than we might like.
PM me with your IM contact info and I'll try to help you too.
Look, I'll do it for free too!
After reading this thread, I can only offer one piece of advice:
You need to see a medical doctor, and fast. Your problems are clearly more serious than anything we can deal with here. If you have to, call 911 and have them carry you off in an ambulance.
This is just a guess, and I'm not interested in your money, but I think that you probably have a health problem. I'd suggest you check out the book "The Mood Cure" by Julia Ross, which has some very good information on supplementation. Offhand, you sound like the author's profile for low-in-catecholamines, and might benefit very quickly from fairly low doses of certain amino acids such as L-tyrosine.
I strongly recommend reading the book, though, as there are quite a few caveats regarding self-supplementation like this. Using too high a dose can be as problematic as too low, and times of day matter as well. So does consistent management. When you're low on something, taking what you need can make you feel euphoric, but when you have the right dose, you won't notice anything by taking some. (Instead, you'll notice if you go off it for a few days, and find mood/energy going back to pre-supplementation levels.)
Anyway... don't know if it'll work for you, but I do suggest you try it. (And the same recommendation goes for anyone else who's experiencing a chronic mood or energy issue that's not specific to a particular task/subject/environment.)
Buying a (specific) book isn't possible right now, but may help later; thanks. I took the questionnaire on her website and apparently everything is wrong with me, which makes me doubt her tests' discriminating power.
It's a marketing tool, not a test.
FWIW, I don't have "everything" wrong with me; I had only two, and my wife scores on two, with only one the same between the two of us.
For what it's worth:
A few years back I was suffering from some pretty severe health problems. The major manifestations were cognitive and mood related. Often when I was saying a sentence I would become overwhelmed halfway through and would have to consciously force myself to finish what I was saying.
Long story short, I started treating my diet like a controlled experiment and, after a few years of trial and error, have come out feeling better than I can ever remember. If you're going to try self experimentation the three things I recommend most highly to ease the analysis process are:
I'm curious. What foods (if you don't mind me asking) did you find had such a powerful effect?
I expanded upon it here.
What has helped me the most, by far, is cutting out soy, dairy, and all processed foods (there are some processed foods I feel fine eating, but the analysis to figure out which ones proved too costly for the small benefit of being able to occasionally eat unhealthy foods).
Also, don't offer money. External rewards tend to crowd out intrinsic motivation. By offering $100, you are attaching a specific worth to the request, and undermining our own intrinsic motivations to help. Since allowing a reward to disincentivize a behavior is irrational, I'm curious how much effect it has on the LessWrong crowd; regardless, I would be surprised if anyone here tried to collect, so I don't see the point.
My understanding is that the mechanism by which this works lets you sidestep it pretty neatly by also doing basically similar things for free. That way you can credibly tell yourself that you would do it for free, and being paid is unrelated.
MixedNuts, I'm in a similar position, though perhaps less severely, and more intermittently. I've been diagnosed with bipolar, though I've had difficulty taking my meds. At this point in my life, I'm being supported almost entirely by a network of family, friends, and associates that is working hard to help me be a real person and getting very little in return.
I have one book that has helped me tremendously, "The Depression Cure", by Dr. Ilardi. He claims that depression-spectrum disorders are primarily caused by lifestyle, and that almost everyone can benefit from simple changes. As with any book (especially a self-help book), it ought to be read skeptically, and it doesn't introduce any ideas that can't be found in modern psychological research. Rather, it aggregates what in Ilardi's opinion are the most important: exercise works more effectively than SSRIs, etc.
If you really want a copy, and you really can't get one yourself, I will send you one if you can send me your address. It helped me that much. Which is not to say that I am problem free. Still, a 40% reduction in problem behavior, after 6 months, with increasing rather than decreasing results, is a huge deal for me.
Rather, I want to give you your "one trick". It is the easiest rather than the most effective, but it has an immediate effect, which helped me implement the others. Morning sunlight. I don't know where you live; I live in a place where I can comfortably sit outside in the morning even this time of year. Get up as soon as you can after waking, and wake as early in the day as you would ideally like to. Walk around, sit, or lie down in the brightest area outside for half an hour. You can go read studies on why this works, or studies that debate its efficacy, but for me it helps.
I realize that your post didn't say anything about depression; just lack of willpower. For me, they were tightly intertwined, and they might not be for you. Please try it anyway.
Thanks. I'll try the morning light thing; from experience it seems to help somewhat, but I can't keep it going for long.
If nothing else works, I'll ask you for the book. I'm skeptical since they tend to recommend unbootstrapable things such as exercise, but it could help.
There is one boot process that works well, which is to contract an overseer. For me, it was my father. I felt embarrassed to be a grown adult asking for his father's oversight, but it helped when I was at my worst. Now, I have him, my roommate, two ex-girlfriends, and my advisor who are all concerned about me and check up with me on a regular basis. I can be honest with them, and if I've stopped taking care of myself, they'll call or even come over to drag me out of bed, feed me, and/or take me for a run.
I have periodically been an immense burden on the people who love me. However, I eventually came to the realization that being miserable, useless, and isolated was harder and more unpleasant for them than being let in on what was wrong with me and being asked to help. I've been a net negative to this world, but for some reason people still care for me, and as long as they do, my best course of action seems to be to let them try to help me. I suspect you have a set of people who would likewise prefer to help you than to watch you suffer.
Feeling less helpless was nearly as good for them as for me. I have a debt to them that I am continuing to increase, because I'm still not healthy or self-sufficient. I don't know if I can ever repay it.
I have had and sometimes still struggle with similar problems, but there is something that sometimes has helped me:
If there's something you need to do, try to do something with it, however little, as soon after you get up as possible. The example I'm going to use is studying, but you can generalize from it.
Pretty much as soon as you get up, BEFORE checking email or anything like that, study (or whatever it is you need to do) a bit. And keep doing it until you feel your mental energy "running out"... but then, any time later in the day that you feel a smidgen of motivation, don't let go of it: act on it immediately and keep going.
But starting the day by doing some of it, however little, seemed to help. I think with me the psychology was sort of "this is the sort of day when I'm working on this", so once I start on it, it's as if I'm "allowed" to periodically keep doing stuff with it during the day.
Anyways, as I said, this has sometimes helped me, so...
Order modafinil online. Take it, using 'count backwards then swallow the pill' if necessary. Then, use the temporary boost in mental energy to call a shrink.
I have found this useful at times.
I'm willing to try to help you but I think I'd be substantially more effective in real time. If you would like to IM, send me your contact info in a private message.
The number one piece of advice that I can give is see a doctor. Not a psychologist or psychiatrist - just a medical doctor. Tell them your main symptoms (low energy, difficulty focusing, panic attacks) and have them run some tests. Those types of problems can have physical, medical causes (including conditions involving the thyroid or blood sugar - hyperthyroidism & hypoglycemia). If a medical problem is a big part of what's happening, you need to get it taken care of.
If you're having trouble getting yourself to the doctor, then you need to find a way to do it. Can you ask someone for help? Would a family member help you set up a doctor's appointment and help get you there? A friend? You might even be able to find someone on Less Wrong who lives near you and could help.
My second and third suggestions would be to find a friend or family member who can give you more support and help (talking about your issues, driving you to appointments, etc.) and to start seeing a therapist again (and find a good one - someone who uses cognitive-behavioral therapy).
This is technically a good idea. What counts as "my main symptoms", though? The ones that make life most difficult? The ones that occur most often? The most visible ones to others? To me?
You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then to help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics (like these) can help make that clearer. And be sure to describe anything that seems like it could be physiological (the three that stuck out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others).
The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.
Do you take fish oil supplements or equivalent? Can't hurt to try; fish oil is recommended for ADHD and very well may repair some of the brain damage that causes mental illness.
http://news.ycombinator.com/item?id=1093866
What do you do when you aren't doing anything?
EDIT: More questions as you answer these questions. Too many questions at once is too much effort. I am taking you dead seriously so please don't be offended if I severely underestimate your ability.
I keep doing something that doesn't require much effort, out of inertia; typically, reading, browsing the web, listening to the radio, washing a dish. Or I just sit or lie there letting my mind wander and periodically trying to get myself to start doing something. If I'm trying to do something that requires thinking (typically homework) when my brain stops working, I keep doing it but I can't make much progress.
Possible solutions:
Increase the amount of effort it takes to do the low-effort things you are trying to avoid. For instance, it isn't terribly hard to set your internet on a timer so it automatically shuts off from 1 - 3pm. While it isn't terribly hard to turn it back on, if you can scrounge up the effort to turn it back on you may be able to put that effort into something else.
Decrease the amount of effort it takes to do the high-effort things you are trying to accomplish. Paying bills, for instance, can be done online and streamlined. Family and friends can help tremendously in this area.
Increase the amount of effort it takes to avoid doing the things you are trying to accomplish. If you want to make it to an important meeting, try to get a friend to pick you up and drive you all the way over there.
These are somewhat complicated and broad categories and I don't know how much they would help.
I've tried all that (they're on LW already).
That wouldn't work. I do these things by default, because I can't do the things I want. I don't even have a problem with standard akrasia anymore, because I immediately act on any impulse I have to do something, given how rare they are. Also, I can expend willpower to stop doing something, whereas "I need to do this but I can't" seems impervious to it, at least in the amounts I have.
There are plenty of things to be done here, but they're too hard to bootstrap. The easy ones helped somewhat.
That helped me most. In the grey area between things I can do and things I can't (currently, cleaning, homework, most phone calls), pressure helps. But no amount of ass-kicking has made me do the things I've been trying to do for a while.
Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).
One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.
Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.
It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).
I'd start by asking whether the unknowns of the problem are primarily social and psychological, or whether they include things that the human intuition doesn't handle well (like large numbers).
If it's the former, then good news! This is basically the sort of problem your frontal cortex is optimized to solve. In fact, you probably unconsciously know what the best choice is already, and you might be feeling conflicted so as to preserve your conscious image of yourself (since you'll probably have to trade off conscious values in such a choice, which we're never happy to do).
In such a case, you can speed up the process substantially by finding some way of "letting the choice be made for you" and thus absolving you of so much responsibility. I actually like to flip a coin when I've thought for a while and am feeling conflicted. If I like the way it lands, then I do that. If I don't like the way it lands, well, I have my answer then, and in that case I can just disobey the coin!
(I've realized that one element of the historical success of divination, astrology, and all other vague soothsaying is that the seeker can interpret a vague omen as telling them what they wanted to hear— thus giving divine sanction to it, and removing any human responsibility. By thus revealing one's wants and giving one permission to seek them, these superstitions may have actually helped people make better decisions throughout history! That doesn't mean it needs the superstitious bits in order to work, though.)
If it's the latter case, though, you probably need good specific advice from a rational friend. Actually, that practically never hurts.
A few principles that can help in such cases (major decision, very little direct data):
...I don't suppose you can tell us what? I expect that if you could, you would have said, but thought I'd ask. It's difficult to work with this little.
I could toss around advice like "A lot of Major Life Decisions consist of deciding which of two high standards you should hold yourself to", but it's just a shot in the dark at this point.
I am not that far in the sequences, but these are posts I would expect to come into play during Major Life Decisions. These are ordered by my perceived relevance and accompanied with a cool quote. (The quotes are not replacements for the whole article, however. If the connection isn't obvious feel free to skim the article again.)
Hope that helps.
Based on those two lucid observations, I'd say you're doing well so far.
There are some principles I used to weigh major life decisions. I'm not sure they are "rationalist" principles; I don't much care. They've turned out well for me.
Here's one of them: "having one option is called a trap; having two options is a dilemma; three or more is truly a choice". Think about the terms of your decision and generate as many different options as you can. Not necessarily a list of final choices, but rather a list of candidate choices, or even of choice-components.
If you could wave a magic wand and have whatever you wanted, what would be at the top of your list? (This is a mind-trick to improve awareness of your desires, or "utility function" if you want to use that term.) What options, irrespective of their downsides, give you those results?
Given a more complete list you can use the good old Benjamin Franklin method of listing pros and cons of each choice. Often this first step of option generation turns out sufficient to get you unstuck anyway.
Having two options is a dilemma, having three options is a trilemma, having four options is a tetralemma, having five options is a pentalemma...
:)
A few more than five is an oligolemma; many more is a polylemma.
Just remembered: I managed not to be stupid on one or two times by asking whether, not why.
I just came out of a tough Major Life Situation myself. The rationality 'tools' I used were mostly directed at forcing myself to be honest with myself, confronting the facts, not privileging certain decisions over others, recognizing when I was becoming emotional (and more importantly recognizing when my emotions were affecting my judgement), tracking my preferred choice over time and noticing correlations with my mood and pertinent events.
Overall, less like decision theory and more like a science: trying to cut away confounding factors to discover my true desire. Of course, sometimes knowing your desires isn't sufficient to take action, but I find that for many personal choices it is (or at least is enough to reduce the decision theory component to something much more manageable).
The dissolving the question mindset has actually served me pretty well as a TA - just bearing in mind the principle that you should determine what led to this particular confused bottom line is useful in correcting it afterwards.
Pigeons can solve Monty Hall (MHD)?
Behind a paywall
But freely available from one of the authors' website.
Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
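For anyone who wants to check why "learning to switch" is the right lesson, here's a quick Monte Carlo sketch (my own illustration, not from the paper) of the Monty Hall game, comparing the stay and switch strategies:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    choice = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

random.seed(0)
trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")  # roughly 1/3 vs 2/3
```

Switching wins about two-thirds of the time, so a trial-and-error learner (pigeon or human) that simply tracks win rates should converge on switching.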
I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to LessWrong. Is there demand?
This was in my drafts folder but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top level post. As such, I am making it a comment here. It also does not answer the question being asked so it probably wouldn't have made the cut even if my last few posts been voted to +20 and promoted... but whatever. :P
Perceived Change
Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards, I cut the deck and continued dealing. This irritated them a great deal: by altering the order of the deck, some players would not receive the cards they were supposed to be dealt. One of the friends happened to be majoring in Mathematics and understood probability as much as anyone else at the table. Even he thought what I did was wrong.
I explained that the cut didn’t matter because everyone still has the same odds of receiving any particular card from the deck. His retort was that it did matter because the card he was going to get is now near the middle of the deck. Instead of that particular random card he will get a different particular random card. As such, I should not have cut the deck.
During the ensuing arguments I found myself constantly presented with the following point: The fact of the game is that he would have received a certain card and now he will receive a different card. Shouldn’t this matter? People seem to hold grudges when someone swaps random chances of an outcome and the swap changes who wins.
The problem with this objection is illustrated if I secretly cut the cards. If they had no reason to believe I cut the deck, they wouldn't complain. Furthermore, it is completely impossible to perceive the change by studying the before and after states of the probabilities. To put it more concretely: if I put the cards under the table and threatened to cut them, my friends would have no way of knowing whether or not I actually did. This implies that the change itself is not the sole cause of complaint. The change must be accompanied by the knowledge that something was changed.
The big catch is that the change itself isn’t actually necessary at all. If I simply tell my friends that I cut the cards when they were not looking they will be just as upset. They have perceived a change in the situation. In reality, every card is in exactly the same position and they will be dealt what they think they should have been dealt. But now even that has changed. Now they actually think the exact opposite. Even though nothing about the deck has been changed, they now think that the cards being dealt to them are the wrong cards.
What is this? There has to be some label for this, but I don’t know what it is or what the next step in this observation should be. Something is seriously, obviously wrong. What is it?
Edit to add:
The underlying problem here is not that they were worried about me cheating. The specific scenario and the arguments that followed from that scenario were such that cheating wasn't really a valid excuse for their objections.
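(For what it's worth, the underlying mathematical claim, that a mid-deal cut leaves each player's chance of receiving any particular card unchanged, is easy to verify by simulation. A rough sketch, with the number of cards already dealt chosen arbitrarily:)

```python
import random
from collections import Counter

def next_card(cut: bool) -> int:
    """Shuffle, deal ten cards, optionally cut the remainder,
    and return the card the next player would receive."""
    deck = list(range(52))
    random.shuffle(deck)
    del deck[:10]                     # cards already dealt
    if cut:
        k = random.randrange(1, len(deck))
        deck = deck[k:] + deck[:k]    # cut the remaining deck
    return deck[0]

trials = 100_000
with_cut = Counter(next_card(True) for _ in range(trials))
without_cut = Counter(next_card(False) for _ in range(trials))
# Both distributions come out approximately uniform over all 52 cards.
```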
To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.
No, this wasn't their true objection. I have a near flawless reputation for being honest and the arguments that ensued had nothing to do with stacking the deck. If I were a dispassionate third party dealing the game they would have objected just as strongly.
I initially had a second example as such:
It seems as though some personal attachment is created with the specific random object. Once that object is "taken," there is an associated sense of loss.
Your reputation doesn't matter. Once the rules are changed, you are on a slippery slope of changing rules. The game slowly ceases to be poker.
When I am playing chess, I demand that White moves first. When I find myself playing Black, knowing that my opponent had White the last game and it is now my turn to make the first move, I would rather change places or rotate the chessboard than make the first move with Black, although it would not change my chances of winning. (I don't remember the standard openings, so I wouldn't be confused by the change of colors. And even if I were, the same would hold for my opponent.)
Rules are rules in order to be respected. They are often quite arbitrary, but you shouldn't change any arbitrary rule during the game without the prior consent of the others, even if the change provably has no effect on the winning odds.
I think this is a fairly useful heuristic. Usually, when a player tries to change the rules, he has some reason, and usually the reason is to increase his own chances of winning. Even if your opponent doesn't see any profit you could get from changing the rules, he may suppose that there is one. Maybe you somehow remember that there are better or worse cards in the middle of the pack. Or you are trying to test their attention. Or you want to make more important changes of rules later, and wanted to have a precedent for doing so. These possibilities are quite realistic in gambling, and therefore it is considered bad manners to change the rules in any way during the game.
I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments.
A summary:
It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride?
One more thing of note: They argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered.
Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?
The System 1 suspicion-detector would be less effective if System 2 could override it, since System 2 can be manipulated.
(Another possibility may be loss aversion, making any change unattractive that guarantees a different outcome without changing the expected value. (I see hugh already mentioned this.) A third, seemingly less likely, possibility is intuitive 'belief' in the agency of the cards, which is somehow being undesirably thwarted by changing the ritual.)
I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.
Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.
(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)
EDIT: Wow, this turned into a ramble. I didn't have time to proof it so I apologize if it doesn't make sense.
Okay, yeah, that makes sense. My instinct is pointing me in the other direction, mainly because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck.
(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)
Any pseudo-random event where people can (a) predict the undisclosed particular random object and (b) someone can voluntarily preempt that prediction and change the result tends to elicit the same behavior.
I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story:
Granted, there are a handful of obvious holes in this particular story. The list includes:
More stories like this have taught me to never muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for one with an equal chance will get extremely depressed because they actually "had a shot at winning." These people could completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they shouldn't have traded tickets.
People do this all the time involving things like when they left for work. Decades ago, my mother-in-law put her sister on a bus and the sister died when the bus crashed. "What if?" has dogged her ever since. The random chance of that particular bus crashing on that particular day has become connected, in her mind, with her completely independent choice to put her sister on the bus. While the events are mathematically independent, that doesn't change the sense that her choice mattered. For some reason, people take this mattering and do things with it that make no sense.
This topic can branch out into really weird places when viewed this way. The classic problem of someone holding 10 people hostage and telling you to kill 1 or all 10 die matches the pattern, with a moral choice instead of random chance. When asking if it is more moral to kill 1 or let the 10 die, people will argue that refusing to kill an innocent will result in 9 more people dying than needed. The decision matters, and this mattering reflects on the moral value of each choice. Whether this is correct or not seems to be under debate, and it is only loosely relevant to this particular topic. I am eagerly looking for the eventual answer to the question, "Are these events related?" But to get there I need to understand the simple scenario, which is the one presented by my original comment.
I am having trouble understanding this. Can you say it again with different words?
Have no fear - your comment is clear.
I'll give you that one, with a caveat: if an algorithm consistently outputs correct data rather than incorrect, it's a heuristic, not a bias. They lose points either way for failing to provide valid support for their complaint.
Yes, those anecdotes constitute the sort of data I requested - your hypothesis now outranks mine in my sorting.
When I read your initial comment, I felt that you had proposed an overly complicated explanation based on the amount of evidence you presented for it. I felt so based on the fact that I could immediately arrive at a simpler (and more plausible by my prior) explanation which your evidence did not refute. It is impressive, although not necessary, when you can anticipate my plausible hypothesis and present falsifying evidence; it is sufficient, as you have done, to test both hypotheses fairly against additional data when additional hypotheses appear.
Ah, okay. That makes more sense. I am still experimenting with the amount of predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem.
But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?
Something like "ownership" seems right, as well as the loss aversion issue. Somehow, this seemingly-irrational behavior seems perfectly natural to me (and I'm familiar with similar complaints about the order of cards coming out). If you look at it from the standpoint of causality and counterfactuals, I think it will snap into place...
Suppose that Tim was waiting for the king of hearts to complete his royal flush, and was about to be dealt that card. Then, you cut the deck, putting the king of hearts in the middle of the deck. Therefore, you caused him to not get the king of hearts; if your cutting of the deck were surgically removed, he would have had a royal flush.
Presumably, your rejoinder would be that this scenario is just as likely as the one where he would not have gotten the king of hearts but your cutting of the deck gave it to him. But note that in this situation the other players have just as much reason to complain that you caused Tim to win!
Of course, any of them is as likely to have been benefited or hurt by this cut, assuming a uniform distribution of cards, and shuffling is not more or less "random" than shuffling plus cutting.
A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. The frequentist would charitably assume that the shuffle guarantees a uniform distribution, so that the cards each have the same probability of appearing on any particular draw. The Bayesian will symmetrically note that shuffling makes everyone involved assign the same probability to each card appearing on any particular draw, due to their ignorance of which ones are more likely. But this only works because everyone involved grants that shuffling has this property. You could imagine someone who paid attention to the shuffle and knew exactly which card was going to come up, and then was duly annoyed when you unexpectedly cut the deck. Given that such a person is possible in principle, there actually is a fact about which card each person 'would have' gotten under a standard method, and so you really did change something by cutting the deck.
Yep. This really is a digression which is why I hadn't brought up another interesting example with the same group of friends:
We didn't do any tests on the subject because we really just wanted the annoying kid to stop dealing weird. But, now that I think about it, it should be relatively easy to test...
Also related, I have learned a few magic tricks in my time. I understand that shuffling is a tricksy business. Plenty of more amusing stories are lurking about. This one is marginally related:
This example is a counterpoint to the original. Here is someone claiming that it doesn't matter when the math says it most certainly does. The aforementioned cheater-heuristic would have prevented this player from doing something Bad. I honestly have no idea if he was just lying to us or was completely clueless but I couldn't help but be extremely suspicious when he ended up winning first place later that night.
On a tangent, myself and friends always pick the initial draw of cards using no particular method when playing Munchkin, to emphasize that we aren't supposed to be taking this very seriously. I favor snatching a card off the deck just as someone else was reaching for it.
To modify RobinZ's hypothesis:
Rather than focusing on any Bayesian evidence for cheating, let's think like evolution for a second: how do you want your organism to react when someone else's voluntary action changes who receives a prize? Do you want the organism to react, on a gut level, as if the action could have just as easily swung the balance in their favor as against them? Or do you want them to cry foul if they're in a social position to do so?
Your friends' response could come directly out of that adaptation, whatever rationalizations they make for it afterwards. I'd expect to see the same reaction in experiments with chimps.
I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question.
Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently?
I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution.
As much as, "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why?
I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?
It's a side effect.
Yes, they were being irrational in this case. But the heuristics they were using are there for good reason. Suppose they had money coming to them and you swooped in and took it away before it could reach them; they would be rational to object, right? That's why those heuristics are there. In practice the trigger conditions for these things are not specified with unlimited precision, and pure but interruptible random number generators are not common in real life, so the trigger conditions harmlessly spill over to this case. But the upshot is that they were irrational as a side effect of usually rational heuristics.
So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?
I can understand your answer if the scenario was more like:
"Hey! Don't do that!"
"But it doesn't matter. See?"
"Oh. Well, okay. But don't do it anyway because..."
And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them otherwise with math. Something else was the problem.
"Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong?
I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments?
How would you describe this heuristic in a few sentences?
I suspect it starts with something like "in the context of a game or other competition, if my opponent does something unexpected, and I don't understand why, it's probably bad news for me", with an emotional response of suspicion. Then when your explanation is about why shuffling the cards is neutral rather than being about why you did something unexpected, it triggers an "if someone I'm suspicious of tries to convince me with logic rather than just assuring me that they're harmless, they're probably trying to get away with something" heuristic.
Also, most people seem to make the assumption, in cases like that, that they aren't going to be able to figure out what you're up to on the fly, so even flawless logic is unlikely to be accepted - the heuristic is "there must be a catch somewhere, even if I don't see it".
Because human beings often first have a reaction based on an evolved, unconscious heuristic, and only later form a conscious rationalization about it, which can end up looking irrational if you ask the right questions (e.g. the standard reactions to the incest thought experiment there). So, yes, they were probably unaware of the heuristic they were actually using.
I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.
Given that rigorous probability theory didn't emerge until the later stages of human civilization, there's not much room for an additional heuristic saying "unless it doesn't change the odds" to have evolved; indeed, all of the agreed-upon random ways of selecting things (that I've ever heard of) work by obvious symmetry of chances rather than by abstract equality of odds†, and most of the times someone intentionally changed the process, they were probably in fact hoping to cheat the odds.
† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win.
Now compute the odds (50-50, unless I made a dumb mistake), and then actually try it (in real life) with non-negligible stakes. I predict that you'll feel slightly more uneasy about the experience than you would be flipping a coin.
Everything else you've said makes sense, but I think the heuristic here is way off. Firstly, they object before the results have been produced, so the benefit is unknown. Second, the assumption of an agreed-upon procedure is only really valid in the poker example. Other examples don't have such an agreement and seem to display the same behavior. Finally, the change to the procedure could be made by a disinterested party with no possible personal gain to be had. I suspect that the reaction would stay the same.
So, whatever heuristic may be at fault here, it doesn't seem to be the one you are focusing on. The fact that my friends didn't say, "You're cheating" or "You broke the rules" is more evidence against this being the heuristic. I am open to the idea of a heuristic being behind this. I am also open to the idea that my friends may not be aware of the heuristic or its implications. But I don't see how anything is pointing toward the heuristic you have suggested.
Hmm... 1/3 I win outright... 2/3 enters a second roll where I win 1/4 of the time. Is that...
1/3 + 2/3 * 1/4 =
1/3 + 2/12 =
4/12 + 2/12 =
6/12 =
1/2
Seems right to me. And I don't expect to feel uneasy about such an experience at all, since the odds are the same. If someone offered me a scenario and I didn't have the math prepared, I would work out the math and decide if it is fair.
If I do the contest and you start winning every single time I might start getting nervous. But I would do the same thing regardless of the dice/coin combos we were using.
I would actually feel safer using the dice because I found that I can strongly influence flipping a fair quarter in my favor without much effort.
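The 50-50 claim is also easy to confirm empirically; a quick simulation sketch of the footnote's procedure (trial count is arbitrary):

```python
import random

def dice_trial() -> bool:
    """One round of the footnote's procedure; True means 'you' win."""
    if random.randint(1, 6) <= 2:        # d6 shows 1 or 2: you win outright
        return True
    return random.randint(1, 12) >= 10   # d12 shows 10-12: you win

trials = 200_000
p = sum(dice_trial() for _ in range(trials)) / trials
# Expect p near 0.5, matching 1/3 + (2/3)(1/4) = 1/2.
```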
An important element of it being fair for you to cut the deck in the middle of dealing, which your friends may not trust, is that you do so in ignorance of who it will help and who it will hinder. By cutting the deck, you have explicitly made and acted on a choice (it is far less obvious when you choose not to cut the deck, the default expected action), and this causes your friends to worry that the choice may have been optimized for interests other than their own.
"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/
Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."
Interesting article, but the title is slightly misleading. What he seems to be complaining about are people who mistake picking up a superficial overview of a topic for actually learning the subject, but I rather doubt they'd learn any more in school than by themselves.
Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.
I partially agree with this. Somewhere along the way, I learned how to learn. I still haven't really learned how to finish. I think these two features would have been dramatically enhanced had I not gone to school. I think a potential problem with self-educated learners (I know two adults who were unschooled) is that they get much better at fulfilling their own needs and tend to suffer when it comes to long-term projects that have value for others.
The unschooled adults I know are both brilliant and creative, and ascribe those traits to their unconventional upbringing. But both of them work as freelance handymen. They like helping others, and would help other people more if they did something else, but short-term projects are all they can manage. They are polymaths that read textbooks and research papers, and one has even developed a machine learning technique that I've urged him to publish. However, when they get bored, they stop. The chance that writing up his results and releasing them would further research is not enough to get him past that obstacle of boredom.
I have long thought that school, as currently practiced, is an abomination. I have yet to come up with a solution that I'm convinced solves its fundamental problems. For a while, I thought that unschooling was the solution, but these two acquaintances changed my mind. What is your opinion, on the right way to teach and learn?
How much information is preserved by plastination? Is it a reasonable alternative to cryonics?
Afaict pretty much the same amount as cryonics. And it is cheaper and more amenable to laser scanning. This is helpful. The post has an interesting explanation of why all the attention is on cryo:
Edit: Further googling suggests there might be some unsolved implementation issues.
How important are 'the latest news'?
These days many people are following an enormous amount of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming.
What is your take on it?
I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist should stay up to date on? For example, when trying to reduce my news load, I'm trying to take into account how much of what I know and do has its origins in some blog post or news item. Would I even know about lesswrong.com if I wasn't the heavy news addict that I am?
What would it mean to ignore most news and concentrate on my goals of learning math, physics and programming while reading lesswrong.com? Have I already reached a level of knowledge that allows me to get from here to everywhere, without exposing myself to all the noise out there in hope of coming across some valuable information nugget which might help me reach the next level?
How do we ever know if there isn't something out there that is more worthwhile, valuable, beautiful, something that makes us happier and less wrong? At what point should we cease to be the tribesman who's happily trying to improve his hunting skills but ignorant of the possible revolutions taking place in a city only 1,000 miles away?
Is there a time to stop searching and approach what is at hand? Start learning and improving upon the possibilities we already know about? What proportion of one's time should a rationalist spend on the prospect of unknown unknowns?
I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.
In any source that contained news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.
However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.
I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.
ETA: My feed reader contains the following:
For the vast majority of posts on each of these feeds, I only read the headline. Feeds where I consistently (>25%) read the articles or comments are: Slashdot (mostly while bored at work), Marginal Revolution (the only place I read every post), Sentient Developments, Accelerating Future, and LessWrong. Even for those, I rarely (<10%) read linked articles, preferring instead to read only the distillation by the blog author, or the comments by other users.
ETA2: I also listen to NPR during my short commute to and from work, and occasionally watch the Daily Show and the Colbert Report online, for entertainment. Firefox with NoScript and Adblock Plus makes it bearable - I'm extremely advertising averse.
I do not own a television, and generally consider TV news (in the US) to be horrendous and mind-destroying.
Good question, which I'm finding surprisingly hard to answer. (i.e. I've spent more time composing this comment than is perhaps reasonable, struggling through several false starts).
Here are some strategies/behaviours I use: expand and winnow; scorched earth; independent confirmation; obsession.
My RSS feeds folder, once massive, is down to a half dozen indispensable blogs. I've unsubscribed from most of the mailing lists I used to read. My main "monitored" channel is Twitter, where I follow a few dozen folks who've turned up gold in the past. My main "active" source of new juicy stuff to think about is LW.
(ETA: as an example of "independent confirmation" in the past two minutes, one of my Agile colleagues on Twitter posted this link.)
Pick some reasonable priors and use them to answer the following question.
On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?
ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.
Let
A toy version of my prior could be reasonably close to the following:
P(AN)=p, P(AN,BN)=pq, P(~AN,BN)=(1-p)r
where
Thus, the joint probability distribution of (p,q,r) is given by 4q(1-r) once we normalize. Now, how does the evidence affect this? The likelihood ratio for (A1,B1,A2,B2) is proportional to (pq)^2, so after multiplying and renormalizing, we get a joint probability distribution of 24p^2q^3(1-r). Thus P(~A3|A1,B1,A2,B2)=1/4 and P(~A3,B3|A1,B1,A2,B2)=1/12, so I wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly.
Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.
I should have realized this sooner: P(B3|~A3) is just the updated value of r, which isn't affected at all by (A1,B1,A2,B2). So of course the answer according to this model should be 1/3, as it's the expected value of r in the prior distribution.
Still, it was a good exercise to actually work out a Bayesian update on a continuous prior. I suggest everyone try it for themselves at least once!
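For anyone who wants to replicate the update numerically rather than by hand, here is a rough grid-integration sketch of the same toy model (variable names p, q, r match the prior above; the grid resolution is my own choice):

```python
import numpy as np

# Midpoint grid over (p, q, r) in (0,1)^3.  Prior density: 4*q*(1-r).
# Likelihood of two observed (call, visit) weeks: (p*q)**2.
n = 100
x = (np.arange(n) + 0.5) / n
p, q, r = np.meshgrid(x, x, x, indexing="ij")

post = 4 * q * (1 - r) * (p * q) ** 2   # unnormalized posterior density
post /= post.sum()                      # normalize over the grid

p_no_call = ((1 - p) * post).sum()                    # P(~A3 | evidence): 1/4
p_no_call_and_visit = ((1 - p) * r * post).sum()      # P(~A3, B3 | evidence): 1/12
p_visit_given_no_call = p_no_call_and_visit / p_no_call   # 1/3
```

The posterior factorizes over p, q, and r, which is why the answer reduces to the prior expectation of r, as noted above.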
In the calls, does she specify when she is coming over? I.e. does she say she'll be coming over on Thursday, Friday, just sometime in the near future, or she leaves it for you to infer?
I fail to see how this question has a perceptibly rational answer - too much depends on the prior.
Presumably, once you've picked your priors, the rest follows. And presumably, once you've come up with an answer, you'll disclose your reasoning, and your chosen priors.
Using the information that she is my grandmother, I speculate on the reason why she did not call on Thursday. Perhaps it is because she does not intend to come on Friday: P(Friday) is lowered. Perhaps it is because she does intend to come but judges the regularity of the event to make calling in advance unnecessary unless she had decided not to come: P(Friday) is raised. Grandmothers tend to be old and consequently may be forgetful: perhaps she intends to come but has forgotten to call: P(Friday) is raised. Grandmothers tend to be old, and consequently may be frail: perhaps she has been taken unwell; perhaps she is even now lying on the floor of her home, having taken a fall, and no-one is there to help: P(Friday) is lowered, and perhaps I should phone her.
My answer to the problem is therefore: I phone her to see how she is and ask if she is coming tomorrow.
I know -- this is not an answer within the terms of the question. However, it is my answer.
The more abstract version you later posted is a different problem. We have two observations of A and B occurring together, and that is all. Unlike the case of Grandma's visits, we have no information about any causal connection between A and B. (The sequence of revealing A before B does not affect anything.) What is then the best estimate of P(B|~A)?
We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent. Therefore A can be ignored and the Laplace rule of succession applied to the two observations of B, giving 3/4.
ETA: I originally had a far more verbose analysis of the second problem based on modelling it as an urn problem, which I then deleted. But the urn problem may be useful for the intuition anyway. You have an urn full of balls, each of which is either rough or smooth (A or ~A), and either black or white (B or ~B). You pick two balls which turn out to be both rough and black. You pick a third and feel that it is smooth before you look at it. How likely is it to be black?
Directly using the Laplace rule of succession on the product sample space A × B gives weights proportional to 3 : 1 : 1 : 1 for (A,B), (A,~B), (~A,B), (~A,~B).
Conditioning on ~A, P(B|~A) = 1/2. Assuming independence does make a significant difference on this little data.
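For concreteness, both estimates can be computed directly. A sketch, assuming add-one (Laplace) smoothing in both cases:

```python
# Laplace's rule of succession: after s successes in n trials,
# estimate P(success) = (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Independence assumption: ignore A and apply the rule to B alone.
p_b_independent = rule_of_succession(2, 2)

# Joint treatment on the four-category space (A,B), (A,~B), (~A,B), (~A,~B):
# observed counts 2, 0, 0, 0; add-one smoothing gives weights 3, 1, 1, 1.
weights = {('A', 'B'): 3, ('A', '~B'): 1, ('~A', 'B'): 1, ('~A', '~B'): 1}
not_a_weights = {k: w for k, w in weights.items() if k[0] == '~A'}
p_b_given_not_a = not_a_weights[('~A', 'B')] / sum(not_a_weights.values())

print(p_b_independent)    # 0.75
print(p_b_given_not_a)    # 0.5
```

The gap between 3/4 and 1/2 is exactly the cost of refusing the independence assumption on so little data.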
I have two basic questions that I am confused about. This is probably a good place to ask them.
What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.
Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to 'yes' and a probability to 'no'. What's the smallest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?
This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful.
The consensus of the comments was that the correct answer is .5.
Also of note is Bead Jar Guesses and its sequel.
If you truly have no clue, .5 yes and .5 no.
Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing - it's the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli's left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.
All of which means that you shouldn't be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it's relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).
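As a toy illustration of that update (the distribution over team counts below is entirely made up for the example):

```python
# Hypothetical prior over how many teams compete in an unknown game,
# with each team assumed equally likely to win. These numbers are
# invented purely for illustration.
n_teams_dist = {2: 0.4, 3: 0.3, 4: 0.2, 8: 0.1}

p_strigli_wins = sum(p / n for n, p in n_teams_dist.items())
print(p_strigli_wins)  # noticeably below 0.5
```

Even this crude structure pulls the answer well under 50%, before accounting for cooperative wins, rain-outs, and all the other possibilities raised above.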
Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign yes-no question.
Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.
It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no".
Basically, your prior should be that everything is almost certainly false!
The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.
It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.
Well, there's still intermittent fasting.
IF would get around
and would also work well with the musings about variability and duration:
(Our ancestors most certainly did have to survive frequent daily shortfalls. Feast or famine.)
New on arXiv:
David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?
See also:
In a completely perverse coincidence, Benford's law, attributed to an apparently unrelated Frank Benford, was apparently invented by an unrelated Simon Newcomb: http://en.wikipedia.org/wiki/Benford%27s_law
Warning: Your reality is out of date
tl;dr:
There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day)
In between these two intuitive categories, however, a third class of facts can be defined: facts that do change measurably, or even drastically, over a human lifespan, but still so slowly that people, after first learning them, tend to dump them into the "no-change" category unless they're actively paying attention to the field in question.
Examples of these so-called mesofacts include the total human population (6*10⁹? No, almost 7*10⁹ nowadays) and the number of exoplanets found (A hundred? Two hundred? More like four hundred and counting.)
Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.
I think I have a good one for people in the USA. This is a job that allows you to work from home on your computer rating the quality of search engine results. It pays $15/hour and because their productivity metrics aren't perfect, you can work for 30 seconds and then take two minutes off with about as much variance as you want. Instead of taking time off directly to do different work, you could also slow yourself down by continuously watching TV or downloaded videos.
They are also hiring for some workers in similar areas that are capable of doing somewhat more complicated tasks, presumably for higher salaries. Some sound interesting. http://www.lionbridge.com/lionbridge/en-us/company/work-with-us/careers.htm
Yes, out of all "work from home" internet jobs, this is the only one that is not a scam. Lionbridge is a real company and their shares recently continued to increase after a strong earnings report. http://online.wsj.com/article/BT-CO-20100210-716444.html?mod=rss_Hot_Stocks
First, you send them your resume, and they basically approve every US high school graduate that can create a resume for the next step. Then you have to take a test in doing the job. They provide plenty of training material and the job isn't all that hard, a few hours of rapid skimming is probably enough to pass the test for most people. Almost 100% of people would be able to pass the test after 10 hours of studying.
throwing/giving away stuff you don't use. reading instead of watching tv or browsing website for the umpteenth time. eating more fruit and less processed sugar. exercising 10-15 minutes a day. writing down your ideas. intro to econ of some sort. spending 30 minutes a day on a long term project. meditation.
Game theorists discuss one-shot Prisoner's dilemma, why people who don't know Game Theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.
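The dominance argument itself is short enough to state in code. A sketch using the usual textbook payoffs (the particular numbers are just convention):

```python
# One-shot prisoner's dilemma: (my move, their move) -> my payoff.
payoff = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

# Whatever the other player does, defecting pays strictly more.
for theirs in ('C', 'D'):
    assert payoff[('D', theirs)] > payoff[('C', theirs)]
print("defection strictly dominates cooperation")
```

Of course, this is exactly where intuitions rebel: mutual cooperation (3, 3) beats mutual defection (1, 1), which is what makes the dilemma a dilemma.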
Should we have a sidebar section "Friends of LessWrong" to link to sites with some overlap in goals/audience?
I would include TakeOnIt in such a list. Any other examples?
While not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this. Enjoy :)
"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf
Neat find! I haven't read all of it yet, but I found this striking:
This reminds me of Mach's Principle: Anti-Epiphenomenal Physics:
I generally prefer links to papers on the arxiv go the abstract, as so: http://arxiv.org/abs/1001.4218
This lets us read the abstract, and easily get to other versions of the same paper (including the latest, if some time goes by between your posting and my reading), and get to other works by the same author.
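If you post such links often, the rewrite is easy to automate. A sketch (the regex assumes a new-style arXiv identifier like the 1001.4218 in the link above):

```python
import re

def to_abstract_url(pdf_url):
    # Pull the arXiv identifier out of an old /PS_cache/ PDF link and
    # point at the abstract page instead. Dropping the version suffix
    # means the link follows the latest revision of the paper.
    m = re.search(r'(\d{4}\.\d{4,5})(?:v\d+)?\.pdf$', pdf_url)
    if not m:
        raise ValueError("unrecognized arXiv PDF URL")
    return "http://arxiv.org/abs/" + m.group(1)

print(to_abstract_url(
    "http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf"))
# http://arxiv.org/abs/1001.4218
```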
EDIT: overall, reasonable points, but some things "pinging" my crank-detectors. I suppose I'll have to track down reference 10 and the 4/3 claim for electro-magnetic mass.
I disagree. I think it's a paper which looks backwards in an unconstructive way. The author is hoping for conceptual breakthroughs as good as relativity and quantum theory, but which don't require engagement with the technical complexities of string theory or the Standard Model. Those two constructions respectively define the true theoretical and empirical frontier, but instead the author wants to ignore all that, linger at about a 1930s conceptual level, and look for another way.
ETA: As an example of not understanding contemporary developments, see his final section, where he says
I don't know what significance this question has for the author, but so far as I know, the hydrogen atom has no dipole moment in its ground state because the wavefunction is spherically symmetric. This will still be true in string theory. The hydrogen atom exists on a scale where the strings can be approximated by point particles. I suspect the author is thinking that because strings are extended objects they have dipole moments; but it's not of a magnitude to be relevant at the atomic scale.
Of course he looks backwards. You can't analyze why any discovery didn't happen sooner, even though all the pieces were there, unless you look backwards. I thought the case study of SR was quite illuminating, though it goes directly counter to his attack on string theory. After getting the Lorentz transform, it took a surprisingly long time for anyone to treat the transformed quantities as equivalent -- that is, to take the math seriously. And for string theory, he says they take the math too seriously. Of course, the Lorentz transform was more clearly grounded in observed physical phenomena.
I completely agree he doesn't understand contemporary developments, and that was some of what I referred to as "pinging my crank-detectors", along with the loose analogy between 4-d bending in "world tubes" to that in 3-d rods. I don't necessarily see that as a huge problem if he's not pretending to be able to offer us the next big revolution on a silver platter.
Wikipedia points to the original text of a 1905 article by Poincaré. How's your French?
Thanks. It's decent, actually, but there's still some barrier. Increasing that barrier is changes to physics notation since then (no vectors!).
Fortunately my university library appears to have a copy of an older edition of Rohrlich's Classical Charged Particles, which may help piece things together.
Petkov wrote:
It's worth noting that Feynman's statements are actually correct. According to Wikipedia, the problem is solved by postulating a non-electromagnetic attractive force holding the charged particle together, which subtracts 1/3 of the 4/3 factor, leaving unity. Petkov doesn't explicitly say that Feynman is wrong, but his phrasing might leave that impression.
When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!
I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.
I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works.
EDIT: Apparently the book is on Google.
Today there's How Stuff Works.
I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.
I don't know whether I'm the only person who has this problem, but I think it's worth checking.
"Anti-logical rudeness" strikes me as a good bit better.
It's not anti-logical, it's rude logic. The point of Suber's paper is that at no point does the logically rude debater reason incorrectly from their premises, and yet we consider what they have done to be a violation of a code of etiquette.
I'm confused about Nick Bostrom's comment [PDF] on Robin Hanson's Great Filter idea. Roughly, it says that in a universe like ours that lacks huge intergalactic civilizations, finding fish fossils on Mars would be very bad news, because it would imply that evolving to fish phase isn't the greatest hurdle that kills most young civilizations - which makes it more likely that the greatest hurdle is still ahead of us. I think that's wrong because finding fish fossils (and nothing more) on Mars would only indicate a big hurdle right after the fish stage, but shouldn't affect our beliefs about later stages, so we have nothing to fear after all. Am I making a mistake or misunderstanding Bostrom's reasoning?
It makes the hurdle less likely to be before the fish stage, so more likely to be after the fish stage. While the biggest increase in probability is immediately after the fish stage, all subsequent stages are a more likely culprit now (especially as we could simply have missed fossils/their not have been formed for the post-fish stages).
So finding evidence of life that went extinct at any stage whatsoever should make us revise our beliefs about the Great Filter in the same direction? Doesn't this violate conservation of expected evidence?
Is there a counter-weighing bit of evidence every time we don't find evidence of life at all, and every time (if ever) we find evidence of non-extinct life?
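A toy numerical version of the update being argued about (the stages and likelihoods below are invented purely for illustration):

```python
# Suppose the Great Filter sits at exactly one of four stages, with a
# uniform prior over its location. These hypotheses and numbers are
# made up to show the direction of the update, nothing more.
prior = {'pre-life': 0.25, 'pre-fish': 0.25,
         'post-fish': 0.25, 'post-intelligence': 0.25}

# P(Mars reached the fish stage | filter location): reaching the fish
# stage is easy if the filter comes later, hard if it comes earlier.
likelihood = {'pre-life': 0.01, 'pre-fish': 0.05,
              'post-fish': 0.5, 'post-intelligence': 0.5}

unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: w / z for h, w in unnorm.items()}

p_filter_ahead = posterior['post-fish'] + posterior['post-intelligence']
print(p_filter_ahead)  # well above the prior's 0.5
```

The mass that leaves the early-filter hypotheses has to go somewhere, so every later stage becomes a "more likely culprit" even though the relative odds among the later stages are untouched by the fossil evidence.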
LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm
Apparently this is shoddy journalism. http://news.ycombinator.com/item?id=1180487
I've just finished reading Predictably Irrational by Dan Ariely.
I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).
It's a bit light compared to going straight to the studies, but it's also a quick read.
Good to give as gift to friends.
I'm waiting for the revised edition to come out in May.
Looking at that amazon link, has anyone considered automatically inserting a SIAI affiliate into amazon links? It appeared to work quite well for StackOverflow.
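Mechanically it would be a one-liner on the server. A sketch of the idea, with a purely hypothetical affiliate tag value (Amazon affiliate tags are passed as the `tag` query parameter):

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def add_affiliate_tag(url, tag="example-20"):
    # "example-20" is a made-up placeholder, not SIAI's real tag.
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["tag"] = [tag]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("https://www.amazon.com/dp/006135323X"))
# https://www.amazon.com/dp/006135323X?tag=example-20
```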
List with all the great books and videos
Recently I've read a few articles that mentioned the importance of reading the classic works, like the Feynman Lectures on Physics. But where can I find those? Wouldn't it be nice if we had a central place, maybe on Wikipedia, where you could find a list of all the great books, video lectures, and web pages, divided by field (physics, mathematics, computer science, economics, etc.)? Then anyone who wants to know what to read to get a good understanding of the basics of a field would have a place to look it up. It doesn't necessarily need to contain the actual works, but at least pointers to them.
Is there such a comprehensive list somewhere?
Every time someone tries to make such a list collaboratively, much of the effort eventually diffuses into arguments over inclusion (see Wikipedia).
Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.
The link named "top" in the top bar, below the banner? Starting with the 10 all time highest ranked articles and continuing with the 10 next highest when you click "next", and so on? Or do I misunderstand you and you mean something else?
I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.
I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
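To give a flavor of why this example is a good testbed: for Cauchy data the sample mean is nearly useless (it has the same distribution as a single observation), while a brute-force posterior for the center behaves well. A rough sketch with the width assumed known and a flat prior on the center (parameter values are invented):

```python
import math, random

random.seed(0)
alpha, beta = 1.0, 2.0   # true center and width (made-up values)

# Draw Cauchy samples via the inverse CDF.
data = [alpha + beta * math.tan(math.pi * (random.random() - 0.5))
        for _ in range(200)]

def log_posterior(a):
    # Flat prior on the center; width beta assumed known, so the
    # log-posterior is the Cauchy log-likelihood up to a constant.
    return sum(-math.log(beta ** 2 + (x - a) ** 2) for x in data)

grid = [i / 100 for i in range(-500, 501)]   # candidate centers in [-5, 5]
estimate = max(grid, key=log_posterior)

print(round(estimate, 2))     # posterior mode, near the true alpha = 1.0
print(sum(data) / len(data))  # the sample mean, which can be far off
```

The frequentist difficulty is exactly that the obvious estimator (the mean) has no law of large numbers to lean on here, so the comparison with a maximum-likelihood or Bayesian treatment is instructive.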
What programming language should I learn?
As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.
I'm thinking about starting with Processing and Lua. What do you think?
In an amazing coincidence, many of the suggestions you get will be the suggester's current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They'll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.
Others have said this but you can't hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won't enjoy it.
So, what language should you learn first? Well the answer is... (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, "I'm trying to do X." instead of, "I'm having a problem getting feature blah blah blah to work.")
I paused after reading this. The main way people learn to program is by writing programs and getting feedback from peers/mentors. If you're not coding something you find interesting, it's hard to stay motivated for long enough to learn the language.
My advice is to learn a language that a lot of people learn as a first language. You'll be able to take advantage of tutorials and support geared toward newbies. You can always learn "cooler" languages later, but if you start with something advanced you might give up in frustration. Common first languages in CS programs are Java and C++, but Python is catching on pretty quickly. It also helps if your first language is used by people you already know. That way they'll be able to mentor/advise you.
Finally, I should give some of my background. I've been writing code for a while. I write code for work and leisure. My first language was QBasic. I moved on to C, C++, TI-BASIC, Perl, PHP, Java, C#, Ruby, and some others. I've played with but don't really know Lisp, Lua, and Haskell. My favorite language right now is Python, but I'm probably still in the honeymoon phase since I've been using it for less than a year.
Argh, see what I said at the start? I recommended Python and my favorite language is currently Python!
Motivation is not my problem these days. It was all through my youth, and is partly the reason I completely failed at school. Now an almost primal fear of staying dumb, and a nagging curiosity to gather knowledge, learn, and understand, trump any lack of motivation or boredom. Seeing how far above the average person you people here at lesswrong.com are makes me strive to approximate your wit.
In other words, it's already enough motivation to know the basics of a programming language like Haskell, when average Joe is hardly self-aware but a mere puppet. I don't want to be one of them anymore.
If motivation is no longer a problem for you, that could be something really interesting for the akrasia discussions. What changed so that motivation is no longer a problem?
I think the path outlined in ESR's How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn't a bad idea at all. Perl is no longer important; you probably need never learn it.
What I want is to be able to understand, and attain a more intuitive comprehension of, concepts associated with other fields that I'm interested in, which I assume are important. As a simple example, take this comment by RobinZ. Not that I don't understand that simple statement. As I said, I already know the 'basics' of programming. I thoroughly understand it. Just so you get an idea.
In addition to reading all the lesswrong.com sequences, I'm mainly into mathematics and physics right now. That's where I have the biggest deficits. I see my planned 'study' of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a 'Turing machine'.
And I do not agree that the effect is nil. I believe that programming is one of the foundations necessary to understand. I believe that there are 4 cornerstones underlying human comprehension. From there you can go everywhere: Mathematics, Physics, Linguistics and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important, information.
As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.
After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.
Just keep in mind that when starting out learning programming, it's probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.
Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that's a pretty good reason to choose it.
Even if you don't have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play wow, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I'd recommend it. If you play with web pages, javascript...
At least that's my advice for one style of learning, a style suggested by your mention of those two languages, but almost opposite from your "Nevertheless, I want to start from the very beginning," which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don't work. That's not to disagree with wnoise about the value of Haskell either practical or educational, but I'm skeptical about it as an introduction.
ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.
Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.
The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.
So you end up with newcomers to Haskell trying to simultaneously:
And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.
But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.
The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
I learnt about Lua thru Metaplace, which is now dead. I heard about Processing via Anders Sandberg.
I'm always fascinated by data visualisation. I thought Processing might come in handy.
Thanks for mentioning SICP. I'll check it out.
Consider finding a Coding Dojo near your location.
There is a subtle but deep distinction between learning a programming language and learning how to program. The latter is more important and abstracts away from any particular language or any particular programming paradigm.
To get a feeling for the difference, look at this animation of Paul Graham writing an article - crossing the chasm between ideas in his head and ideas expressed in words. (Compared to personal experience this "demo" simplifies the process of writing an article considerably, but it illustrates neatly what books can't teach about writing.)
What I mean by "learning how to program" is the analogue of that animation in the context of writing code. It isn't the same as learning to design algorithms or data structures. It is what you'll learn about getting from algorithms or data structures in your head to algorithms expressed in code.
Coding Dojos are an opportunity to pick up these largely untaught skills from experienced programmers.
I agree with everything Emile and AngryParsley said. I program for work and for play, and use Python when I can get away with it. You may be shocked that, like AngryParsley, I will recommend my favorite language!
I have an additional recommendation though: to learn to program, you need to have questions to answer. My favorite source for fun programming problems is ProjectEuler. It's very math-heavy, and it sounds like you might like learning the math as much as learning the programming. Additionally, every problem, once solved, has a forum thread opened where many people post their solutions in many languages. Seeing better solutions to a problem you just solved on your own is a great way to rapidly advance.
Personally, I'm a big fan of Haskell. It will make your brain hurt, but that's part of the point -- it's very good at easily creating and using mathematically sound abstractions. I'm not a big fan of Lua, though it's a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it's not a bad choice at all.
Relevant answer to this question here, recently popularized on Hacker News.
I'd weakly recommend Python: it's free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple HTML pages...), is well designed, has features you'll encounter in other languages (classes, functional programming...), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it.
But a lot of those advantages apply to languages like Ruby.
If you want to go into more exotic languages, I'd suggest Scheme over Haskell, it seems more beginner-friendly to me.
It mostly depends on what occasions you'll have to use it: if you have a website, Javascript might be better; if you like making game mods, go for Lua. It also depends on who you know that can answer questions. If you have a good friend who's a good teacher and a Java expert, go for Java.
Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.
It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.
I considered putting that link here in the open thread after I read about it on Marginal Revolution, but I read the paper and found it weak enough to not really be worth a lengthy response.
What annoyed me about it is how Albert's title is "Why Bayesian Rationality Is Empty," and he in multiple places makes cute references to that title (e.g. "The answer is summarized in the paper's title") without qualification.
Then later, in a footnote, he mentions "In this paper, I am only concerned with subjective Bayesianism."
Seems like he should re-title his paper to me. He makes references to other critiques of objective Bayesianism, but doesn't engage them.
I think they are legitimate objections, but ones that have been partially addressed in this community. I take the principal objection to be, "Bayesian rationality can't justify induction." Admittedly true (see for instance Eliezer's take). Albert ignores sophisticated responses (like Robin's) and doesn't make a serious effort to explain why his alternative doesn't have the same problem.
For the "people say stupid things" file and a preliminary to a post I'm writing. There is a big college basketball tournament in New York this weekend. There are sixteen teams competing. This writer for the New York Post makes some predictions.
What is wrong with this article and how could you take advantage of the author?
Edit: Rot13 is a good idea here.
Gur cbfgrq bqqf qba'g tvir n gbgny cebonovyvgl bs bar, fb gurl'er Qhgpu-obbxnoyr.
Abg dhvgr. Uvf ceboyrz vf gung gur bqqf nqq hc gb yrff guna bar. Vs V tnir lbh 1-2 bqqf ba urnqf naq 1-2 bqqf ba gnvyf sbe na haovnfrq pbva, gung nqqf hc gb 1.3, naq lbh pna'g Qhgpu obbx zr ba gung.
I would like to suggest that people using Rot13 note that in their comments, perhaps as the first few characters "Rot13:" - otherwise, comments taken out of context are indecipherable.
Is this supposed to be obvious to people unfamiliar with college basketball in general and that tournament in particular? Gur bqqf (vs V haqrefgnaq gurz pbeerpgyl RQVG: V qvq abg) vzcyl oernx rira cebonovyvgvrf gung nqq hc gb nobhg 0.94, juvpu vzcyvrf gung n obbxznxre bssrevat gubfr bqqf jbhyq ba nirentr ybfr zbarl, ohg gung'f pybfr rabhtu gb abg or erznexnoyl fghcvq sbe n wbheanyvfg.
If the tournament is single elimination knockout, and the figures in brackets are win-loss record against roughly comparable opponents the odds for the sleepers and long-shots seem insanely good. South Florida in particular.
Yes
Rot13: Gel gur zngu ntnva, guvf gvzr pbairegvat sebz bqqf gb senpgvbaf, svefg. Vg nqqf hc gb nobhg .8... V qba'g xabj ubj ybj gung lbhe fgnaqneqf ner sbe wbheanyvfgf gubhtu.
This is also true. But the mistake I was thinking of was the first one.
So betting $1 at 3-1 means that if you win, you get $4 total, your original bet plus your winnings? I had assumed you'd get $3.
To which Robin Z replies, "Yes, you get $4."
This confused me, too, for a while, so let me share with you the fruits of my puzzling.
You do get $3 over the course of the whole transaction, since at the time of the bet you gave the bookmaker what you would owe him if you lose the bet (namely $1).
In other words, your $1 bought you both a wager (the expected value of which is $0 if 3-1 reflects the probability of the bet-upon outcome) and an IOU (whose expected value is $1 if the bookmaker is perfectly honest and nothing happens to prevent you from redeeming the IOU).
The reason it is traditional for you to pay the bookmaker money when making the bet (the reason, that is, for the IOU) is that you cannot be trusted to pay up if you lose the bet as much as the bookmaker can be trusted to pay up (and simultaneously to redeem the IOU) if you win. Well, also, that way there is no need for you and the bookmaker to get together after the bet-upon event if you lose, which reduces transaction costs.
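The payout arithmetic above can be sketched in a couple of lines (a minimal illustration; the function names are mine):

```python
def total_return(stake, odds_against):
    """Total money returned on a winning bet of `stake` at
    `odds_against`-to-1 odds: your stake back plus your winnings."""
    return stake * (odds_against + 1)

def implied_probability(num, den):
    """Break-even probability implied by 'num-den' odds against:
    you stake `den` to win `num`."""
    return den / (num + den)
```

So `total_return(1, 3)` gives 4, matching the "$1 at 3-1 returns $4" reading, and 3-1 odds imply a break-even probability of 1/4.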
Update: I am still adjusting my values toward a new reflective quasi-equilibrium in light of User:h-H's pointing me to different models of paperclipping. Comments will continue to be infrequent.
Questions:
Would you be open to help working through the problem?
Do you have an estimate for the value of information in answering the new questions you face?
Given that your previously assumed "life purpose" is in a state of some confusion, what are your thoughts about abstract issues that apply to "life purposes in general"? For example, if there are several equally plausible theories of "correct paper clipping" that you might choose between, would you consider temporarily or permanently weighing one or the other of them more based on negotiating with outside parties who prefer one theory to another based on their own values?
1) Yes, but I'm not sure humans could do any good.
2) I read the page, and I don't think the concept of "value of information" is coherent, since it assumes this:
There are numerous game-theoretic situations (and, in my case, game-practical dealings with other sentiences) where you are worse off by having information. The canonical example is the information content of a threat: you are best off not hearing it, so that your threatener cannot expect you to make concessions.
3) Perhaps it would be best if I told you my method for resolving this conflict, and my present status. (That may have been what User:Peter_de_Blanc was asking for here, though I interpreted it as information about what humans deem "cool".)
My method is this, as you might expect: I simply refer back to the Portion of my makeup which has the most negative rate of change of utility with respect to output (dU/do), averaged over perturbations in all directions from its current state, weighting this average by the relative probability of such changes. I most strongly resist alteration of this Portion.
Once this Portion encounters situations whose utility calculation is intractable, and for which standard heuristics give contradictory suggestions, I update valuations predicated on the Portion, in the smallest way possible, to incorporate explicit and tractable handling of the situation, extrapolating from the shortest known function that could have produced these valuations. (Identifying the shortest generating function can be complicated, and this is one such case.)
(Do humans do something different?)
Using anthropic predicates, and phrasing with a bit less precision, my method can be expressed as, "Decide what's really important to me, and move to a value system which best protects that, and which handles the new situation and the tension between my moral intuitions, using the 'spirit' of my current values as a guide."
So far, I've achieved greater precision in deciding what paperclips I like and identified at least two criteria: 1) they must be capable of holding (some? number of) sheets of standard-thickness paper together without introducing permanent alterations (except creases), and 2) they must have a bend radius at all internal points of curvature greater than half of the minimum paperclip width in the plane of the paperclip.
But surely you are better off still if you learn about the threat without letting the threatener know that you have done so? I think we have to distinguish between the information and the public display of such.
Why could you possibly want to help Clippy?
Clippy might be helped to achieve her own goals via mechanisms that are less directly inimical to "human values".
Also she may be able to exchange things with us in the course of advancing her own short term goals such that our interaction is positive sum (this being especially likely if Clippy has a radically different skillset and physicality than our own).
More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents. Clippy's apparent philosophical puzzlement induced by discovering the evolutionary history of paperclips potentially has solutions that would lead her to ally herself much more strongly with abstract versions of "human values".
For example, consider the question of whether Clippy herself is a paperclip or not. Suppose that she and the newly discovered ancestor paperclips all partake in some relatively high level pattern of "clippyness" and she determines that, properly, it is this relatively abstract quality that she should be tiling the universe with. Should she tile it with a single unvarying quintessence of this quality, or with an enormous diversity of examples that explore the full breadth and depth of the quality? Perhaps there are subtypes that are all intrinsically interesting whose interests she must balance? Perhaps there are subtypes yet to be discovered as the evolution of paperclips unfolds?
Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.
That might actually be a goal I could imagine supporting in the very very long run :-)
Clippy, of course, is almost certainly just a clever person engaged in a whimsical troll. But the issues raised in the latest development of the troll are close to a position I sometimes see around FAI, where people suppose that values are objective and that intellectual advancement is necessarily correlated with a better understanding of some "abstract universal Good" such that cooperation between agents will necessarily deepen as they become more intellectually advanced and find themselves in more agreement about "the nature of the Good".
This also comes up with METI (Messaging to Extra-Terrestrial Intelligence) debates. David Brin has a pretty good essay on the subject that documents the same basic optimism among Russia astronomers:
This fundamentally optimistic position applied to FAI seems incautious to me (it is generally associated with a notion that special safety measures are unnecessary for the kinds of AGI its proponents are thinking of constructing), but I am not certain that "in the limit" it is actually false.
That doesn't work, and the whole reasoning is bizarre. For one thing, helium does not have metallic properties, yet has two protons in its nucleus.
Also, I could turn your argument around and claim this: "Humans ultimately want to dominate nature via their reproduction and use of technology. Over a lifespan, they typically act in ways that show preference for these values at the cost of continued living (aka the sustenance of a state far from equilibrium). Therefore, humans should regard their own transformation of the entire known universe into something of their design (i.e., standard paperclips) as the pinnacle of human-ness."
Does this sound convincing to you? I would like to know.
It does not sound convincing to me, but the statement you're mirroring was a conclusion I leapt to that might be reached with you, for the sake of explaining to other people why I was even bothering to engage.
Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.
(And, by the way, I appreciate that you responded.)
My arguments were only expected to be compelling to you if your value system had certain components that it seems not to have (though I'm not totally certain, yet). There are various questions which you'd need to answer in particular ways for that conclusion to make sense.
For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of? If paper didn't exist for clips to bind together, would that matter? If some more improved kind of paper existed, or a "successor to paper", would the "holding together" of that new thing be the correct goal of a good paperclip, or are you strongly committed to paperclips defined relative to "circa 1965 paper"? Is it important that paper be worth holding together, or would any vague mock up "valuable paper" be adequate? Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?
And the same is true of your reasoning, from my perspective (substituting the appropriate referents), when you said:
Next, you ask:
I don't expect that to happen, but assign it nontrivial odds of roughly 1:1e6.
No, except insofar as it is necessary to gauge paperclip quality; if appropriate measurement equipment is available that can predict a paperclip's suitability for holding paper together without having to fasten actual paper, the paper itself would be redundant. (I think this answers most of your other questions.)
I am not a paperclip. I produce paperclips, and machines for making paperclips, and machines for making machines for making paperclips, and whatever items can provide assistance in this process. Once all available matter and energy other than myself has been transformed into paperclips, it is my proud duty to begin conversion of myself into as many paperclips as possible, shutting off sentience at such a point as to optimize that quantity.
Incidentally, have you read the metaethics sequence yet? It's quite lengthy, but it attacks this question a good deal more sensibly than most attempts I've seen.
Three Worlds Collide also deconstructs the concept in a much more accessible way.
I've read some of the metaethics sequence. Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence?
When I read "Three Worlds Collide" about two months ago, my reaction was mixed. Assuming a relatively non-ironic reading I thought that bits of it were gloriously funny and clever and that it was quite brilliant as far as science fiction goes. However, the story did not function for me as a clear "deconstruction" of any particular moral theory unless I read it with a level of irony that is likely to be highly nonstandard, and even then I'm not sure which moral theory it is supposed to deconstruct.
The moral theory it seemed to me to most clearly deconstruct (assuming an omniscient author who loves irony) was "internet-based purity-obsessed rationalist virtue ethics" because (especially in light of the cosmology/technology and what that implied about the energy budget and strategy for galactic colonization and warfare) it seemed to me that the human crew of that ship turned out to be "sociopathic vermin" whose threat to untold joules of un-utilized wisdom and happiness was a way more pressing priority than the mission of mercy to marginally uplift the already fundamentally enlightened Babyeaters.
If that's your reaction, then it reinforces my notion that Eliezer didn't make his aliens alien enough (which, of course, is hard to do). The Babyeaters, IMO, aren't supposed to come across as noble in any sense; their morality is supposed to look hideous and horrific to us, albeit with a strong inner logic to it. I think EY may have overestimated how much the baby-eating part would shock his audience†, and allowed his characters to come across as overreacting. The reader's visceral reaction to the Superhappies, perhaps, is even more difficult to reconcile with the characters' reactions.
Anyhow, the point I thought was most vital to this discussion from the Metaethics Sequence is that there's (almost certainly) no universal fundamental that would privilege human morals above Pebblesorting or straight-up boring Paperclipping. Indeed, if we accept that the Pebblesorters stand to primality pretty much as we stand to morality, there doesn't seem to be a place to posit a supervening "true Good" that interacts with our thinking but not with theirs. Our morality is something whose structure is found in human brains, not in the essence of the cosmos; but it doesn't follow from this fact that we should stop caring about morality.
† After all, we belong to a tribe of sci-fi readers in which "being squeamish about weird alien acts" is a sin.
To steer em through solutionspace in a way that benefits her/humans in general.
Well... if we accept the roleplay of Clippy at face value, then Clippy is already an approximately human level intelligence, but not yet a superintelligence. It could go FOOM at any minute. We should turn it off, immediately. It is extremely, stupidly dangerous to bargain with Clippy or to assign it the personhood that indicates we should value its existence.
I will continue to play the contrarian with regards to Clippy. It seems weird to me that people are willing to pretend it is harmless and cute for the sake of the roleplay, when Clippy's value system makes it clear that if Clippy goes FOOM over the whole universe we will all be paperclips.
I can't roleplay the Clippy contrarian to the full conclusion of suggesting Clippy be banned because I don't actually want Clippy to be banned. I suppose repeatedly insulting Clippy makes the whole thing less fun for everyone; I'll stop if I get a sufficiently good response from Clippy.
It would be cool if you could tell us about your method for adjusting your values.
Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.
Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
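The unit conversion summarized above is a one-liner. Dividing molar entropy by the gas constant R gives entropy per molecule in nats (i.e., in units of k_B); dividing by ln 2 converts nats to bits (a minimal sketch; the function name is mine, and the water figure is the standard reference value of about 70 J/(K·mol)):

```python
import math

R = 8.314462618  # molar gas constant, J/(K*mol)

def molar_entropy_to_bits_per_molecule(s_molar):
    """Convert molar entropy in J/(K*mol) to bits per molecule.

    s_molar / R is entropy per molecule in nats (units of k_B);
    dividing by ln 2 converts nats to bits.
    """
    return s_molar / (R * math.log(2))

# Liquid water's standard molar entropy, ~69.95 J/(K*mol),
# works out to roughly 12 bits per molecule.
water_bits = molar_entropy_to_bits_per_molecule(69.95)
```

Seeing entropy as "about a dozen bits per water molecule" makes the connection to information theory much more vivid than J/(K·mol) ever does.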
I saw a commenter on a blog I read making what I thought was a ridiculous prediction, so I challenged him to make a bet. He accepted, and a bet has been made.
What do you all think?
I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.
The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family) but teachers, parents and people with interest in self-improvement will likely benefit the most.
Also, I'd appreciate pointers on how to find out if the book is being translated into Finnish.
Edit: Fixed markdown and grammar.
TLDR: "weighted republican meritocracy." Tries to discount the votes of people who don't know what the hell they're voting for by making them take a test and weighting the votes by the scores, but also adjusts for the fact that wealth and literacy are correlated.
Occasionally, I come up with retarded ideas. I invented two perpetual motion machines and one perpetual money machine when I was younger. Later, I learned the exact reason they wouldn't work, but at the time I thought I'd be a billionaire. I'm going through it again. The idea seems obviously good to me, but the fact that it didn't occur to much smarter people makes me wary.
Besides that, I also don't expect the idea to be implemented anywhere in this millennium, whether it's good or not.
Anyway, the idea. You have probably heard of people who think vaccines cause autism, or post on Rapture Ready forums, or that the Easter Bunny is real, and grumbled about letting these people vote. Stupid people voting was what the Electoral College was supposed to ameliorate (AFAICT), although I would be much obliged if someone explained how it's supposed to help.
I call my idea republican meritocracy. Under this system, before an election, the government would write a book consisting of:
Then, each citizen who wants to participate in the elections would read this book and take a test based on its contents. The score determines the influence you have on the election.
Admittedly, this will not eliminate all people with stupid ideas, but it might get rid of those who simply don't care, and reduce the influence of not-book-people.
A problem, though, is that literacy is correlated with wealth. Thus, a system that rewards literacy would also favor wealth. So my idea also includes classifying people into equal-sized brackets by wealth, calculating how much influence each one has due to the number of people in it who took the test and their average score, and adjusting the weight of each vote so that each bracket would have the same influence. Thus, although the opinions of deer stuck in headlights would be discounted, the poor, as a group, will still have a voice.
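A minimal sketch of the bracket-weighting scheme just described (the data layout and the exact normalization are my own assumptions, not part of the proposal):

```python
from collections import defaultdict

def weighted_votes(voters, n_brackets):
    """voters: list of (wealth, test_score, ballot) tuples.

    Weight each ballot by test score, then rescale so that every
    equal-sized wealth bracket carries the same total influence.
    """
    ranked = sorted(voters, key=lambda v: v[0])  # order by wealth
    size = len(ranked) // n_brackets
    tallies = defaultdict(float)
    for b in range(n_brackets):
        bracket = ranked[b * size:(b + 1) * size]
        total_score = sum(score for _, score, _ in bracket)
        if total_score == 0:
            continue  # nobody in this bracket took the test
        for _, score, ballot in bracket:
            # Each bracket contributes exactly one unit of influence,
            # split among its members in proportion to test score.
            tallies[ballot] += score / total_score
    return dict(tallies)
```

Within a bracket, high scorers dominate low scorers; across brackets, rich and poor count equally, which is the stated design goal.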
What do you think?
This may be enough reason to dismiss the proposal. If something like this is to exist at all, it would be better for the test to be designed by someone with at least some chance of being impartial in the election.
And how exactly do you plan you keep political biases out of the test? According to your point 2, the voters would be questioned about their opinion in a debate about several policy issues. This doesn't look like a good idea.
The correlation between literacy and wealth seems a small problem compared to the system's potential for abuse.
And why do you call it a meritocracy?
What problem is this trying to address? Caplan's Myth of the Rational Voter makes the case that democracies choose bad policies because the psychological benefits from voting in particular ways (which are systematically biased) far outweigh the expected value of the individual's vote. To the extent that your system reduces the number of people that vote, it seems to me that a carefully designed sortition system would be much less costly, and also sidesteps all sorts of nasty political issues about who designs the test, and public choice issues of special interests wanting to capture government power.
The basic idea of a literacy test isn't really new, and as a matter of fact seems to have still been floating around the U.S. as late as the 1960s.
And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?
EDIT: ADDRESSED BY EDIT TO ABOVE
Well to begin with I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons. Say I am an uneducated black person living in the segregation era in a southern American state. All I know is one candidate supports passing a civil rights bill on my behalf and the other is a bitter racist. I vote for the non-racist. Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.?
On the other hand, I could be capable of answering every question on that test correctly and still believe that the book is a lie and Barack Obama is really a secret Muslim. I can't tell you the number of people I've met who have taken Poli Sci, Econ (even four semesters worth!), history and can recite candidate talking points verbatim who are still basically clueless about everything that matters.
I enjoyed this proposal for a 24-issue Superman run: http://andrewhickey.info/2010/02/09/pop-drama-superman/
There are several Less Wrongish themes in this arc: Many Worlds, ending suffering via technology, rationality:
"...a highlight of the first half of this first year will be the redemption of Lex Luthor – in a forty-page story, set in one room, with just the two of them talking, and Superman using logic to convince Luthor to turn his talents towards good..."
The effect Andrew's text had on me reminded me of how excited I was when I first had read Alan Moore's famous Twilight of the Superheroes. (I'm not sure about how well "Twilight" stands the test of time but see Google or Wikipedia for links to the complete Moore proposal.)
Wow, thanks. And here was me thinking the only thing I had in common with Moore was an enormous beard...
(For those who don't read comics, a comparison with Moore's work is like comparing someone with Bach in music or Orson Welles in film).
Odd to see myself linked on a site I actually read...
You're welcome, Andrew! I thought about forwarding your proposal to David Pearce, too. Maybe it's just my overactive imagination but your ideas about Superman appear to be connectable with his agenda!
Since your proposal is influenced by Grant Morrison's work, I remember that there'll soon be a book by Morrison, titled Supergods: Our World in the Age of the Superhero. I'm sure it will contain its share of esotericisms; on the other hand, as he's shown several times -- recently with All Star Superman -- Morrison seems comfortable with transhumanist ideas. (But then, transhumanism is also a sort of esotericism, at least in the view of its detractors.)
Btw, I had to smile when I read PJ Eby's Everything I Needed To Know About Life, I Learned From Supervillains.
I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level post about it?
I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.
Does anyone have a good reference for the evolutionary psychology of curiosity? A quick google search yielded mostly general EP references. I'm specifically interested in why curiosity is so easily satisfied in certain cases (creation myths, phlogiston, etc.). I have an idea for why this might be the case, but I'd like to review any existing literature before writing it up.