We've had these for a year, I'm sure we all know what to do by now.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


A fascinating article about rationality or the lack thereof as it applied to curing scurvy, and how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm

4Morendil14y
Wonderful article, thanks. I'm fond of reminders of this type that scientific advances are very seldom as discrete, as irreversible, or as incontrovertible as the myths of science make them out to be. When you look at the detailed stories of scientific progress you see false starts, blind alleys, half-baked theories that happen by luck to predict phenomena and mostly sound ones that unfortunately fail on key bits of evidence, and a lot of hard work going into sorting it all out (not to mention, often enough, a good dose of luck). The manglish view, if nothing else, strikes me as a good vitamin for people wanting an antidote to the scurvy of overconfidence. ETA: The article made for a great dinnertime story for my kids. Only one of the three, the oldest (13yo), was familiar with the term "scurvy" - and with the cure as well; both from One Piece. Manga 1 - school 0.
2Tyrrell_McAllister14y
Very interesting. And sobering.

Call for examples

When I posted my case study of an abuse of frequentist statistics, cupholder wrote:

Still, the main post feels to me like a sales pitch for Bayes brand chainsaws that's trying to scare me off Neyman-Pearson chainsaws by pointing out how often people using Neyman-Pearson chainsaws accidentally cut off a limb with them.

So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.

4khafra14y
Some googling around yielded a pdf about a controversial use of Bayes in court. The controversy seems to center around using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.
0Cyan14y
That's a great find!

How do you introduce your friends to LessWrong?

Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?

Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.

For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of argume...

7RobinZ14y
I think of LessWrong from a really, really pragmatic viewpoint: it's like software patches for your brain to eliminate costly bugs. There was a really good illustration in the Allais mini-sequence - that is a literal example of people throwing away their money because they refused to consider how their brain might let them down. Edit: Related to The Lens That Sees Its Flaws.
4XiXiDu14y
It shows you that there is really more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong but that most people are not even wrong. It tells you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right, it teaches you how to think and to become less wrong. And to do so is in your own self interest because it helps you to attain your goals, it helps you to achieve what you want. Thus what you want is to read and participate on LessWrong.
4[anonymous]14y
I am probably a miserable talker, as usually after I introduce rationality/singularity-related topics, people tend to strengthen their former opinions. I could well use a "good argumentation for rationality dummies" article. No, reading through all the sequences does not help. (Understanding would?) Often enough it seems that I achieve better results by trying not to touch any "religious" topic too early; "religious" meaning that the argument against that opinion requires an understanding of reductionism and epistemology worthy of a third-year philosophy student (btw, acceptance is also required). This may seem to take enormous amounts of time to get people onto this train, but, well, the average IQ is 100, and getting rationality seems to be even less widespread than intelligence, so it may actually be more useful to hint in the right direction on specific topics than to try to cover it all. And how does this actually help your own intentions? It seems non-trivial to me that there is a utility function under which taking the time to improve the rationality quotient of a few philosophy/arts students or electricians or whatever is actually a net win. Or is everybody here just hanging out with (gonna-be) scientists?
4michaelkeenan14y
I'm not sure this is what you're doing, but I'm careful not to bring up LessWrong in an actual argument. I don't want arguments for rationality to be enemy soldiers. Instead, I bring rationalist topics up as an interesting thing I read recently, or as an influence on why I did a certain thing a certain way, or hold a particular view (in a non-argument context). That can lead to a full-fledged pitch for LessWrong, and it's there that I falter; I'm not sure I'm pitching with optimal effectiveness. I don't have a good grasp on what topics are most interesting/accessible to normal (albeit smart) people. If rationalists were so common that I could just filter people I get close to by whether they're rationalists, I probably would. But I live in Taiwan, and I'm probably the only LessWrong reader in the country. If I want to talk to someone in person about rationality, I have to convert someone first. I like to talk about these topics, since they're frequently on my mind, and because certain conclusions and approaches are huge wins (especially cryonics and reductionism).
2nazgulnarsil14y
the main hurdle in my experience is getting people over biases that cause them to think that the future is going to look mostly like the present. if you can get people over this then they do a lot of the remaining work for you.

The following stuff isn't new, but I still find it fascinating:

Reverse-engineering the Seagull

The Mouse and the Rectangle

3AdeleneDawner14y
Neat!
2nazgulnarsil14y
what's depressing is the vast disconnect between how well marketers understand super stimulus and how poorly everyone else does. also this: http://www.theonion.com/content/video/new_live_poll_allows_pundits_to

TL;DR: Help me go less crazy and I'll give you $100 after six months.

I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.

I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).

One-time tricks to do one important thing are also welcome, but I'd offer less.

6CronoDAS14y
After reading this thread, I can only offer one piece of advice: You need to see a medical doctor, and fast. Your problems are clearly more serious than anything we can deal with here. If you have to, call 911 and have them carry you off in an ambulance.
6pjeby14y
This is just a guess, and I'm not interested in your money, but I think that you probably have a health problem. I'd suggest you check out the book "The Mood Cure" by Julia Ross, which has some very good information on supplementation. Offhand, you sound like the author's profile for low-in-catecholamines, and might benefit very quickly from fairly low doses of certain amino acids such as L-tyrosine. I strongly recommend reading the book, though, as there are quite a few caveats regarding self-supplementation like this. Using too high a dose can be as problematic as too low, and times of day are important too. Consistent management is important, too. When you're low on something, taking what you need can make you feel euphoric, but when you have the right dose, you won't notice anything by taking some. (Instead, you'll notice if you go off it for a few days, and find mood/energy going back to pre-supplementation levels.) Anyway... don't know if it'll work for you, but I do suggest you try it. (And the same recommendation goes for anyone else who's experiencing a chronic mood or energy issue that's not specific to a particular task/subject/environment.)
3MixedNuts14y
Buying a (specific) book isn't possible right now, but may help later; thanks. I took the questionnaire on her website and apparently everything is wrong with me, which makes me doubt her tests' discriminating power.
5Cyan14y
It's a marketing tool, not a test.
2pjeby14y
FWIW, I don't have "everything" wrong with me; I had only two, and my wife scores on two, with only one the same between the two of us.
6anonymous25914y
I'll come out of the shadows (well, not really; I'm too ashamed to post this under my normal LW username) and announce that I am, or anyway have been, in more or less the same situation as MixedNuts. Maybe not as severe (there are some important things I can do, at the moment, and I have in the past been much worse than I am now -- I would actually appear externally to be keeping up with my life at this exact moment, though that may come crashing down before too long), but generally speaking almost everything MixedNuts says rings true to me. I don't live with anyone or have any nearby family, so that adds some extra difficulty.

Right now, as I said, this is actually a relatively good moment; I've got some interesting projects to work on that are currently helping me get out of bed. But I know myself too well to assume that this will last. Plus, I'm way behind on all kinds of other things I'm supposed to be doing (or already have done).

I'm not offering any money, but I'd be interested to see if anyone is interested in conversing with me about this (whether here or by PM). Otherwise, my reason for posting this comment was to add some evidence that this may be a common problem (even afflicting people you wouldn't necessarily guess suffered from it).

I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward by other people. That's how I was able to do OBLW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.

6AdeleneDawner14y
I have limited mental resources myself, and am sometimes busy, but I'm generally willing to (and find it enjoyable to) talk to people about this kind of thing via IM. I'm fairly easily findable on Skype (put a dot between my first and last names; text only, please), AIM (same name as here), GChat (same name at gmail dot com), and MSN (same name at hotmail dot com). The google email is the one I pay attention to, but I'm not so great at responding to email unless it has obvious questions in it for me to answer. It's also noteworthy that my sleep schedule is quite random - it is worth checking to see if I'm awake at 5am if you want to, but also don't assume that just because it's daytime I'll be awake.
4ata14y
Hope this doesn't turn into a free-therapy bandwagon, but I have a lot of the same issues as MixedNuts and anonymous259, so if anyone has any tips or other insights they'd like to share with me, that would be delightful. My main problem seems to be that, if I don't find something thrilling or fascinating, and it requires much mental or physical effort, I don't do it, even if I know I need to do it, even if I really want to do it. Immediate rewards and punishments help very little (sometimes they actually make things worse, if the task requires a lot of thought or creativity). There are sometimes exceptions when the boring+mentally/physically-demanding task is to help someone, but that's only when the person is actually relying on me for something, not just imposing an artificial expectation, and it usually only works if it's someone I know and care about (except myself). A related problem is that I rarely find anything thrilling or fascinating (enough to make me actually do it, at least) for very long. In my room I have stacks of books that I've only read a few chapters into; on my computer I have probably hundreds of unfinished (or barely started) programs and essays and designs, and countless others that only exist in my mind; on my academic transcripts are many 'W's and 'F's, not because the classes were difficult (a more self-controlled me would have breezed through them), but because I stopped being interested halfway through. So even when something starts out intrinsically motivating for me, the momentum usually doesn't last. Like anon259, I can't offer any money — this sort of problem really gets in the way of wanting/finding/keeping a job — but drop me a PM if gratitude motivates you. :)
3RobinZ14y
To some extent, the purpose of LessWrong is to fix problems with ourselves, and the distinction between errors in reasoning and errors in action is subtle enough that I would hesitate to declare this on- or off-topic. It should be mentioned, however, that the population of LessWrongers-asking-for-advice is unlikely to be representative of the population of LessWrongers, and even less so of the population of agents-LessWrongers-care-about. This is likely to make generalizations drawn from observations here narrower in scope than we might like.
2Alicorn14y
Same deal as the other two - PM me IM contact info, we can chat :)
2Alicorn14y
PM me with your IM contact info and I'll try to help you too. Look, I'll do it for free too!
5Jordan14y
For what it's worth: A few years back I was suffering from some pretty severe health problems. The major manifestations were cognitive and mood related. Often when I was saying a sentence I would become overwhelmed halfway through and would have to consciously force myself to finish what I was saying. Long story short, I started treating my diet like a controlled experiment and, after a few years of trial and error, have come out feeling better than I can ever remember. If you're going to try self-experimentation, the three things I recommend most highly to ease the analysis process are:

* Don't eat things with ingredients in them; instead, eat ingredients.
* Limit each meal to fewer than 5 different ingredients.
* Try to have the same handful of ingredients for every meal for at least a week at a time.
1wedrifid14y
I'm curious. What foods (if you don't mind me asking) did you find had such a powerful effect?
2Jordan14y
I expanded upon it here. What has helped me the most, by far, is cutting out soy, dairy, and all processed foods (there are some processed foods I feel fine eating, but the analysis to figure out which ones proved too costly for the small benefit of being able to occasionally eat unhealthy foods).
5hugh14y
Also, don't offer money. External motivators are disincentives. By offering $100, you are attaching a specific worth to the request, and undermining our own intrinsic motivations to help. Since allowing a reward to disincentivize a behavior is irrational, I'm curious how much effect it has on the LessWrong crowd; regardless, I would be surprised if anyone here tried to collect, so I don't see the point.
2Alicorn14y
My understanding is that the mechanism by which this works lets you sidestep it pretty neatly by also doing basically similar things for free. That way you can credibly tell yourself that you would do it for free, and being paid is unrelated.
2hugh14y
To the contrary. If you pay volunteers, they stop enjoying their work. Other similar studies have been done that show that paying people who already enjoy something will sometimes make them stop the activity altogether, or to at least stop doing it without an external incentive. Edit: AdeleneDawner and thomblake agree with the parent. This may be a counterargument, or just an answer to my earlier question, namely "Are LessWrongers better able to control this irrational impulse?"
1Liron14y
So can a person ever love their day job? It seems that moneymaking/entrepreneurship should be the only reflectively stable passion.
1hugh14y
Obviously, many people do love their day job. However, your question is apt, and I have no answer to it - even with regard to myself. I have often struggled with doing the exact same things at work and for myself, and enjoying one but not the other. I think in my case, it is more an issue of pressure and expectations. However, when trying to answer the question of what I should do with my life, it makes things difficult!
1Alicorn14y
I didn't download the .pdf, but it looks like this was probably conducted by paying volunteers for all of their volunteer work. If someone got paid for half of their hours volunteering, or had two positions doing very similar work and then one of them started paying, I'd expect this effect to diminish.
3hugh14y
The study concerns how many hours per week were spent volunteering; some was paid, some was not, though presumably a single organization would either pay or not pay volunteers, rather than both. Paid volunteers worked less per week overall. The study I referenced was not the one I intended to reference, but I have not found the one I most specifically remember. Citing studies is one of the things I most desperately want an eidetic memory for.
0AdeleneDawner14y
On reflection, it seems to me to be the latter - my cognitive model of money is unusual in general, but this particular reaction seems to be a result of an intentional tweak that I made to reduce my chance of being bribe-able. (Not that I've had a problem with being bribed, but that broad kind of situation registers as 'having my values co-opted', which I'm not at all willing to take risks with.)
1thomblake14y
That seems to work. If I were teaching part-time simply because I needed the money, I wouldn't do it. But I decided that I'd teach this class for free, so I also have no problem doing it for very little money.
1AdeleneDawner14y
Agreed - I do basically similar things for free, and am reasonably confident that my reaction would be "*shrug* ok" if I were to work with MixedNuts and xe wanted to pay me. (I do intend to offer help here; I'm still trying to determine what the most useful offer would be.)
5hugh14y
MixedNuts, I'm in a similar position, though perhaps less severely, and more intermittently. I've been diagnosed with bipolar, though I've had difficulty taking my meds. At this point in my life, I'm being supported almost entirely by a network of family, friends, and associates that is working hard to help me be a real person and getting very little in return.

I have one book that has helped me tremendously, "The Depression Cure", by Dr. Ilardi. He claims that depression-spectrum disorders are primarily caused by lifestyle, and that almost everyone can benefit from simple changes. As with any book - especially a self-help book - it ought to be read skeptically, and it doesn't introduce any ideas that can't be found in modern psychological research. Rather, it aggregates what in Ilardi's opinion are the most important: exercise works more effectively than SSRIs, etc. If you really want a copy, and you really can't get one yourself, I will send you one if you can send me your address. It helped me that much. Which is not to say that I am problem free. Still, a 40% reduction in problem behavior, after 6 months, with increasing rather than decreasing results, is a huge deal for me.

Rather than the whole book, though, I want to give you your "one trick". It is the easiest rather than the most effective, but it has an immediate effect, which helped me implement the others. Morning sunlight. I don't know where you live; I live in a place where I can comfortably sit outside in the morning even this time of year. Get up as soon as you can after waking, and wake as early in the day as you would ideally like to. Walk around, sit, or lie down in the brightest area outside for half an hour. You can read studies on why this works, or that debate its efficacy, but for me it helps.

I realize that your post didn't say anything about depression, just lack of willpower. For me, they were tightly intertwined, and they might not be for you. Please try it anyway.
4MixedNuts14y
Thanks. I'll try the morning light thing; from experience it seems to help somewhat, but I can't keep it going for long. If nothing else works, I'll ask you for the book. I'm skeptical since such books tend to recommend unbootstrappable things such as exercise, but it could help.
4hugh14y
There is one boot process that works well, which is to contract an overseer. For me, it was my father. I felt embarrassed to be a grown adult asking for his father's oversight, but it helped when I was at my worst. Now, I have him, my roommate, two ex-girlfriends, and my advisor who are all concerned about me and check up with me on a regular basis. I can be honest with them, and if I've stopped taking care of myself, they'll call or even come over to drag me out of bed, feed me, and/or take me for a run. I have periodically been an immense burden on the people who love me. However, I eventually came to the realization that being miserable, useless, and isolated was harder and more unpleasant for them than being let in on what was wrong with me and being asked to help. I've been a net negative to this world, but for some reason people still care for me, and as long as they do, my best course of action seems to be to let them try to help me. I suspect you have a set of people who would likewise prefer to help you than to watch you suffer. Feeling less helpless was nearly as good for them as for me. I have a debt to them that I am continuing to increase, because I'm still not healthy or self-sufficient. I don't know if I can ever repay it, but
1MixedNuts14y
Yes, I've considered that. There are people who can and do help, but not to the extent I'd need. I believe they help me as much as they can while still having a life that isn't me. I shouldn't ask for more, should I? If you have tips for getting more efficient help out of them, suggestions of people who'd help though I don't expect them to, or ways to get help from other people (professional caretakers?), by all means please shoot.
4hugh14y
You indicated that you had trouble maintaining the behavior of getting daily morning light. Ask someone who 1) likes talking to you, 2) is generally up at that hour, and 3) is free to talk on the phone, to call you most mornings. They can set an alarm on their phone and have a 2 minute chat with you each day. In my experience if I can pick up the phone (which admittedly can be difficult), the conversation is enough of a distraction and a motivation to get outside, and then inertia is enough to keep me out there. The reason I chose my father is that he is an early riser, self-employed, and he would like to talk to me more than he gets to. You might not have someone like that in your life, but if you do, it is minimally intrusive to them, and may be a big help to you.
3MixedNuts14y
This sounds like a great idea. I have a strong impulse to answer phones, so if I put the phone far enough from my bed I had to get up to answer it, I'd get past the biggest obstacle. There are two minor problems: None of the people I know have free time early in the morning, but two minutes is manageable. When outside, I'm not sure what to do so there's a risk I'd get anxious and default to going home. I'll try it, thanks.
1jimmy14y
If you're going to go to the trouble of talking to someone every morning, you might as well see their face: http://www.blog.sethroberts.net/2009/10/15/more-about-faces-and-mood-2/ Seth found that his mood the next day was significantly improved if he saw enough faces the previous morning. There was a LessWronger that posted somewhere that this trick helped him a lot, but I can't remember who or where right now.
3MixedNuts14y
I see quite a lot of faces in the morning already. Maybe not early enough? Though I'm pretty skeptical; it looks like it'd work best for extroverted neurotypicals, and I'm neither. I added it to the list of tricks, but I'll try others first.
4Alicorn14y
I'm willing to try to help you but I think I'd be substantially more effective in real time. If you would like to IM, send me your contact info in a private message.
3Kevin14y
Do you take fish oil supplements or equivalent? Can't hurt to try; fish oil is recommended for ADHD and very well may repair some of the brain damage that causes mental illness. http://news.ycombinator.com/item?id=1093866
0komponisto14y
Use with caution, however.
2wedrifid14y
I don't understand the link. It doesn't mention fish oil but does suggest that she changed her medication (for depression and anorexia) and then experienced suicidal ideation, which she later acted upon. Medications causing suicidal ideation are not unheard of, but I haven't heard of Omega-3 having any such effect. Some googling gives me more information. It seems that her psychiatrist was transitioning her from one antidepressant to another, and adding fish oil supplements. There are also suggestions that her depression was bipolar. Going off an antidepressant is known to provoke manic episodes in bipolar patients and even those vulnerable to bipolar who had never had an episode. Going on to an antidepressant (and in particular SSRIs, for both 'on' and 'off') can also provoke mania. A manic episode while suffering withdrawal symptoms and the symptoms of a preexisting anxiety-based disorder is a recipe for suicide. As for Omega-3... the prior for its being responsible is low, and it just happened to be on the scene when people were looking for something to blame!
0komponisto14y
Ah, sorry, I should have checked. (I guess it seemed an important enough detail that I just assumed it would be mentioned.) Here (18:20 in the video) is an explicit mention of the fish oil, by her mother; apparently she was taking 12 tablets daily. The way I had interpreted it, which prompted my caution above, was as a case of replacing antidepressants with fish oil, which seems unwise. Looking at it again now reveals there was in fact a plan to continue with antidepressants. It's unclear, however, how far along she was with this plan. In any case, you're right that fish oil may not necessarily have been to blame as the trigger for suicide; but at the very least, it certainly didn't work here, and to the extent that it may have replaced the regular antidepressant treatment...that would seem a rather dubious decision.
3Psy-Kosh14y
I have had, and sometimes still struggle with, similar problems, but there is something that has sometimes helped me: if there's something you need to do, try to do something with it, however little, as soon after you get up as possible. The example I'm going to use is studying, but you can generalize from it. Pretty much as soon as you get up, BEFORE checking email or anything like that, study (or whatever it is you need to do) a bit. And keep doing it until you feel your mental energy "running out"... but then, any time later in the day that you feel a smidgen of motivation, don't let go of it: immediately get back to doing. But starting the day by doing some, however little, seemed to help. I think with me the psychology was sort of "this is the sort of day when I'm working on this", so once I start on it, it's as if I'm "allowed" to periodically keep doing stuff with it during the day. Anyways, as I said, this has sometimes helped me, so...
0MixedNuts14y
Hmm, this may be why there's such a gap between good and bad days. It only applies to things you can do little by little and whenever you want, which is pretty limited but still useful. Thanks.
3wedrifid14y
Order modafinil online. Take it, using 'count backwards then swallow the pill' if necessary. Then, use the temporary boost in mental energy to call a shrink. I have found this useful at times.
2knb14y
Modafinil is a prescription drug, so he would have to see a doctor first, right?
9wedrifid14y
Yes, full compliance with laws and schedules, even ones that are trivial to ignore, is something I publicly advocate.
2knb14y
Ok, I didn't know that scoring illegal prescription drugs online was so easy. Isn't it risky? I know people have been busted for this in the USA, though it may be easier in France.
8wedrifid14y
I will not go into detail on what I understand to be the pragmatic considerations here, since the lesswrong morality encourages a more conservative approach to choosing what to do. The life-extensionists over at imminst.org tend to be experienced in acquiring whatever they happen to need to meet their health and cognitive enhancement goals. They tend to give fairly unbiased reports on the best way to go about getting what you need, accounting for legal risks, product quality risks, price and convenience. I do note that when I want something that is restricted I usually just go tell a doctor that "I have run out" and get them to print me 'another' prescription.
0[anonymous]14y
I'm curious why you say this. I don't get the impression that more than a tiny number of people here would have moral or even ethical qualms about ordering drugs online, though I would non-confidently expect us to overestimate the risk on average.
4Kevin14y
In the USA it's no problem to order unscheduled prescription drugs over the internet. Schedule IV drugs can be imported, but customs occasionally seizes them with no penalty for the importer. No company that takes credit cards will ship Schedule II or Schedule III drugs to the USA; at least not one that will be in business for more than a month or two. I believe it's all easier in Europe but I don't know for sure. PM for more info.
2sketerpot14y
And for completeness, I should note that Modafinil is a Schedule IV drug in the US.
3gwern14y
Also, downloading music & movies is usually a copyright violation, frequently both civil & criminal.
1MixedNuts14y
Thanks, but it gets worse. I can't order anything online, because I need to see my bank about checks or debit cards first. I can imagine asking a friend to do it for me, though it's terrifying; I could probably do it on a good day. Also, I doubt the thing modafinil boosts is the same thing I lack, but it could help, if only through placebo effect.
2wedrifid14y
Terrifying? That's troubling. A shrink can definitely help you! It may boost everything just enough to get you over the line. Good luck getting something done. I hope something works for you. Do whatever it takes.
0HumanFlesh14y
Adrafinil is similar to modafinil, only it's much cheaper because its patent has expired.
3MrHen14y
What do you do when you aren't doing anything? EDIT: More questions as you answer these questions. Too many questions at once is too much effort. I am taking you dead seriously so please don't be offended if I severely underestimate your ability.
3MixedNuts14y
I keep doing something that doesn't require much effort, out of inertia; typically, reading, browsing the web, listening to the radio, washing a dish. Or I just sit or lie there letting my mind wander and periodically trying to get myself to start doing something. If I'm trying to do something that requires thinking (typically homework) when my brain stops working, I keep doing it but I can't make much progress.
3MrHen14y
Possible solutions:

* Increase the amount of effort it takes to do the low-effort things you are trying to avoid. For instance, it isn't terribly hard to set your internet on a timer so it automatically shuts off from 1 - 3pm. While it isn't terribly hard to turn it back on, if you can scrounge up the effort to turn it back on you may be able to put that effort into something else.
* Decrease the amount of effort it takes to do the high-effort things you are trying to accomplish. Paying bills, for instance, can be done online and streamlined. Family and friends can help tremendously in this area.
* Increase the amount of effort it takes to avoid doing the things you are trying to accomplish. If you want to make it to an important meeting, try to get a friend to pick you up and drive you all the way over there.

These are somewhat complicated and broad categories and I don't know how much they would help.
3MixedNuts14y
I've tried all that (they're on LW already).

* That wouldn't work. I do these things by default, because I can't do the things I want. I don't even have a problem with standard akrasia anymore, because I immediately act on any impulse I have to do something, given how rare they are. Also, I can expend willpower to stop doing something, whereas "I need to do this but I can't" seems impervious to it, at least in the amounts I have.
* There are plenty of things to be done here, but they're too hard to bootstrap. The easy ones helped somewhat.
* That helped me most. In the grey area between things I can do and things I can't (currently, cleaning, homework, most phone calls), pressure helps. But no amount of ass-kicking has made me do the things I've been trying to do for a while.
2AdeleneDawner14y
What classes of things are on the 'can't do' list?
3MixedNuts14y
The worst are semi-routine activities; the kind of things you need to do sometimes but not frequently enough to mesh with the daily routine. Going to the bank, making most appointments, looking for an apartment, buying clothes (don't ask me why food is okay but clothes aren't). That list is expanding. Other factors that hurt are:

* need to do it in one sitting, no way of doing a small part at a time
* need to go out
* social situations
* new situations
* being watched while I do it (I can't cook because I share the kitchen with other students, but I could if I didn't)
* having to do it quickly once I start

Most of these cause me fear, which makes it harder to do things, rather than making it harder directly.

This matches my experience very closely. One observation I'd like to add is that one of my strongest triggers for procrastination spirals is having a task repeatedly brought to my attention in a context where it's impossible to follow through on it - ie, reminders to do things from well-intentioned friends, delivered at inappropriate times. For example, if someone reminds me to get some car maintenance done, the fact that I obviously can't go do it right then means it gets mentally tagged as a wrong course of action, and then later when I really ought to do it the tag is still there.

3MixedNuts14y
Definitely. So that's why I can't do the stuff I should have done a while ago! Thanks for the insight. What works for you?
6jimrandomh14y
I ended up just explaining the issue to the person who was generating most of the reminders. It wasn't an easy conversation to have (it can sound like being ungrateful and passing blame) but it was definitely necessary. Sending a link to this thread and then bringing it up later seems like it'd mitigate that problem, so that's probably the way to go. Note that it's very important to draw a distinction between things you haven't done because you've forgotten, for which reminders can actually be helpful, and things you aren't doing because of lack of motivation, for which reminders are harmful. If you're reading this because a chronic procrastinator sent you a link, then please take this one piece of advice: The very worst thing you can do is remind them every time you speak. If you do that, you will not only reduce the chance that they'll actually do it, you'll also poison your relationship with them by getting yourself mentally classified as a nag.
3MixedNuts14y
I can't do that, but thanks anyway. A good deal of the reminders happen in a (semi-)professional context where the top priority is pretending to be normal (yes, my priorities are screwed up). Most others come from a person who doesn't react to "this thing you do is causing me physical pain", so forget it.
3Alicorn14y
Why do you interact with this person?
3MixedNuts14y
They're family. I planned to be as independent from the family ASAP, but couldn't due to my worsening problems.
2jimrandomh14y
In that case, you'll have to mindhack yourself to change the way you react to reminders like this. This isn't necessarily easy, but if you pull it off it's a one-time act with results that stick with you.
3AdeleneDawner14y
That's a good change to make, and there's also a complementary third option: a specific variant of 'making a mental note' that seems to work very well, at least for me.

1) Determine a point in your regular or planned schedule where you could divert from your regular schedule to do the thing that you need to do. This doesn't have to be the optimal point of departure, just a workable one; you should naturally learn how to spot better points of departure as time goes on, but it's more important to have a point of departure than it is to have a perfect one. It is, however, important that the point of departure is a task during which you will be thinking, rather than being on autopilot. I like to use doorway passages as my points of departure (for example, 'when I get home from running the errands I'm going to do tomorrow, and go to open my front door') because they tend to be natural transition times, but there are many other options. (Other favorites are 'next time I see a certain person' and 'when I finish (or start) a certain task'.)

2) Envision what you would perceive as you entered that situation, using whatever visualization method most closely matches your normal way of paying attention to the world. I tend to use my senses of sight and touch most, so I might visualize what I'd see as I walked up to my front door, or the feel of holding my keys as I got ready to open it.

3) Envision yourself suddenly and strongly remembering your task in the situation you envisioned in step two. It may also work, if you aren't able to envision your thoughts like that, to visualize yourself taking the first few task-specific steps - for example, if the task is to write an email, you'd want to visualize not just turning on your computer or starting up your email program, but entering the recipient's name into the to: field and writing the greeting.

If this works for you like it works for me, it should cause the appropriate thought (or task, if you used that variant of step 3)
0MixedNuts14y
I'm doing this wrong. How do you prevent tasks from nagging you at other times?
0AdeleneDawner14y
The technique should work even if you find yourself thinking about the task at other times; it just might not work as well, because of the effect that jimrandomh mentioned about reminders reducing your inclination to do something. A variation of the workaround I mentioned for dealing with others works to mitigate the effect of self-reminders, though - don't just tell yourself 'not right now', tell yourself 'not right now, but at [time/event]'. I can't say much about how to disable involuntary self-reminders altogether, unfortunately. I don't experience them, and if I ever did, it was long enough ago that I've forgotten both that I did and how I stopped. I have, however, read in several different places that using a reliable reminder system (whether one like I'm suggesting, or something more formal like a written or typed list, or whatever) tends to make them eventually stop happening without any particular effort, as the relevant brain-bits learn that the reliable system is in fact reliable, which seems quite plausible to me.
3AdeleneDawner14y
That sounds like a cognitive-load issue at least as much as it sounds like inertia, to me. (Except the being-watched part, that is. I have that quirk too, and I still haven't figured out what that's about.) There are things that can be done about that, but most of them are minor tweaks that would need to be personalized for you. I suspect I might have some useful things to say about the fear, too. I'll PM you my contact info.
3MixedNuts14y
What do you mean by "cognitive load"? I read the Wikipedia article on cognitive load theory, but I don't see the connection. For me, the being-watched part is about embarrassment. I often need to stop and examine a situation and explicitly model it, when most people would just go ahead naturally. Awkward looks cause anxiety.
5AdeleneDawner14y
The concept I'm talking about is broader than the concept that Wikipedia talks about; it's the general idea that brains only have so many resources to go around, that some brains have fewer resources than others or find certain tasks more costly than others, and that it takes a while for those resources to regenerate. Something like this idea has come up a few times here, mostly regarding willpower specifically (and we've found studies supporting it in that case), but my experience is that it's much more generally applicable than that. And, if your brain regenerates that resource particularly slowly, and if you haven't been thinking in terms of conserving that limited resource (or set of resources, depending on how exactly you're modeling it), it's fairly easy to set yourself up with a lifestyle that uses the resource faster than it can regenerate, which has pretty much the effect you described. (I've experienced it, too, and it's not an uncommon situation to hear about in the autistic community.)
5MixedNuts14y
Yes! It does feel like running out of a scarce resource most people have in heaps. I don't know exactly how that resource is generated and how to tell how much I have left before I run out, though.
5AdeleneDawner14y
Fortunately, the latter at least seems to be a learnable skill for most people. :)
1Unnamed14y
There is evidence linking people's limited resources for thought and willpower to their blood glucose, which is another good reason to see a doctor to find out if there's something physiological underlying some of your problems.
1NancyLebovitz14y
Does thinking about having less of that resource than other people tend to consume it?
0MixedNuts14y
That's a good question. There is a correlation between running out of it and thinking about it, but it's pretty obvious that most of the causation happens the other way around. Talking about it here doesn't seem to hurt, so probably not.
2Kutta14y
I have a couple of questions, MixedNuts:

* Have you ever been to a therapist?
* What kind of history do you have regarding any kinds of medical conditions?
* What kind of diagnostic information do you currently have? (blood profile, expert assessment, hair analysis, etc.)
* What kind of drugs have you been taking, if any?
* What does your diet look like?
2MixedNuts14y
* I have, for a few months, about a year and a half ago. It was slightly effective. I stopped when I moved and couldn't get myself to call again.
* Nothing that looks like it should matter.
* Not much. I had a routine blood test some years ago. Everything was normal, though they probably only measured a few things.
* No prescription drugs.
* When I'm on campus I eat mostly vegetables, fresh or canned, and some canned fish or meat, and generic cafeteria food (balanced diet plus a heap of French fries); nothing that requires a lot of effort. At my parents', I eat, um, traditional wholesome food. I eat a lot between meals for comfort, mostly apples. I think my diet is fine in quality but terrible in quantity; I eat way too much and skip meals at random.
4CronoDAS14y
Given your symptoms, the best advice I can give you is to see a medical doctor of some kind, probably a psychiatrist, and describe your problems. It has to be someone who can order medical tests and write prescriptions. You might very well have a thyroid problem - they cause all kinds of problems with energy and such - and you need someone who can diagnose them. I don't know how to get you to a doctor's office, but I guess you could ask someone else to take you?
0blogospheroid14y
How much fresh citrus fruit is there in your diet? One of the things that helped me with near-depression symptoms when I was in another country was eating fresh fruit. Apples and pears helped me, but you're already eating apples. Hmm... try some fresh orange/lemon/sweet lime/grapefruit juice. Might help.
0MixedNuts14y
Quite a lot, but possibly too sporadically. I'll try it, thanks.
1MrHen14y
Okay. Nothing I have will help you. My problems are generally OCD based procrastination loops or modifying bad habits and rituals. Solutions to these assume impulses to do things. I have nothing that would provide you with impulses to do. All of my interpretations of "I can't do X" assume what I mean when I tell myself I can't do X. Sorry. If I were actually there I could probably come up with something but I highly doubt I would be able to "see" you well enough through text to be able to find a relevant answer.
2Unnamed14y
The number one piece of advice that I can give is see a doctor. Not a psychologist or psychiatrist - just a medical doctor. Tell them your main symptoms (low energy, difficulty focusing, panic attacks) and have them run some tests. Those types of problems can have physical, medical causes (including conditions involving the thyroid or blood sugar - hyperthyroidism & hypoglycemia). If a medical problem is a big part of what's happening, you need to get it taken care of. If you're having trouble getting yourself to the doctor, then you need to find a way to do it. Can you ask someone for help? Would a family member help you set up a doctor's appointment and help get you there? A friend? You might even be able to find someone on Less Wrong who lives near you and could help. My second and third suggestions would be to find a friend or family member who can give you more support and help (talking about your issues, driving you to appointments, etc.) and to start seeing a therapist again (and find a good one - someone who uses cognitive-behavioral therapy).
1MixedNuts14y
This is technically a good idea. What counts as "my main symptoms", though? The ones that make life most difficult? The ones that occur most often? The most visible ones to others? To me?
1Unnamed14y
You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then to help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics (like these) can help make that clearer. And be sure to describe anything that seems like it could be physiological (the three that stuck out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others). The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.
1knb14y
I recommend a counseling psychologist rather than a psychiatrist. Or, if you can manage it, do both. I used to be just like this, I actually put off applying for college until I missed the deadlines for my favorite schools, just because I couldn't get myself started. Something changed for me over the last couple years, though, and I'm now really thriving. One big thing that helps in the short term is stimulants: ephedrine and caffeine are OTC in most countries. Make sure you learn how to cycle them, if you do decide to use them. Things seem to get easier over time.
1MixedNuts14y
Why? (The psychiatrist is the one who's a psychologist but can also give you meds, right?) Caffeine seems to work at least a little, but makes me anxious; it's almost always worth it. Thanks. Ephedrine is illegal in France. ETA: Actually, scratch that. I tried drinking coffee and soda when I wasn't unusually relaxed, and the anxiety is too extreme to make me more productive.
3Alicorn14y
A psychiatrist is someone who went to medical school and specialized in the brain. A psychologist is someone who has a PhD in psychology. Putting "clinical" before either means they treat patients; "experimental" means what it sounds like. There's some crosstraining, but not as much as one might imagine. ("Therapist" and "counselor" imply no specific degree.)
2knb14y
Some common misconceptions: Counseling Psychology is a very specific degree program within psychology. A psychologist can have a PhD, a PsyD, (doctor of psychology degree), or in some fields, even a masters. Psychiatrists also don't specialize in "the brain" (that's neurology), they specialize in treating psychiatric disorders using the medical model.
2CronoDAS14y
See the psychiatrist first. Your problems may be caused by some more physiological cause, such as a problem with your thyroid, and a medical doctor is more likely to be able to diagnose them.
2knb14y
(Note: I'm a psychology grad student, my undergrad work was in neuroscience and psychology.) Psychiatrists (in America at least) are usually too busy to do much psychotherapy. When they do, get ready to pay big time. It just isn't worth their extremely valuable time and in any case, it isn't their specialty. You don't want to see a clinical psychologist because they treat people with diagnosable psych. disorders. You may have melancholic depression, but it sounds like you just have extreme akrasia issues. If you go to a psychiatrist first, they'll likely just try to give you worthless SSRIs.
2orthonormal14y
Psychologists are for that reason often cheaper. In fact, a counseling psychologist in a training clinic can be downright affordable, and most of the benefits of therapy seem to be independent of the therapist anyway. Also, it would be worth checking for data on the effectiveness of a psychiatric drug before spending on it; many may be ineffective or not worth the side effects.
4MixedNuts14y
Is Crazy meds as good as it looks?
2wedrifid14y
Absolutely. Just reading it made my day! Hilarious. (And the info isn't bad either. )
1wedrifid14y
And if you live in Australia can sometimes be free!
-3[anonymous]14y
(Suggest seeing a psychiatrist first then a psychologist. Therapy works far better once your brain is functioning. Usually just go to a doctor and they will refer you as appropriate.)
1whpearson14y
Do you want a companion of some sort? If so, a mind hack that might work is imagining what a hypothetical companion might find attractive in a person. Then try and become that person. Do this by using your hypothetical companion as a filter on what you are doing. Don't beat yourself up about not doing what the hypothetical companion would find attractive, that isn't attractive! Your hypothetical companion does not have to be neurotypical but should be someone you would want to be around. We should be good at following on from these kinds of motivations as we have a long history of trying to get mates by adjusting behaviour.
1MixedNuts14y
I've sort of considered that, though not framed that way. It might be useful later, but not at my current level. Thanks.
1Mitchell_Porter14y
Maybe you need to go more crazy, not less. Accept that you are in an existential desert and your soul is dying. But there are other places over the horizon, where you may or may not be better off. So either you die where you are, or you pick a direction, crawl, and see if you end up somewhere better.
1MixedNuts14y
I've considered that. There are changes in circumstances that would effect positive changes in my mental state, like hopping on the first train to a faraway town or just no longer pretending I'm normal in public. I'd be much happier, until I run out of money.
1Mitchell_Porter14y
Why would you run out of money if you stopped pretending you're normal?
1MixedNuts14y
I couldn't go to school or get a job. If I stay in school, I have a career ahead of me if I can pursue it.
3Mitchell_Porter14y
What is this abnormality you have which, if you displayed it, would make it impossible to go to school or get a job?
0MixedNuts14y
Not one big abnormality. Inability to work for long stretches of time (you can get good at faking). Trouble focusing at random-ish times (even easier to fake). Inability to do certain things out of routine (now I pretend I'll do it later). Extreme anxiety at things like paperwork. Panic attacks (I can delay them until I'm alone, but the cost is high). Sometimes after a panic attack my legs refuse to work, so I just sit there; I could crawl, but I don't in public. Stimming (I choose consciously to do it, but the effects of not doing it when it's needed are bad; I do it as discreetly as possible while still effective).
2CronoDAS14y
Panic attacks are a very treatable illness. See a medical doctor and tell him or her all about this.
0Kevin14y
Not wanting to go to school or get a job?
1MixedNuts14y
Nice try. I do, very much; I want a job so I can get money so I can do things (such as, you know, saving the world). I don't particularly like schooling but it helps get jobs, and has less variance than being an autodidact.
1Jack14y
I imagine a specific authority in my life or from my past (okay, this is usually my mother) getting really angry and yelling at me to get my ass up and get to work. If you have any memories of being yelled at by an authority figure, use those to help build the image.
1MixedNuts14y
I promise to give this an honest try, but I expect it to result in panic more than anything.
1h-H14y
Try this: http://www.antiprocrastinator.com/ Also, contact someone who is proficient in helping people; e.g., here we have Alicorn. Or try some googling.
1MixedNuts14y
I'm desperate enough to ask on LW. Of course I've Googled everything I could think of. The link is decent, combining two good tricks and a valuable insight, but all three have been on LW before so I knew them. Pointing out Alicorn in particular may be useful, but isn't it sort of forcing her to offer help? She already did, though, which makes this point moot.
0h-H14y
I more or less meant direct a question to her and see what happens rather than impose and keep bugging, which I had a feeling you wouldn't do in either case.
1Alicorn14y
I'm flattered, but while I enjoy helping people, I'm not sure how I've projected being proficient at it such that you'd notice - can you explain whence this charming compliment?
1h-H14y
Why, of course! I've been lurking for a few years now, so I remember when you began posting on self-help etc. Now that I think more about it, though, I might've had pjeby in mind as well; you two sort of 'merged' when I wrote the above comment, heh. But really, "proficient" is just a word choice. I guess it is flattery, and I did mean to signal you, but that's how I usually write. Apologies if that overburdened you in any way. ETA: oh, and I'd meant to write 'more proficient', not just 'proficient'.
0markrkrebs14y
I suggest you pay me $50 for each week you don't get and hold a job. Else, avoid paying me by getting one, and save yourself 6 mo x 4 wk/mo x $50 = $1200! Wooo! What a deal for us both, eh?
3MixedNuts14y
That's an amusing idea, but disincentives don't work well, and paying money is too Far a disincentive to work (now, if you followed me around and punched me, that might do the trick). This reminds me of the joke about a beggar who asks Rothschild for money. Rothschild thinks and says "A janitor is retiring next week, you can have their job and I'll double the pay.", and the beggar replies "Don't bother, I have a cousin who can do it for the original wage, just give me the difference!"

Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).

One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.

Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.

It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).

8orthonormal14y
I'd start by asking whether the unknowns of the problem are primarily social and psychological, or whether they include things that the human intuition doesn't handle well (like large numbers). If it's the former, then good news! This is basically the sort of problem your frontal cortex is optimized to solve. In fact, you probably unconsciously know what the best choice is already, and you might be feeling conflicted so as to preserve your conscious image of yourself (since you'll probably have to trade off conscious values in such a choice, which we're never happy to do). In such a case, you can speed up the process substantially by finding some way of "letting the choice be made for you" and thus absolving you of so much responsibility. I actually like to flip a coin when I've thought for a while and am feeling conflicted. If I like the way it lands, then I do that. If I don't like the way it lands, well, I have my answer then, and in that case I can just disobey the coin! (I've realized that one element of the historical success of divination, astrology, and all other vague soothsaying is that the seeker can interpret a vague omen as telling them what they wanted to hear— thus giving divine sanction to it, and removing any human responsibility. By thus revealing one's wants and giving one permission to seek them, these superstitions may have actually helped people make better decisions throughout history! That doesn't mean it needs the superstitious bits in order to work, though.) If it's the latter case, though, you probably need good specific advice from a rational friend. Actually, that practically never hurts.
7Dagon14y
A few principles that can help in such cases (major decision, very little direct data):

* Outside view. You're probably more similar to other people than you like to think. What has worked for them?
* Far vs Near mode: beware of generalizations when visualizing distant (more than a few weeks!) results of a choice. Consider what daily activities will be like.
* Avoiding oversimplified modeling: With the exceptions of procreation and suicide, there are almost no life decisions that are permanent and unchangeable.
* Shut up and multiply, even for yourself: Many times it turns out that minor-but-frequent issues dominate your happiness. Weight your pros/cons for future choices based on this, not just on how important something "should" be.
7Eliezer Yudkowsky14y
...I don't suppose you can tell us what? I expect that if you could, you would have said, but thought I'd ask. It's difficult to work with this little. I could toss around advices like "A lot of Major Life Decisions consist of deciding which of two high standards you should hold yourself to" but it's just a shot in the dark at this point.
5MrHen14y
I am not that far in the sequences, but these are posts I would expect to come into play during Major Life Decisions. These are ordered by my perceived relevance and accompanied with a cool quote. (The quotes are not replacements for the whole article, however. If the connection isn't obvious feel free to skim the article again.) Hope that helps.
4Morendil14y
Based on those two lucid observations, I'd say you're doing well so far. There are some principles I used to weigh major life decisions. I'm not sure they are "rationalist" principles; I don't much care. They've turned out well for me. Here's one of them: "having one option is called a trap; having two options is a dilemma; three or more is truly a choice". Think about the terms of your decision and generate as many different options as you can. Not necessarily a list of final choices, but rather a list of candidate choices, or even of choice-components. If you could wave a magic wand and have whatever you wanted, what would be at the top of your list? (This is a mind-trick to improve awareness of your desires, or "utility function" if you want to use that term.) What options, irrespective of their downsides, give you those results? Given a more complete list you can use the good old Benjamin Franklin method of listing pros and cons of each choice. Often this first step of option generation turns out sufficient to get you unstuck anyway.
4[anonymous]14y
Having two options is a dilemma, having three options is a trilemma, having four options is a tetralemma, having five options is a pentalemma... :)
3Cyan14y
A few more than five is an oligolemma; many more is a polylemma.
1knb14y
Many more is called perfect competition. :3
3RobinZ14y
Just remembered: I managed not to be stupid on one or two times by asking whether, not why.
3Jordan14y
I just came out of a tough Major Life Situation myself. The rationality 'tools' I used were mostly directed at forcing myself to be honest with myself, confronting the facts, not privileging certain decisions over others, recognizing when I was becoming emotional (and more importantly recognizing when my emotions were affecting my judgement), tracking my preferred choice over time and noticing correlations with my mood and pertinent events. Overall, less like decision theory and more like a science: trying to cut away confounding factors to discover my true desire. Of course, sometimes knowing your desires isn't sufficient to take action, but I find that for many personal choices it is (or at least is enough to reduce the decision theory component to something much more manageable).
2RobinZ14y
The dissolving the question mindset has actually served me pretty well as a TA - just bearing in mind the principle that you should determine what led to this particular confused bottom line is useful in correcting it afterwards.
0[anonymous]14y
Well, what are "major" life decisions? Working in the area of Friendly AGI instead of, say, just string theory? Quitting smoking? Or things like having a child or not?

As one may guess from those questions, I did not have any more success by coercing the Bayesian monster than I would have had by just doing the things which already seemed well supported by major pop-science newspaper articles. What I do know is that, although it is difficult to get information on what to do next in my special situation, it seems much easier to get information on things many people already do. I just make the educated guess that nearly everybody does many of the things which many people do. And often enough one can find things which one does but which should not be done. It may sound silly, but I include things like not smoking, not talking to your friends when you're depressed (writing personal notes works better, as friends seem to reinforce the bad mood), and not trying to work as a researcher (y'know, 80% of people think they are above average...).

What you describe as "X, the fictional character" seems like setting up an in-brain story to think about difficult topics which require analytical thinking, helping you concentrate on one topic by actively blocking random interference of visual/auditory ideas. This is not a "convincing argument" (maybe it's just my English skills, but "convincing argument ... what would X do" just does not parse into something meaningful for me) but just a technique, similar to concentrating on breathing, or muscle tonus, or your thoughts, or some real or imaginary candle or smell, when executing the meditation of your preference.

Pigeons can solve the Monty Hall dilemma (MHD)?

A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.

Behind a paywall

But freely available from one of the authors' website.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
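For anyone who wants to see where the "optimal strategy" figure comes from without trusting it, the standard MHD is easy to simulate. A sketch (this is the classic three-door game, not the birds' operant-chamber version; the function and variable names are mine):

```python
import random

def play(switch):
    """One round of the standard three-door Monty Hall game.
    Returns True if the player wins the prize."""
    prize = random.randrange(3)
    choice = random.randrange(3)
    # The host opens a door that hides no prize and wasn't chosen.
    opened = random.choice([d for d in range(3) if d != prize and d != choice])
    if switch:
        # Switch to the one remaining closed door.
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

trials = 100_000
stay = sum(play(False) for _ in range(trials)) / trials
switch = sum(play(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {switch:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```

So the "optimal strategy" the pigeons converged on is simply to switch every time.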

How much information is preserved by plastination? Is it a reasonable alternative to cryonics?

4Jack14y
Afaict pretty much the same amount as cryonics. And it is cheaper and more amenable to laser scanning. This is helpful. The post has an interesting explanation of why all the attention is on cryo: Edit: Further googling suggest there might be some unsolved implementation issues.
1Paul Crowley14y
See the last question in this list

This was in my drafts folder, but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top-level post. As such, I am making it a comment here. It also does not answer the question being asked, so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P


Perceived Change

Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards I cut the deck and continued dealing. This irritated them a great deal because I altered the ord... (read more)

To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.

8MrHen14y
No, this wasn't their true objection. I have a near flawless reputation for being honest and the arguments that ensued had nothing to do with stacking the deck. If I were a dispassionate third party dealing the game they would have objected just as strongly. I initially had a second example as such: It seems as though some personal attachment is created with the specific random object. Once that object is "taken," there is an associated sense of loss.
6prase14y
Your reputation doesn't matter. Once the rules are changed, you are on a slippery slope of changing rules. The game slowly ceases to be poker. When I am playing chess, I demand that white moves first. When I find myself as black, knowing that the opponent had white the last game and it is now my turn to make the first move, I would rather change places or rotate the chessboard than make the first move with black, although it would not change my chances of winning. (I don't remember the standard openings, so I wouldn't be confused by the change of colors. And even if I were, the same would hold for the opponent.) Rules are rules in order to be respected. They are often somewhat arbitrary, but you shouldn't change any arbitrary rule during the game without the prior consent of the others, even if it provably has no effect on the winning odds. I think this is a fairly useful heuristic. Usually, when a player tries to change the rules, he has some reason, and usually the reason is to increase his own chances of winning. Even if your opponent doesn't see any profit you could gain from changing the rules, he may suppose that there is one. Maybe you somehow remember that there are better or worse cards in the middle of the pack. Or you are trying to test their attention. Or you want to make more important changes of rules later, and wanted a precedent for doing so. These possibilities are quite realistic in gambling, and therefore it is considered bad manners to change the rules in any way during the game.
3MrHen14y
I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments. A summary: * The poker game is an example. There are more examples involving things with less obvious rules. * My reputation matters in the sense that they know I wasn't trying to cheat. As such, when pestered for an answer they are not secretly thinking, "Cheater." This should imply that they are avoiding the cheater-heuristic or are unaware that they are using the cheater-heuristic. * I confronted my friends and asked for a reasonable answer. Heuristics were not offered. No one complained about broken rules or cheating. They complained that they were not going to get their card. It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride? One more thing of note: they argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered. Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?
7Nick_Tarleton14y
The System 1 suspicion-detector would be less effective if System 2 could override it, since System 2 can be manipulated. (Another possibility may be loss aversion, making any change unattractive that guarantees a different outcome without changing the expected value. (I see hugh already mentioned this.) A third, seemingly less likely, possibility is intuitive 'belief' in the agency of the cards, which is somehow being undesirably thwarted by changing the ritual.)
0MrHen14y
Why can I override mine? What makes me different from my friends? The answer isn't knowledge of math or probabilities.
2Nick_Tarleton14y
I really don't know. Unusual mental architecture, like high reflectivity or 'stronger' deliberative relative to non-deliberative motivation? Low paranoia? High trust in logical argument?
1prase14y
Depends, of course, on what exactly you would say and how unpleasant the writing is for you. I would say that they implement the rule-changing heuristic, which is not automatically thought of as an instance of the cheater-heuristic, even if it evolved from it. Changing the rules makes people feel unsafe; people who do it without good reason are considered dangerous, but not automatically cheaters. EDIT: And also, from your description it seems that you deliberately broke a rule without giving any reason for it. That is suspicious.
1MrHen14y
This behavior is repeated in scenarios where the rules are not being changed or there aren't "rules" in the sense of a game and its rules. These examples are significantly fuzzier which is why I chose the poker example. The lottery ticket example is the first that comes to mind. Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?

Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?

Because people aren't good at telling their actual reason for disagreement. I suspect that they are aware that the particular rule is arbitrary and doesn't influence the game, and almost everybody agrees that blindly following the rules is not a good idea. So "you broke the rules" doesn't sound as a good justification. "You have influenced the outcome", on the other hand, does sound like a good justification, even if it is irrelevant.

The lottery ticket example is a valid argument, which is easily explained by attachment to random objects and which can't be explained by the rule-changing heuristic. However, rule-fixing sentiments certainly exist and I am not sure which plays the stronger role in the poker scenario. My intuition was that the poker scenario was more akin to, say, playing tennis in non-white clothes in the old times when it was demanded, or missing the obligatory bow before the match in judo.

Now, I am not sure which of these effects is more important in the poker scenario, and moreover I don't see an experiment by which we could discriminate between the explanations.

2RobinZ14y
This is the best synopsis of the "true rejection" article I have ever seen.
1MrHen14y
That works for me. I am not convinced that the rule-changing heuristic was the cause but I think you have defended your position adequately.
-1Sniffnoy14y
But this isn't a rule of the game - it's an implementation issue. The game is the same so long as cards are randomly selected without replacement from a deck of the appropriate sort.
3Nick_Tarleton14y
(The first Google hit for "texas hold'em rules" in fact mentions burning cards.) That the game has the same structure either way is recognized only at a more abstract mental level than the level that the negative reaction comes from; in most people, I suspect the abstract level isn't 'strong enough' here to override the more concrete/non-inferential/sphexish level.
1prase14y
The ideal decision algorithm used in the game remains the same, but people don't look at it this way. It is a rule, since it is how they have learned the game.
4RobinZ14y
I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly, "they object to any alteration of the dealing rules" - and they might do so for the wrong reason, even though, in their defense, valid reasons exist. Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not. (Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)
5MrHen14y
EDIT: Wow, this turned into a ramble. I didn't have time to proofread it, so I apologize if it doesn't make sense. Okay, yeah, that makes sense. My instinct is pointing me in the other direction, namely because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I had accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck. (By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.) Any pseudo-random event where people can (a) predict the undisclosed particular random object and (b) someone can voluntarily preempt that prediction and change the result tends to receive the same behavior. I have not tested it in the sense that I sought to eliminate any form of weird contamination, but I have lots of anecdotal evidence. One such, very true, story: Granted, there are a handful of obvious holes in this particular story. The list includes: * My grandfather could have merely used it as an excuse to jab his son-in-law in the ribs (very likely) * My grandfather was lying (not likely) * The bingo organizers knew that rhinos were chosen more often than turtles (not likely) * My grandfather wasn't very good at probability (likely, considering he was playing bingo) * Etc. More stories like this have taught me to never muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for a different equal chance will get extremely depressed because they actually "had a shot at winning." These people could completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they s
3RobinZ14y
Have no fear - your comment is clear. I'll give you that one, with a caveat: if an algorithm consistently outputs correct data rather than incorrect, it's a heuristic, not a bias. They lose points either way for failing to provide valid support for their complaint. Yes, those anecdotes constitute the sort of data I requested - your hypothesis now outranks mine in my sorting. When I read your initial comment, I felt that you had proposed an overly complicated explanation based on the amount of evidence you presented for it. I felt so based on the fact that I could immediately arrive at a simpler (and more plausible by my prior) explanation which your evidence did not refute. It is impressive, although not necessary, when you can anticipate my plausible hypothesis and present falsifying evidence; it is sufficient, as you have done, to test both hypotheses fairly against additional data when additional hypotheses appear.
3MrHen14y
Ah, okay. That makes more sense. I am still experimenting with the amount of predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem. But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?
8thomblake14y
Something like "ownership" seems right, as well as the loss aversion issue. Somehow, this seemingly-irrational behavior seems perfectly natural to me (and I'm familiar with similar complaints about the order of cards coming out). If you look at it from the standpoint of causality and counterfactuals, I think it will snap into place... Suppose that Tim was waiting for the king of hearts to complete his royal flush, and was about to be dealt that card. Then, you cut the deck, putting the king of hearts in the middle of the deck. Therefore, you caused him to not get the king of hearts; if your cutting of the deck were surgically removed, he would have had a royal flush. Presumably, your rejoinder would be that this scenario is just as likely as the one where he would not have gotten the king of hearts but your cutting of the deck gave it to him. But note that in this situation the other players have just as much reason to complain that you caused Tim to win! Of course, any of them is as likely to have been benefited or hurt by this cut, assuming a uniform distribution of cards, and shuffling is not more or less "random" than shuffling plus cutting. A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. The frequentist would charitably assume that the shuffle guarantees a uniform distribution, so that the cards each have the same probability of appearing on any particular draw. The Bayesian will symmetrically note that shuffling makes everyone involved assign the same probability to each card appearing on any particular draw, due to their ignorance of which ones are more likely. But this only works because everyone involved grants that shuffling has this property. You could imagine someone who paid attention to the shuffle and knew exactly which card was going to come up, and then was duly annoyed when you unexpectedly cut the deck. Given that such a person is possible in princip
3MrHen14y
Yep. This really is a digression which is why I hadn't brought up another interesting example with the same group of friends: We didn't do any tests on the subject because we really just wanted the annoying kid to stop dealing weird. But, now that I think about it, it should be relatively easy to test... Also related, I have learned a few magic tricks in my time. I understand that shuffling is a tricksy business. Plenty of more amusing stories are lurking about. This one is marginally related: This example is a counterpoint to the original. Here is someone claiming that it doesn't matter when the math says it most certainly does. The aforementioned cheater-heuristic would have prevented this player from doing something Bad. I honestly have no idea if he was just lying to us or was completely clueless but I couldn't help but be extremely suspicious when he ended up winning first place later that night.
6thomblake14y
On a tangent, my friends and I always pick the initial draw of cards using no particular method when playing Munchkin, to emphasize that we aren't supposed to be taking this very seriously. I favor snatching a card off the deck just as someone else is reaching for it.
1hugh14y
When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without [suspicion of] malicious intent breaks a ritual associated with the game. While in this instance, the ritual doesn't protect the integrity of the game, rituals can be very important in getting into and enjoying activities. Humans are badly wired, and Less Wrong readers work hard to control our irrationalities. One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.
1MrHen14y
We didn't until the people on TV did it. The ritual was only important in the sense that this is how they were predicting which card they were going to get. Their point was based entirely on the fact that the card they were going to get is not the card they ended up getting. As a reminder to the ongoing conversation, we had arguments about the topic. They didn't say, "Do it because you are supposed to do it!" They said, "Don't change the card I am supposed to get!" Sure, but this isn't one of those cases. In this case, they are complaining for no good reason. Well, I guess I haven't found a good reason for their reaction. The consensus in the replies here seems to be that their reaction was wrong. I am not trying to say you shouldn't enjoy your coffee rituals.

RobinZ ventured a guess that their true objection was not their stated objection; I stated it poorly, but I was offering the same hypothesis with a different true objection--that you were disrupting the flow of the game.

I'm not entirely sure if this makes sense, partially because there is no reason to disguise unhappiness with an unusual order of game play. From what you've said, your friends worked to convince you that their objection was really about which cards were being dealt, and in this instance I think we can believe them. My fallacy was probably one of projection, in that I would have objected in the same instance, but for different reasons. I was also trying to defend their point of view as much as possible, so I was trying to find a rational explanation for it.

I suspect that the real problem is related to the certainty effect. In this case, though no probabilities were altered, there was a new "what-if" introduced into the situation. Now, if they lose (or rather, when all but one of you lose) they will likely retrace the situation and think that if you hadn't cut the deck, they could have won. Which is true, of course, but irrelevant, since it also could have ... (read more)

2MrHen14y
I agree with your comment and this part especially: Very true. I see a lot of behavior that matches this. This would be an excellent source of the complaint if it happened after they lost. My friends complained before they even picked up their cards.
-1gwern14y
That's what they say, I take it.
6orthonormal14y
To modify RobinZ's hypothesis: Rather than focusing on any Bayesian evidence for cheating, let's think like evolution for a second: how do you want your organism to react when someone else's voluntary action changes who receives a prize? Do you want the organism to react, on a gut level, as if the action could have just as easily swung the balance in their favor as against them? Or do you want them to cry foul if they're in a social position to do so? Your friends' response could come directly out of that adaptation, whatever rationalizations they make for it afterwards. I'd expect to see the same reaction in experiments with chimps.
3MrHen14y
I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question. Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently? I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution. As much as, "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why? I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?
1[anonymous]14y
It might be because people perceive a loss more severely than a gain. There might be an evolutionary explanation for that. Because of that, they would perceive their "lost" card, which they already thought would be theirs, more severely than the card they "gained" after the cut. While you, on the other hand, might already be trained to think about it differently.
1JamesPfeiffer14y
Based on my friends, the care/don't care dichotomy cuts orthogonally to the math/no math dichotomy. Most people, whether good or bad at math, can understand that the chances are the same. It's some other independent aspect of your brain that determines whether it intensely matters to you to do things "the right way" or if you can accept the symmetry of the situation. I hereby nominate some OCD-like explanation. I'd be interested in seeing whether OCD correlated with your friends' behavior. As a data point, I am not OCD and don't care if you cut the deck.
2MrHen14y
I am more likely to be considered OCD than any of my friends in the example. I don't care if you cut the deck.
3rwallace14y
It's a side effect. Yes, they were being irrational in this case. But the heuristics they were using are there for good reason. If they had money coming to them and you swooped in and took it away before it could reach them, they would be rational to object, right? That's why those heuristics are there. In practice the trigger conditions for these things are not specified with unlimited precision, and pure but interruptible random number generators are not common in real life, so the trigger conditions harmlessly spill over to this case. But the upshot is that they were irrational as a side effect of usually rational heuristics.
4MrHen14y
So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation? I can understand your answer if the scenario was more like: "Hey! Don't do that!" "But it doesn't matter. See?" "Oh. Well, okay. But don't do it anyway because..." And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them that it was rational with math. Something else was the problem. "Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong? I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments? How would you describe this heuristic in a few sentences?
6AdeleneDawner14y
I suspect it starts with something like "in the context of a game or other competition, if my opponent does something unexpected, and I don't understand why, it's probably bad news for me", with an emotional response of suspicion. Then when your explanation is about why shuffling the cards is neutral rather than being about why you did something unexpected, it triggers an "if someone I'm suspicious of tries to convince me with logic rather than just assuring me that they're harmless, they're probably trying to get away with something" heuristic. Also, most people seem to make the assumption, in cases like that, that they aren't going to be able to figure out what you're up to on the fly, so even flawless logic is unlikely to be accepted - the heuristic is "there must be a catch somewhere, even if I don't see it".
5orthonormal14y
Because human beings often first have a reaction based on an evolved, unconscious heuristic, and only later form a conscious rationalization about it, which can end up looking irrational if you ask the right questions (e.g. the standard reactions to the incest thought experiment there). So, yes, they were probably unaware of the heuristic they were actually using. I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken. Given that rigorous probability theory didn't emerge until the later stages of human civilization, there's not much room for an additional heuristic saying "unless it doesn't change the odds" to have evolved; indeed, all of the agreed-upon random ways of selecting things (that I've ever heard of) work by obvious symmetry of chances rather than by abstract equality of odds†, and most of the times someone intentionally changed the process, they were probably in fact hoping to cheat the odds. † Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win. Now compute the odds (50-50, unless I made a dumb mistake), and then actually try it (in real life) with non-negligible stakes. I predict that you'll feel slightly more uneasy about the experience than you would be flipping a coin.
5MrHen14y
Everything else you've said makes sense, but I think the heuristic here is way off. Firstly, they object before the results have been produced, so the benefit is unknown. Second, the assumption of an agreed-upon procedure is only really valid in the poker example. Other examples don't have such an agreement and seem to display the same behavior. Finally, the change to the procedure could be made by a disinterested party with no possible personal gain to be had. I suspect that the reaction would stay the same. So, whatever heuristic may be at fault here, it doesn't seem to be the one you are focusing on. The fact that my friends didn't say, "You're cheating" or "You broke the rules" is more evidence against this being the heuristic. I am open to the idea of a heuristic being behind this. I am also open to the idea that my friends may not be aware of the heuristic or its implications. But I don't see how anything is pointing toward the heuristic you have suggested. Hmm... 1/3 of the time I win outright... 2/3 of the time we enter a second roll where I win 1/4 of the time. Is that... 1/3 + 2/3 * 1/4 = 1/3 + 2/12 = 4/12 + 2/12 = 6/12 = 1/2. Seems right to me. And I don't expect to feel uneasy about such an experience at all, since the odds are the same. If someone offered me a scenario and I didn't have the math prepared, I would work out the math and decide if it is fair. If we ran the contest and you started winning every single time, I might start getting nervous. But I would do the same thing regardless of the dice/coin combos we were using. I would actually feel safer using the dice, because I have found that I can strongly influence flipping a fair quarter in my favor without much effort.
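The 50-50 arithmetic above can also be verified exactly by enumerating all (d6, d12) outcomes (a quick check, not part of the original exchange):

```python
from fractions import Fraction

# P(win) = P(d6 in {1,2}) + P(d6 in {3..6}) * P(d12 in {10,11,12})
p_win = Fraction(2, 6) + Fraction(4, 6) * Fraction(3, 12)
print(p_win)  # 1/2

# Brute enumeration over all 72 equally likely (d6, d12) pairs agrees:
wins = sum(1 for d6 in range(1, 7) for d12 in range(1, 13)
           if d6 <= 2 or d12 >= 10)
print(Fraction(wins, 72))  # 1/2
```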
2JGWeissman14y
An important element of it being fair for you to cut the deck in the middle of dealing, which your friends may not trust, is that you do so in ignorance of who it will help and who it will hinder. By cutting the deck, you have explicitly made and acted on a choice (it is far less obvious when you choose not to cut the deck, the default expected action), and this causes your friends to worry that the choice may have been optimized for interests other than their own.
0MrHen14y
I don't think this is relevant. I responded in more detail to RobinZ's comment.
1Jordan14y
As you note, regular poker and poker with an extra cut mid-deal are completely isomorphic. In a professional game you would obviously care, because the formality of the shuffle and deal is part of a tradition to instill trust that the deck isn't rigged. For a casual game, where it is assumed no one is cheating, then, unless you're a stickler for tradition, who cares? Your friends are wrong. We have two different pointers pointing to the same thing, and they are complaining because the pointers aren't the same, even though all that matters is what those pointers point to. It would be like complaining if you tried to change the name of poker to Wallaboo mid-deal.
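The isomorphism is also easy to check empirically: cutting the remaining deck mid-deal leaves every player's card uniformly distributed. A sketch (four players, one card each, and the cut position are arbitrary choices of mine, not anything from the game described above):

```python
import random
from collections import Counter

def last_players_card(cut_mid_deal):
    """Deal one card each to four players, optionally cutting the
    remaining deck after the second card. Returns the last player's card."""
    deck = list(range(52))
    random.shuffle(deck)
    cards = []
    for i in range(4):
        if cut_mid_deal and i == 2:
            # The cut: move the top half of what's left to the bottom.
            mid = len(deck) // 2
            deck = deck[mid:] + deck[:mid]
        cards.append(deck.pop(0))
    return cards[3]

trials = 52_000
no_cut = Counter(last_players_card(False) for _ in range(trials))
with_cut = Counter(last_players_card(True) for _ in range(trials))
# Both histograms are flat (roughly trials/52 = 1000 per card): the cut
# changes *which* card the last player gets on a given deal, not the odds.
```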
4Violet14y
There are rules for the game that are perceived as fair. If one participant goes changing the rules in the middle of the game this 1) makes rule changing acceptable in the game, 2) forces other players to analyze the current (and future changes) to the game to ensure they are fair. Cutting the deck probably doesn't affect the probability distribution (unless you shuffled the deck in a "funny" way). Allowing it makes a case for allowing the next changes in the rules too. Thus you can end up analyzing a new game rather than having fun playing poker.
1MrHen14y
Sure, but the "wrong" in this case couldn't be shown to my friends. They perfectly understood probability. The problem wasn't in the math. So where were they wrong? Another way of saying this: * The territory said one thing * Their map said another thing * Their map understood probability * Where did their map go wrong? The answer has nothing to do with me cheating and has nothing to do with misunderstanding probability. There is some other problem here and I don't know what it is.
-2cousin_it14y
An argument isomorphic to yours can be used to demonstrate that spousal cheating is okay as long as there are no consequences and the spouse doesn't know. Maybe your concept of "valid objection" is overly narrow?
3MrHen14y
Rearranging the cards in a deck has no statistical consequence. Cheating on your spouse significantly alters the odds of certain things happening. If you add the restriction that there are no consequences, there wouldn't really be much point in doing it, because it's not like you get sex as a result. That would be a consequence. The idea that something immoral shouldn't be immoral if no one catches you and nothing bad happens as a result is an open problem as far as I know. Most people don't like such an idea, but I hear the debate surface from time to time. (Usually from people trying to convince themselves that whatever they just did wasn't wrong.) In addition, cutting a deck of cards does have an obvious effect: there is no statistical consequence, but obviously you are not going to get the card you were originally going to be dealt.

I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to lesswrong. Is there demand?

0Will_Newsome14y
If and only if you can explain UDT in text at least as clearly as you explained it to me in person; I don't think that would take a very long post.
1Alicorn14y
Maybe he should explain it again in person and someone should transcribe?

How important are 'the latest news'?

These days many people follow an enormous number of news sources. I myself notice that skimming through my Google Reader items is increasingly time-consuming.

What is your take on it?

  • Is it important to read up on the latest news each day?
  • If so, what are your sources, please share them.
  • What kind of news are important?

I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist s... (read more)

I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.

In any source that contained news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.

However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.

I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.

ETA: My feed reader contains the following:

... (read more)
3Morendil14y
Good question, which I'm finding surprisingly hard to answer. (i.e. I've spent more time composing this comment than is perhaps reasonable, struggling through several false starts). Here are some strategies/behaviours I use: expand and winnow; scorched earth; independent confirmation; obsession. * "expand and winnow": after finding an information source I really like (using the term "source" loosely, a blog, a forum, a site, etc.) I will often explore the surrounding "area", subscribe to related blogs or sources recommended by that source. In a second phase I will sort through which of these are worth following and which I should drop to reduce overload * "scorched earth": when I feel like I've learned enough about a topic, or that I'm truly overloaded, I will simply drop (almost) every subscription I have related to that topic, maybe keeping a major source to just monitor (skim titles and very occasionally read an item) * "independent confirmation": I do like to make sure I have a diversified set of sources of information, and see if there are any items (books, articles, movies) which come at me from more than one direction, especially if they are not "massively popular" items, e.g. I'd discard a recommendation to see Avatar, but I decided to dive into Jaynes when it was recommended on LW and my dad turned out to have liked it enough to have a hard copy of the PDF * "obsession": there typically is one thing I'm obsessed with (often the target of an expand and winnow operation); e.g. at various points in my life I've been obsessed with Agora Nomic, XML, Java VM implementation, Agile, personal development, Go, and currently whatever LW is about. An "obsessed" topic can be but isn't necessarily a professional interest, but it's what dominates my other curiosity and tends to color my other interests. For instance while obsessed with Go I pursued the topic both for its own sake and as a source of metaphors for understanding, say, project management or software dev
0h-H14y
Yeah, news is usually a time/attention sink; I go to my bookmarked blogs etc. whenever I feel like procrastinating. 15-20 minutes of looking at the main news sites/blogs should be enough to tell you what the biggest developments are, but really, I read them for entertainment value as much as for anything else. As a side note, Antiwar is a good site for world news.

"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/

Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."

5SoullessAutomaton14y
Interesting article, but the title is slightly misleading. What he seems to be complaining about are people who mistake picking up a superficial overview of a topic for actually learning the subject, but I rather doubt they'd learn any more in school than by themselves. Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students or digging through books alone in your free time.
4hugh14y
I partially agree with this. Somewhere along the way, I learned how to learn. I still haven't really learned how to finish. I think these two features would have been dramatically enhanced had I not gone to school. I think a potential problem with self-educated learners (I know two adults who were unschooled) is that they get much better at fulfilling their own needs and tend to suffer when it comes to long-term projects that have value for others. The unschooled adults I know are both brilliant and creative, and ascribe those traits to their unconventional upbringing. But both of them work as freelance handymen. They like helping others, and would help other people more if they did something else, but short-term projects are all they can manage. They are polymaths who read textbooks and research papers, and one has even developed a machine learning technique that I've urged him to publish. However, when they get bored, they stop. The chance that writing up his results and releasing them would further research is not enough to get him past that obstacle of boredom. I have long thought that school, as currently practiced, is an abomination. I have yet to come up with a solution that I'm convinced solves its fundamental problems. For a while, I thought that unschooling was the solution, but these two acquaintances changed my mind. What is your opinion on the right way to teach and learn?
0gwillen14y
As an interesting anecdote, I was schooled in a completely traditional fashion, and yet I never really learned to finish either. I did learn to learn, but I did it through a combination of schooling and self-teaching. But all the self-teaching was in addition to a completely standard course of American schooling, up through a Bachelor's degree in computer science.
0hugh14y
That's pretty much where I am; traditional school, up through college and grad school. I think my poor habits would have been intensified, however, if I had been unschooled.

It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.

6gwern14y
Well, there's still intermittent fasting. IF would get around that objection, and would also work well with the musings about variability and duration. (Our ancestors most certainly did have to survive frequent daily shortfalls. Feast or famine.)
1Eliezer Yudkowsky14y
Where do you get that I thought I was wrong about CR? I'd like to lose weight but I had been aware for a while that the state of evidence on caloric restriction doing the purported job of extending lifespan in mammals was bad.
2AdeleneDawner14y
...huh. The last thing I remember hearing from you about it was that it looked promising, but that the cognitive side effects made it impractical, so you'd settled on just taking the risk (which would, with that set of beliefs and values, be right in some ways, and wrong in others, and more right than wrong). But, for some reason the search bar doesn't turn up any relevant conversations for "calorie restriction Eliezer" or "caloric restriction Eliezer", so I couldn't actually check my memory. Sorry about that.
-3timtyler14y
That's a dopey article. My counsel is not to get your diet advice from there.
2AdeleneDawner14y
"Dopey"?
1wedrifid14y
Suggest a better one?
1timtyler14y
http://www.crsociety.org/ is the best web resource relating to dietary energy restriction that I am aware of.
7AdeleneDawner14y
I'm not seeing anything at all on that site regarding scientific evidence that CR works, except links to news articles (meh) and uncited assertions that studies have been done that came to that conclusion - the latter of which, in light of the issues raised in the article I linked to, I want to know more about before I try to decide whether they're useful or not. Overall, both the site and the wiki seem to be much more focused on how to do CR than on making any kind of case that CR is a good idea; I don't think we're asking the same question, if you consider that site to give good answers.
1timtyler14y
That site is the biggest and most comprehensive resource on the topic available on the internet, AFAIK. Looking at what you say you are looking for, I don't think we're asking the same question either. The diet is not "a good idea" - e.g. see: http://cr.timtyler.org/disadvantages/ Rather, it is a tool - and whether or not it is for you depends on what your aims in life are.
1AdeleneDawner14y
Sorry, I thought the meaning of "a good idea" would be clear in context. I meant "likely to increase a user's chance of having a longer lifespan than they would otherwise". If that's the best resource there is, taking CR at all seriously sounds like privileging the hypothesis to me.
0wedrifid14y
It may be wrong but I don't think the flaw is that of privileging the hypothesis. If CR actually does work in, say, rats then thinking it may work in humans is at least a worthwhile hypothesis. The essay you found suggests that the evidence for the hypothesis is looking kinda shaky.
2AdeleneDawner14y
Noteworthy: CR is not a particular interest of mine, and I haven't researched it. If there are good, solid studies of CR in rats, why doesn't that site seem to have, or link to, information about them? If that's the site for CR, and given that it has a publicly editable (yes, I checked) wiki, I'd expect that someone would have added that information, and it's not there: I searched for both "study" and "studies" in the wiki; nothing about rat studies - or any other animal studies, except a mention of monkey studies - showed up. A google site search does turn up this, though.
0timtyler14y
Don't bother with the site's wiki. They have a reference to a mouse study on the front page of the site: Weindruch R, et al. (1986). "The retardation of aging in mice by dietary restriction: longevity, cancer, immunity and lifetime energy intake." Journal of Nutrition, April, 116(4), pages 641-54. For the evidence from the rat studies, perhaps start with this review article: Overview of caloric restriction and ageing. http://www.crsociety.org/archive/read.php?2,172427,172427
0timtyler14y
I think most in the field agree on that. e.g.: ""I'm positive that caloric restriction will work in humans to extend median life span," Fontana says." * http://pubs.acs.org/cen/science/87/8731sci2.html A summary from the site wiki: "The evidence that bears on the question of the applicability of CR to humans then, is at present indirect. There is nonetheless a great deal of such indirect evidence, enough that we can say with an extremely high degree of confidence that CR will work in humans." * http://en.wiki.calorierestriction.org/index.php/Will_CR_Work_in_Humans%3F
0wedrifid14y
Off the top of your head do you know what CR has been shown to work on thus far?
0Douglas_Knight14y
One of TT's links says CR works in "mice, hamsters, dogs, fish, invertebrate animals, and yeast."
70[anonymous]14y

Pick some reasonable priors and use them to answer the following question.

On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?

ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.

4Sniffnoy14y
In the calls, does she specify when she is coming over? I.e. does she say she'll be coming over on Thursday, Friday, just sometime in the near future, or she leaves it for you to infer?
1[anonymous]14y
The information I gave is the information you have. Don't make me make the problem more complicated. ETA: Let me expand on this before people start getting on my case. Rationality is about coming to the best conclusion you can given the information you have. If the information available to you is limited, you just have to deal with it. Besides, sometimes, having less information makes the problem easier. Suppose I give you the following physics problem: I throw a ball from a height of 4 feet; its maximum height is 10 feet. How long does it take from the time I throw it for it to hit the ground? This problem is pretty easy. Now, suppose I also tell you that the ball is a sphere, and I tell you its mass and radius, and the viscosity of the air. This means that I'm expecting you to take air resistance into account, and suddenly the problem becomes a lot harder. If you really want a problem where you have all the information, here: Every time period, input A (of type Boolean) is revealed, and then input B (also of type Boolean) is revealed. There are no other inputs. In time period 0, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 1, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 2, input A is revealed to be FALSE. What is the probability that input B will be revealed to be TRUE?
7Douglas_Knight14y
Having less information makes easier the problem of satisfying the teacher. It does not make easier the problem of determining when the ball hits the ground. Incidentally, I got the impression somehow that there are venues where physics teachers scold students for using too much information. ETA (months later): I do think it's a good exercise, I just think this is not why.
0[anonymous]14y
Here, though, the problem actually is simpler the less information you have. As an extreme example, if you know nothing, the probability is always 1/2 (or whatever your prior is).
-1RobinZ14y
I can say immediately that it is less than 50% - to be more rigorous would take a minute. Edit: Wait - no, I can't. If the variables are related, then that conclusion would appear, but it's not necessary that they be.
3orthonormal14y
Let
* A_N = "Grandma calls on Thursday of week N",
* B_N = "Grandma comes on Friday of week N".
A toy version of my prior could be reasonably close to the following: P(A_N)=p, P(A_N,B_N)=pq, P(~A_N,B_N)=(1-p)r where
* the distribution of p is uniform on [0,1]
* the distribution of q is concentrated near 1 (distribution proportional to f(x)=x on [0,1], let's say)
* the distribution of r is concentrated near 0 (distribution proportional to f(x)=1-x on [0,1], let's say)
Thus, the joint probability distribution of (p,q,r) is given by 4q(1-r) once we normalize. Now, how does the evidence affect this? The likelihood ratio for (A_1,B_1,A_2,B_2) is proportional to (pq)^2, so after multiplying and renormalizing, we get a joint probability distribution of 24p^2q^3(1-r). Thus P(~A_3|A_1,B_1,A_2,B_2)=1/4 and P(~A_3,B_3|A_1,B_1,A_2,B_2)=1/12, so I wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly. Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.
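As a sanity check (my own sketch, not part of the thread), the toy update above can be reproduced by grid integration in Python; the posterior factorizes over p, q, and r, so each expectation can be computed separately:

```python
import numpy as np

# Toy model from the comment: priors f(p)=1, f(q)=2q, f(r)=2(1-r) on [0,1];
# the evidence (A1,B1,A2,B2) contributes a likelihood proportional to (p*q)^2.
xs = np.linspace(0.0, 1.0, 2001)[1:-1]  # interior grid points

post_p = xs**2             # f(p)=1 times likelihood p^2
post_q = 2.0 * xs**3       # f(q)=2q times likelihood q^2 (shown for completeness)
post_r = 2.0 * (1.0 - xs)  # r is untouched by the evidence

def expect(w):
    """Posterior expectation on a uniform grid (normalization cancels)."""
    return (xs * w).sum() / w.sum()

p_no_call = 1.0 - expect(post_p)     # P(~A3 | evidence), analytically 1/4
p_both = p_no_call * expect(post_r)  # P(~A3, B3 | evidence), analytically 1/12

print(p_no_call, p_both / p_no_call)  # approx. 0.25 and 1/3
```

The grid approximation lands within a fraction of a percent of the closed-form answers above.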
2orthonormal14y
I should have realized this sooner: P(B3|~A3) is just the updated value of r, which isn't affected at all by (A1,B1,A2,B2). So of course the answer according to this model should be 1/3, as it's the expected value of r in the prior distribution. Still, it was a good exercise to actually work out a Bayesian update on a continuous prior. I suggest everyone try it for themselves at least once!
3RobinZ14y
I fail to see how this question has a perceptibly rational answer - too much depends on the prior.
4[anonymous]14y
Presumably, once you've picked your priors, the rest follows. And presumably, once you've come up with an answer, you'll disclose your reasoning, and your chosen priors.
2ata14y
Does she come over unannounced on any days other than Friday?
0[anonymous]14y
I don't know.
1Richard_Kennaway14y
Using the information that she is my grandmother, I speculate on the reason why she did not call on Thursday. Perhaps it is because she does not intend to come on Friday: P(Friday) is lowered. Perhaps it is because she does intend to come but judges the regularity of the event to make calling in advance unnecessary unless she had decided not to come: P(Friday) is raised. Grandmothers tend to be old and consequently may be forgetful: perhaps she intends to come but has forgotten to call: P(Friday) is raised. Grandmothers tend to be old, and consequently may be frail: perhaps she has been taken unwell; perhaps she is even now lying on the floor of her home, having taken a fall, and no-one is there to help: P(Friday) is lowered, and perhaps I should phone her. My answer to the problem is therefore: I phone her to see how she is and ask if she is coming tomorrow. I know -- this is not an answer within the terms of the question. However, it is my answer. The more abstract version you later posted is a different problem. We have two observations of A and B occurring together, and that is all. Unlike the case of Grandma's visits, we have no information about any causal connection between A and B. (The sequence of revealing A before B does not affect anything.) What is then the best estimate of P(B|~A)? We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent. Therefore A can be ignored and the Laplace rule of succession applied to the two observations of B, giving 3/4. ETA: I originally had a far more verbose analysis of the second problem based on modelling it as an urn problem, which I then deleted. But the urn problem may be useful for the intuition anyway. You have an urn full of balls, each of which is either rough or smooth (A or ~A), and either black or white (B or ~B). You pick two balls which turn out to be both rough and black. You pick a third and feel that it is sm
3wnoise14y
Directly using the Laplace rule of succession on the sample space A ⊗ B gives weights proportional to:
* (A, B): 3
* (A, ~B): 1
* (~A, B): 1
* (~A, ~B): 1
Conditioning on ~A, P(B|~A) = 1/2. Assuming independence does make a significant difference on this little data.
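The same arithmetic, written out as a short sketch (mine, not from the thread): add-one smoothing over the four joint outcomes, then conditioning on ~A.

```python
from fractions import Fraction

# Observed data: (A, B) occurred twice; the other three joint outcomes never.
counts = {("A", "B"): 2, ("A", "~B"): 0, ("~A", "B"): 0, ("~A", "~B"): 0}
weights = {cell: n + 1 for cell, n in counts.items()}  # add-one smoothing: 3, 1, 1, 1

# Condition on ~A: renormalize over the two cells compatible with ~A.
p_b_given_not_a = Fraction(weights[("~A", "B")],
                           weights[("~A", "B")] + weights[("~A", "~B")])
print(p_b_given_not_a)  # 1/2
```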
3orthonormal14y
On the contrary, on two points. First, "A and B are independent" is not a reasonable prior, because it assigns probability 0 to them being dependent in some way— or, to put it another way, if that were your prior and you observed 100 cases and A and B agreed each time (sometimes true, sometimes false), you'd still assume they were independent. What you should have said, I think, is that a reasonable prior would have "A and B independent" as one of the most probable options for their relation, as it is one of the simplest. But it should also give some substantial weight to simple dependencies like "A and B identical" and "A and B opposite". Second, the sense in which we have no prior information about relations between A and B is not a sense that justifies ignoring A. We had no prior information before we observed them agreeing twice, which raises the probability of "A and B identical" while somewhat lowering that of "A and B independent".
0[anonymous]14y
It's true that the prior should not be "A and B are independent". But shouldn't symmetries of how they may be dependent give essentially the same result as assuming independence? Similarly to how any symmetric prior for how a coin is biased gives the same prediction for the probability of heads: 1/2. I don't think independence is a good way to analyze things when the probabilities are near zero or one. Independence is just P[A] P[B] = P[AB]. If P[A] or P[B] is near zero or one, this is automatically "nearly true". Put another way, two observations of (A, B) give essentially no information about dependence by themselves. Dependence is encoded in the ratios between the four possibilities.
-2Richard_Kennaway14y
This raises a question of the meaningfuless of second-order Bayesian reasoning. Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value? A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C. On the second point, seeing A and B together twice, or twenty times, tells me nothing about their independence. Almost everyone has two eyes and two legs, and therefore almost everyone has both two eyes and two legs, but it does not follow from those observations alone that possession of two eyes either is, or is not, independent of having two legs. For example, it is well-known (in some possible world) that the rare grey-green greasy Limpopo bore worm invariably attacks either the eyes, or the legs, but never both in the same patient, and thus observing someone walking on healthy legs conveys a tiny positive amount of probability that they have no eyes; while (in another possible world) the venom of the giant rattlesnake of Sumatra rapidly causes both the eyes and the legs of anyone it bites to fall off, with the opposite effect on the relationship between the two misfortunes. I can predict that someone has both two eyes and two legs from the fact that they are a human being. The extra information about their legs that I gain from examining their eyes could go either way. But that is just an intuitive ramble. What is needed here is a calculation, akin to the Laplace rule of succession, for observations in a 2x2 contingency table. Starting from an ignorance prior that the probabilities of A&B, A&~B, B&~A, and ~A&~B are each 1/4, and observing a, b, c, and d examples of each, what is the appropriate posterior? Then fill in the values 2, 0, 0, and 0. ETA: On reading the comments, I realise that the above is almost all wrong.
7jimrandomh14y
In order to have a probability distribution rather than just a probability, you need to ask a question that isn't boolean, ie one with more than two possible answers. If you ask "Will this coin come up heads on the next flip?", you get a probability, because there are only two possible answers. If you ask "How many times will this coin come up heads out of the next hundred flips?", then you get back a probability for each number from 0 to 100 - that is, a probability distribution. And if you ask "what kind of coin do I have in my pocket?", then you get a function that takes any possible description (from "copper" to "slightly worn 1980 American quarter") and returns a probability of matching that description.
4orthonormal14y
Depends on how you're doing this; if you have a continuous prior for the probability of C, with an expected value of 0.469, then no— and future evidence will continue to modify your probability distribution. If your prior for the probability of C consists of a delta mass at 0.469, then yes, your model perhaps should be criticized, as one might criticize Rosenkrantz for continuing to assume his coin is fair after 30 consecutive heads. A Bayesian reasoner actually would have a hierarchy of uncertainty about every aspect of ver model, but the simplicity weighting would give them all low probabilities unless they started correctly predicting some strong pattern. Independence has a specific meaning in probability theory, and it's a very delicate state of affairs. Many statisticians (and others) get themselves in trouble by assuming independence (because it's easier to calculate) for variables that are actually correlated. And depending on your reference class (things with human DNA? animals? macroscopic objects?), having 2 eyes is extremely well correlated with having 2 legs.
4FAWS14y
Even without any math, it already tells you that they are not mutually exclusive. See wnoise's reply to the grandparent post for the Laplace-rule equivalent.
3[anonymous]14y
I really like your urn formulation.
1Peter_de_Blanc14y
OK, I'll use the same model I use for text. The zeroth-order model is maxentropy, and the kth-order model is a k-gram model with a pseudocount of 2 (the alphabet size) allocated to the (k-1)th-order model. In this case, since there's never before been a Thursday in which she did not call, we default to the 1st-order model, which says the probability is 3/4 that she will come on Friday.
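The fallback to the 1st-order model is just Laplace's rule of succession over the two previous Fridays; a quick sketch (my own, under that reading of the model):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# Grandma came over on both of the two observed Fridays; the 2nd-order model
# has never seen a call-free Thursday, so we fall back to Fridays alone.
print(rule_of_succession(2, 2))  # 3/4
```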
3[anonymous]14y
I beg your pardon?
0Douglas_Knight13y
Is this a standard model? Does it have a name? a reference? I see that the level 1 model is Laplace's rule of succession. Is there some clean statement about the level k model? Is this a bayesian update? You seem to be treating the string as being labeled by alternating Thursdays and Fridays, which have letters drawn from different alphabets. The model easily extends to this, but it was probably worth saying, particularly since the two alphabets happen to have the same size. I find it odd that almost everyone treated weeks as discrete events. In this problem, days seem like the much more natural unit to me. ata probably agrees with me, but he didn't reach a conclusion. With weeks, we have very few observations, so a lot depends on our model, like whether we use alphabets of size 2 for Thursday and Friday (Peter), or whether we use alphabets of size 4 for the whole week (wnoise). I'm going to allow calls and visits on each day and use an alphabet of size 4 for each day. I think it would be better to use a Peter-ish system of separating morning visits from evening calls, but with data indexed by days, we have a lot of data, so I don't think this matters so much. I'll run my weeks Sun-Sat. Weeks 1 and 2 are complete and week 3 is partial. Treating days as independent and having 4 outcomes: ([no]visit)x([no]call). I interpret the unspecified days as having no call and no visit. Using Laplace's rule of succession, we have 4/23 chance of visit, which sounds pretty reasonable to me. But if we use Peter's hierarchical model, I think our chance of a visit is 4/23*4/17*4/14*4/11*4/8*4/5 = 1/500. That is, since we've never seen a visit after a no-call/no-visit day, the only way to get a visit is from level 1 of the model, so we multiply the chance of falling through from level 2 to level 1, from level 3 to 2, etc. The chance of falling through from level n+1 to level n is 4/(4+c), where c is the number of times we've seen an n+1-gram that continues the last n days. So for n
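The 4/23 figure can be checked in a few lines (my own sketch, assuming the day counts described above: 19 observed days, two Thursday calls, two Friday visits):

```python
from fractions import Fraction

# Each day gets one of 4 outcomes: (visit or no-visit) x (call or no-call).
counts = {("visit", "call"): 0,
          ("visit", "no-call"): 2,    # the two Friday visits (the calls were on Thursdays)
          ("no-visit", "call"): 2,    # the two Thursday calls
          ("no-visit", "no-call"): 15}
n_days = sum(counts.values())  # 19

# Laplace smoothing with alphabet size 4, then marginalize over call/no-call.
p_visit = Fraction(sum(n + 1 for (v, c), n in counts.items() if v == "visit"),
                   n_days + 4)
print(p_visit)  # 4/23
```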
60[anonymous]14y

Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."

I said: "I do!"

He paused a moment and then said: "Hmm. Yeah, so do I."

I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.

New on arXiv:

David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?

In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory

... (read more)
2xamdam14y
In a completely perverse coincidence, Benford's law, attributed to an apparently unrelated Frank Benford, was apparently first discovered by an equally unrelated Simon Newcomb: http://en.wikipedia.org/wiki/Benford%27s_law
0SilasBarta14y
Okay, now that I've read section 2 of the paper (where it gives the two decompositions), it doesn't seem so insightful. Here's my summary of the Wolpert/Benford argument: "There are two Bayes nets to represent the problem: Fearful, where your decision y causally influences Omega's decision g, and Realist, where Omega's decision causally influences yours. "Fearful: P(y,g) = P(g|y) * P(y), you set P(y). Bayes net: Y -> G. One-boxing is preferable. "Realist: P(y,g) = P(y|g) * P(g), you set P(y|g). Bayes net: G -> Y. Two-boxing is preferable." My response: these choices neglect the option presented by AnnaSalamon and Eliezer_Yudkowsky previously: that Omega's act and your act are causally influenced by a common timeless node, which is a more faithful representation of the problem statement.
0SilasBarta14y
Self-serving FYI: In this comment I summarized Eliezer_Yudkowsky's list of the ways that Newcomb's problem, as stated, constrains a Bayes net. For the non-link-clickers:
* Must have nodes corresponding to logical uncertainty (Self-explanatory)
* Omega's decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)
* Omega's act lies in the past. (ETA: Since nothing is simultaneous with Omega's act, knowledge of Omega's act screens off the influence of everything before it; on the Bayes net, Omega's act blocks all paths from the past to future events; only paths originating from future or timeless events can bypass it.)
* Omega's act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)
* We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seems to be saying the same thing: arrow from computation directly to logical output.)
* Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)

Warning: Your reality is out of date

tl;dr:

There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (outside temperature, time of day).

In between these two intuitive categories, however, a third class of facts could be defined: facts that do change measurably, or even drastically, over human lifespans, but still so slowly that people, after first learning about them, have a tendency to dump them into the "no-change" category unless they're actively paying attention to the f... (read more)

0RobinZ14y
I notice the figure for cell phone connectivity is three years old. :P

Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.

2Kevin14y
I think I have a good one for people in the USA. This is a job that allows you to work from home on your computer rating the quality of search engine results. It pays $15/hour and because their productivity metrics aren't perfect, you can work for 30 seconds and then take two minutes off with about as much variance as you want. Instead of taking time off directly to do different work, you could also slow yourself down by continuously watching TV or downloaded videos.

They are also hiring for some workers in similar areas that are capable of doing somewhat more complicated tasks, presumably for higher salaries. Some sound interesting. http://www.lionbridge.com/lionbridge/en-us/company/work-with-us/careers.htm

Yes, out of all "work from home" internet jobs, this is the only one that is not a scam. Lionbridge is a real company and their shares recently continued to increase after a strong earnings report. http://online.wsj.com/article/BT-CO-20100210-716444.html?mod=rss_Hot_Stocks

First, you send them your resume, and they basically approve every US high school graduate that can create a resume for the next step. Then you have to take a test in doing the job. They provide plenty of training material and the job isn't all that hard, a few hours of rapid skimming is probably enough to pass the test for most people. Almost 100% of people would be able to pass the test after 10 hours of studying.
1nazgulnarsil14y
throwing/giving away stuff you don't use. reading instead of watching tv or browsing website for the umpteenth time. eating more fruit and less processed sugar. exercising 10-15 minutes a day. writing down your ideas. intro to econ of some sort. spending 30 minutes a day on a long term project. meditation.

Should we have a sidebar section "Friends of LessWrong" to link to sites with some overlap in goals/audience?

I would include TakeOnIt in such a list. Any other examples?

[-][anonymous]14y60

When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!

I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.

5MrHen14y
I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works. EDIT: Apparently the book is on Google.
2[anonymous]14y
Today there's How Stuff Works.
1Nick_Tarleton14y
I also loved that book. It probably helped teach me reductionism, but it's hard to tell given my generally terrible memory for my childhood. (FWIW, my best guess for my biggest reductionist influence would be learning assembly language and other low-level CS details.)
1Jack14y
I think we had this in the house, but I don't remember it very well, except some of the part about pulleys and levers. This book would be a nice starting point for that rebuilding civilization manual idea from a while back.
1Morendil14y
My favorite Macaulay is "Motel of the Mysteries". I read it as a kid and it definitely had an influence. ;)
0Nisan14y
I have fond childhood memories of many hours tracing the circuit diagram of the adding circuit : ) God, I was so nerdy. I wanted to know how a computer worked and that book helped me avoid a mysterious answer to a mysterious question. Learning, in detail, how a specific logic circuit works really drove home how much I had yet to learn about the rest of the workings of a computer.
0h-H14y
I was going to get that for my younger brother when I next see him :)

I have two basic questions that I am confused about. This is probably a good place to ask them.

  1. What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

  2. Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probabili

... (read more)
9MrHen14y
This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful. The consensus of the comments was that the correct answer is .5. Also of note is Bead Jar Guesses and its sequel.
7JGWeissman14y
If you truly have no clue, .5 yes and .5 no. Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
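This reasoning can be made concrete with a toy calculation. The prior over team counts below is pure invention, and it assumes exactly one team wins and each team is equally likely to win:

```python
# P(Strigli wins) = sum over n of P(n teams) * 1/n, under the
# (assumed) symmetry that each of n teams is equally likely to win.
p_num_teams = {2: 0.5, 3: 0.25, 4: 0.15, 8: 0.1}  # hypothetical distribution

p_yes = sum(p_n / n for n, p_n in p_num_teams.items())
print(p_yes)
```

With these invented numbers the answer comes out well below 0.5, illustrating how the mere possibility of more than two teams shifts the probability towards "No".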
9Alicorn14y
But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing - it's the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli's left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.
2orthonormal14y
All of which means that you shouldn't be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it's relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).
2Alicorn14y
Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
4orthonormal14y
Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign Yes-No question. Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc.) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.
9Alicorn14y
It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no". Basically, your prior should be that everything is almost certainly false!
3cousin_it14y
The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.
0thomblake14y
A random alien?
0bogdanb14y
No, just a random alien that (1) I encountered and (2) asked me a question. The two conditions above restrict enormously the general class of “possible” random aliens. Every condition that restricts possibilities brings information, though I can't see a way of properly encoding this information as a prior about the answer to said question. [ETA:] Note that I don't necessarily accept cousin_it's assertion, I just state my interpretation of it.
0orthonormal14y
Well, let's say I ask you whether all "fnynznaqre"s are "nzcuvovna"s. Prior to using rot13 on this question (and hypothesizing that we hadn't had this particular conversation beforehand), would your prior really be as low as your previous comment implies? (Of course, it should probably still be under 50% for the reference class we're discussing, but not nearly that far under.)
1Alicorn14y
Given that you chose this question to ask, and that I know you are a human, then screening off this conversation I find myself hovering at around 25% that all "fnynznaqre"s are "nzcuvovna"s. We're talking about aliens. Come on, now that it's occurred to you, wouldn't you ask an E.T. if it thinks the Red Sox have a shot at the spelling bee?
0orthonormal14y
Yes, but I might as easily choose a question whose answer was "Yes" if I thought that a trick question might be too predictable of a strategy. 1/4 seems reasonable to me, given human psychology. If you expand the reference class to all alien species, though, I can't see why the likelihood of "Yes" should go down— that would generally require more information, not less, about what sort of questions the other is liable to ask.
2Alicorn14y
Okay, if you have some reason to believe that the question was chosen to have a specific answer, instead of being chosen directly from questionspace, then you can revise up. I didn't see a reason to think this was going on when the aliens were asking the question, though.
0orthonormal14y
Hmm. As you point out, questionspace is biased towards "No" when represented in human formalisms (if weighting by length, it's biased by nearly the length of the "not" symbol), and it would seem weird if it weren't so in an alien representation. Perhaps that's a reason to revise down and not up when taking information off the table. But it doesn't seem like it should be more than (say) a decibel's worth of evidence for "No". ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!
2Alicorn14y
Isn't it awesome when that happens? :D
-1vinayak14y
I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events:

1. Strigli wins the game.
2. It rains immediately after the match is over.
3. Strigli wins the game AND it rains immediately after the match is over.

I am not sure if your scheme ensures that this does not happen.

Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don't understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?

One way I can imagine is to assign all of them a probability inversely proportional to their Kolmogorov complexities. The good thing about Kolmogorov complexity is that it satisfies the axioms of probability. But I have only seen it defined for strings and such. I don't know how to define Kolmogorov complexity of complicated things like hypotheses. Also, even if there is a way to define it, I can't completely convince myself that it gives a correct prior probability.
1bogdanb14y
I just wanted to note that it is actually possible to do that, provided that the questions are asked in order (not simultaneously). That is, I might logically think that the answer to (1) and (2) is true with 50% probability after I'm asked each question. Then, when I'm asked (3), I might logically deduce that (3) is true with 50% probability; however, this only means that after I'm asked (3), the very fact that I was asked (3) caused me to raise my confidence that (1) and (2) are true. It's a fine point that seems easy to miss.

On a somewhat related point, I've looked at the entire discussion and it seems to me the original question is ill-posed, in the sense that the question, with high probability, doesn't mean what the asker thinks it means. Take:

"For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli."

The question is intended to prevent you from having any prior information about its subject. However, what it means is just that before you are asked the question, you don't have any information about it. (And I'm not even very sure about that.) But once you are asked the question, you received a huge amount of information: the very fact that you received that question is extremely improbable (in the class of "what could have happened instead"). Also note that it is vanishingly more improbable than, say, being asked by somebody on the street if you think his son will get an A today.

"Something extremely improbable happens" means "you just received information"; the more improbable it was, the more information you received (though I think there are some logs in that relationship). So, the fact that you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli brings a lot of information: space travel is possible within one's lifetime, aliens exist, aliens have that travel technology,
0orthonormal14y
Definitely agree on the first point (although, to be careful, the probabilities I assign to the three events could be epsilons apart if I were convinced of a bidirectional implication between 1 and 2). On the second part: Yep, you need to start with some prior probabilities, and if you don't have any already, the ignorance prior of 2^{-n} for each hypothesis that can be written (in some fixed binary language) as a program of length n is the way to go. (This is basically what you described, and carrying forward from that point is called Solomonoff induction.) In practice, it's not possible to estimate hypothesis complexity with much precision, but it doesn't take all that much precision to judge in cases like Thor vs. Maxwell's Equations; and anyway, as long as your priors aren't too ridiculously off, actually updating on evidence will correct them soon enough for most practical purposes. ETA: Good to keep in mind: When (Not) To Use Probabilities
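The 2^{-n} ignorance prior mentioned above can be sketched in a few lines. The description lengths below are invented placeholders, not real Kolmogorov complexities (which are uncomputable in general); the point is only how the weighting and renormalization work:

```python
# Toy complexity prior: weight each hypothesis by 2^(-description
# length in bits), then renormalize over the tiny hypothesis space
# we bothered to write down.
description_length_bits = {
    "maxwells_equations": 20,  # a few compact equations (made-up length)
    "thor": 55,                # must encode a whole agent (made-up length)
}
raw = {h: 2.0 ** -n for h, n in description_length_bits.items()}
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}
```

Even with such crude length estimates, the shorter hypothesis dominates overwhelmingly, which is why rough judgments like Thor vs. Maxwell's Equations don't require much precision.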
0JGWeissman14y
But it is true that you are not on a red couch. Negation is a one-to-one map between true and false propositions.
3Alicorn14y
Since you can understand the alien's question except for the nouns, presumably you'd be able to tell if there was a "not" in there?
3JGWeissman14y
Yes, you have made a convincing argument, I think, that given that a proposition does not involve negation, as in the alien's question, that it is more likely to be false than true. (At least, if you have a prior for being presented with questions that penalize complexity. The sizes of the spaces of true and false propositions, however, are the same countable infinity.) (Sometimes I see claims in isolation, and so miss that a slightly modified claim is more correct and still supports the same larger claim.) ETA: We should also note the absence of any disjunctions. It is also true that Alicorn is sitting on a blue couch or a red couch. (Well, maybe not, some time has passed since she reported sitting on a blue couch. But that's not the point.) This effect may be screened off if, for example, you have a prior that the aliens first choose whether the answer should be yes or no, and then choose a question to match the answer.
1gwern14y
That the aliens chose to translate their word as the English 'game' says, I think, a lot.
4Alicorn14y
"Game" is one of the most notorious words in the language for the virtual impossibility of providing a unified definition absent counterexamples.
1Richard_Kennaway14y
"A game is a voluntary attempt to overcome unnecessary obstacles."
3JohannesDahlstrom14y
This is, perhaps, a necessary condition but not a sufficient one. It is true of almost all hobbies, but I wouldn't classify hobbies such as computer programming or learning to play the piano as games.
2Richard_Kennaway14y
I wouldn't class most hobbies as attempts to overcome unnecessary obstacles either -- certainly not playing a musical instrument, where the difficulties are all necessary ones. I might count bird-watching, of the sort where the twitcher's goal is to get as many "ticks" (sightings of different species) as possible, as falling within the definition, but for that very reason I'd regard it as being a game. One could argue that compulsory games at school are a counterexample to the "voluntary" part. On the other hand, Láadan has a word "rashida": "a non-game, a cruel "playing" that is a game only for the dominant "player" with the power to force others to participate [ra=non- + shida=game]". In the light of that concept, perhaps these are not really games for the children forced to participate. But whatever nits one can pick in Bernard Suits' definition, I still think it makes a pretty good counter to Wittgenstein's claims about the concept.
0JohannesDahlstrom14y
Oh, right. Reading "unnecessary" as "artificial", the definition is indeed as good as they come. My first interpretation was somewhat different and, in retrospect, not very coherent.
-1gwern14y
A family resemblance is still a resemblance.
0radical_negative_one14y
Could you include a source for this quote, please?
-3gwern14y
Googling it would've told you that it's from Wittgenstein's Philosophical Investigations.
1JGWeissman14y
Simply Googling it would not have signaled any disappointment radical_negative_one may have had that you did not include a citation (preferably with a relevant link) as is normal when making a quote like that.
-2gwern14y
/me bats the social signal into JGWeissman's court Omitting the citation, which wasn't really needed, sends the message that I don't wish to stand on Wittgenstein's authority but think the sentiment stands on its own.
2wedrifid14y
Then use your own words. Wittgenstein's are barely readable.
0[anonymous]14y
My words are barely readable? Did you mean Wittgenstein's words?
0[anonymous]14y
Pardon me I meant Wittgenstein.
1RobinZ14y
If it doesn't stand on its own, you shouldn't quote it at all - the purpose of the citation is to allow interested parties to investigate the original source, not to help you convince.
1JGWeissman14y
Voted up, but I would say the purpose is to do both, to help convince and to help further investigation, and more, such as to give credit to the source. Citations benefit the reader, the quoter, and the source. I definitely agree that willingness to forgo your own benefit as the quoter does not justify ignoring the benefits to the others involved.
0RobinZ14y
You're right, of course.
-5gwern14y
1SoullessAutomaton14y
Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.
6Alicorn14y
I was conditioning on the probability that the question is in fact meaningful to the aliens (more like "Will the Red Sox win the spelling bee?" than like "Does the present king of France's beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?"). If you assume they're just stringing words together, then there's not obviously a proposition you can even assign probability to.
2SoullessAutomaton14y
Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions. More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.
1orthonormal14y
For #2, I don't see how you could ever be completely sure the other was rationalist or Bayesian, short of getting their source code; they could always have one irrational belief hiding somewhere far from all the questions you can think up. In practice, though, I think I could easily decide within 10 questions whether a given (honest) answerer is in the "aspiring rationalist" cluster and/or the "Bayesian" cluster, and get the vast majority of cases right. People cluster themselves pretty well on many questions.
1Jack14y
For #2, can I just have an extended preface that describes a population, an infection rate for some disease, and a test with false positive and false negative rates, and see if the person gives me the right answer?
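The kind of screening question described here reduces to one application of Bayes' theorem. All the rates below are invented for illustration:

```python
# Classic base-rate test: P(infected | positive test).
base_rate = 0.01             # fraction of the population infected (assumed)
false_negative_rate = 0.05   # P(negative test | infected) (assumed)
false_positive_rate = 0.10   # P(positive test | healthy) (assumed)

p_pos_given_sick = 1 - false_negative_rate
p_pos = (p_pos_given_sick * base_rate
         + false_positive_rate * (1 - base_rate))

# Bayes' theorem
p_sick_given_pos = p_pos_given_sick * base_rate / p_pos
print(p_sick_given_pos)
```

With these numbers the posterior comes out under 9%, despite the test's apparent accuracy, which is exactly the intuition-check such a question is after.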
0[anonymous]14y
For number 1 you should weight "no" more highly. For the answer to be "yes" Strigli must be a team, a Doldun team, and it must win. Sure, maybe all teams win, but it is possible that all teams could lose, they could tie, or the game might be cancelled, so a "no" is significantly more likely to be right. 50% seems wrong to me.
0Kaj_Sotala14y
1: If you have no information to support either alternative more than the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, as you have exactly two options. Things get much trickier once it's not obvious what things should be classified as the alternatives that should be considered equally plausible. Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, "I'm sorry, I'm not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?" 2: This depends a lot how we define a rationalist and a Bayesian. A question like "is the Bible literally true" could reveal a lot of irrational people, but I'm not certain of the amount of questions that'd need to be asked before we could know for sure that they were irrational. (Well, since 1 and 0 aren't probabilities, the strict answer to this question is "it can't be done", but I'm assuming you mean "before we know with such a certainty that in practice we can say it's for sure".)
2vinayak14y
Yes, I should be more specific about 2. So let's say the following are the first three questions you ask and their answers:

Q1. Do you think A is true? A. Yes.
Q2. Do you think A=>B is true? A. Yes.
Q3. Do you think B is true? A. No.

At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question?

Q4. Do you believe in Modus Ponens? or in other words,
Q4. Do you think that if A and A=>B are both true then B should also be true?

If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.

Q5. Do you think that if you believe in Modus Ponens and if you also think that A and A=>B are true, then you should also believe that B is true as well?

And I can go on and on... So the point is, if you think asking all these questions is necessary to decide whether the person is rational or not, then in effect any given person can have any arbitrary set of beliefs and he can still claim to be rational by adding a few extra beliefs to his belief system that say the n^th level of "Modus Ponens is wrong" for some suitably chosen n.
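The inconsistency in the Q1-Q3 answers can at least be checked mechanically (whether such a check settles anything is, of course, exactly what the Modus Ponens regress is about). A minimal sketch:

```python
from itertools import product

# Is there any truth assignment to A and B under which a set of
# answers all hold at once?
def satisfiable(check):
    return any(check(a, b) for a, b in product([True, False], repeat=2))

# "A is true", "A=>B is true", "B is false": no assignment works.
inconsistent_answers = lambda a, b: a and ((not a) or b) and (not b)

# "A is true", "A=>B is true", "B is true": satisfied by A=B=True.
consistent_answers = lambda a, b: a and ((not a) or b) and b

print(satisfiable(inconsistent_answers))  # False
print(satisfiable(consistent_answers))    # True
```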
1prase14y
I think that belief in modus ponens is a part of the definition of "rational", at least practically. So Q1 is enough. However, there are not many tortoises among the general public, so this type of question probably isn't much help.

LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm

6Kevin14y
Apparently this is shoddy journalism. http://news.ycombinator.com/item?id=1180487
0Jack14y
So do we count this as additional evidence that some anthropic selection is in effect even though it is causally connected to the earlier breakdown?
2Richard_Kennaway14y
I like this quote from the director: "With a machine like the LHC, you only build one and you only build it once."

I've just finished reading Predictably Irrational by Dan Ariely.

I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).

It's a bit light compared to going straight to the studies, but it's also a quick read.

Good to give as gift to friends.

2Hook14y
I'm waiting for the revised edition to come out in May.
6Hook14y
Looking at that amazon link, has anyone considered automatically inserting a SIAI affiliate tag into Amazon links? It appeared to work quite well for StackOverflow.
0MichaelGR14y
Is there a description of the changes somewhere?
0Hook14y
I didn't see any, but it is close to 100 pages longer.
0MichaelGR14y
Original hardcover was 244 pages long, so 100 pages is a significant addition. Probably worth waiting for.

Game theorists discuss one-shot Prisoner's dilemma, why people who don't know Game Theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.

1RobinZ14y
Interesting. Has this experiment actually been run, and does it change the percentages in the responses relative to the textbook version?
0Vladimir_Nesov14y
That would be scientific approach to Dark Arts.
0RobinZ14y
The linked post seemed to run far ahead of the presented evidence - and this is a kind of situation in which the scientific method is known to be quite powerful.
0Vladimir_Nesov14y
Sure. Dark arts don't stain the power of scientific approach, though probably defy the purpose.

Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.

2FAWS14y
The link named "top" in the top bar, below the banner? Starting with the 10 all time highest ranked articles and continuing with the 10 next highest when you click "next", and so on? Or do I misunderstand you and you mean something else?
1Kevin14y
Thanks, I was missing the drop down button on that page.

While I'm not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this one. Enjoy :)

"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf

3wnoise14y
I generally prefer that links to papers on the arXiv go to the abstract, like so: http://arxiv.org/abs/1001.4218 This lets us read the abstract, easily get to other versions of the same paper (including the latest, if some time goes by between your posting and my reading), and get to other works by the same author. EDIT: overall, reasonable points, but some things are "pinging" my crank-detectors. I suppose I'll have to track down reference 10 and the 4/3 claim for electromagnetic mass.
3Mitchell_Porter14y
I disagree. I think it's a paper which looks backwards in an unconstructive way. The author is hoping for conceptual breakthroughs as good as relativity and quantum theory, but which don't require engagement with the technical complexities of string theory or the Standard Model. Those two constructions respectively define the true theoretical and empirical frontier, but instead the author wants to ignore all that, linger at about a 1930s conceptual level, and look for another way. ETA: As an example of not understanding contemporary developments, see his final section, where he says I don't know what significance this question has for the author, but so far as I know, the hydrogen atom has no dipole moment in its ground state because the wavefunction is spherically symmetric. This will still be true in string theory. The hydrogen atom exists on a scale where the strings can be approximated by point particles. I suspect the author is thinking that because strings are extended objects they have dipole moments; but it's not of a magnitude to be relevant at the atomic scale.
3wnoise14y
Of course he looks backwards. You can't analyze why any discovery didn't happen sooner, even though all the pieces were there, unless you look backwards. I thought the case study of SR was quite illuminating, though it goes directly counter to his attack on string theory. After getting the Lorentz transform, it took a surprisingly long time for anyone to treat the transformed quantities as equivalent -- that is, to take the math seriously. And for string theory, he says they take the math too seriously. Of course, the Lorentz transform was more clearly grounded in observed physical phenomena. I completely agree he doesn't understand contemporary developments, and that was some of what I referred to as "pinging my crank-detectors", along with the loose analogy between 4-d bending in "world tubes" and that in 3-d rods. I don't necessarily see that as a huge problem if he's not pretending to be able to offer us the next big revolution on a silver platter.
2Cyan14y
Wikipedia points to the original text of a 1905 article by Poincaré. How's your French?
2wnoise14y
Thanks. It's decent, actually, but there's still some barrier. Increasing that barrier is changes to physics notation since then (no vectors!). Fortunately my university library appears to have a copy of an older edition of Rohrlich's Classical Charged Particles, which may help piece things together.
2Cyan14y
Petkov wrote: It's worth noting that Feynman's statements are actually correct. According to Wikipedia, the problem is solved by postulating a non-electromagnetic attractive force holding the charged particle together, which subtracts 1/3 of the 4/3 factor, leaving unity. Petkov doesn't explicitly say that Feynman is wrong, but his phrasing might leave that impression.
2arundelo14y
Neat find! I haven't read all of it yet, but I found this striking: This reminds me of Mach's Principle: Anti-Epiphenomenal Physics:

I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.

I don't know whether I'm the only person who has this problem, but I think it's worth checking.

"Anti-logical rudeness" strikes me as a good bit better.

2RobinZ14y
It's not anti-logical, it's rude logic. The point of Suber's paper is that at no point does the logically rude debater reason incorrectly from their premises, and yet we consider what they have done to be a violation of a code of etiquette.
2NancyLebovitz14y
When I was considering a better name for the problem, I couldn't find a word for the process of seeking truth, which is what's actually being derailed by logical rudeness. Unless I've missed something, the problem with logical rudeness isn't that there's no logical flaw in it. The fact that I've got 4 karma points suggests (but doesn't prove) that I'm not the only person who has a problem with the term "logical rudeness". I should have been clearer that "anti-logical rudeness" was just an attempt at an improvement, rather than a strong proposal for that particular change.
0RobinZ14y
I think you're complaining about the problem of people not updating on their evidence by using anti-epistemological techniques such as logical rudeness. I still don't see the need for changing the name, but I'll defer to the opinion of the crowd if need be.
0h-H14y
seconded, it's too benign for what it actually intends to convey.

Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.

Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
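To make the conversion concrete, here's a quick Python sketch (my own numbers, not from the blog post): dividing molar entropy by R·ln 2 turns J/(K·mol) into bits per molecule, since R = N_A·k_B and one bit is ln 2 nats.

```python
import math

R = 8.314462618  # gas constant, J/(K*mol) = N_A * k_B

def molar_entropy_to_bits(s_joule_per_kelvin_mol):
    """Convert molar entropy (J/(K*mol)) to bits of information per molecule.

    Dividing by R gives nats per molecule; dividing by ln(2) converts
    nats to bits.
    """
    return s_joule_per_kelvin_mol / (R * math.log(2))

# Standard molar entropy of liquid water is about 69.95 J/(K*mol):
print(round(molar_entropy_to_bits(69.95), 1))  # about 12.1 bits per molecule
```

So a single water molecule at standard conditions carries on the order of a dozen bits of missing information, which is a much more vivid statement than the raw J/(K·mol) figure.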

I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.

I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
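Not to pre-empt the Frequentist treatment, but for reference, here is a rough Python sketch (with made-up parameters, not Sivia's numbers) of what the near-mechanical Bayesian handling looks like: a flat prior over the lighthouse's shore position α, with the offshore distance β assumed known for simplicity, and a Cauchy likelihood for the observed flash positions.

```python
import math
import random

random.seed(0)
alpha_true, beta_true = 1.0, 2.0  # hypothetical position along shore, distance offshore
# Each flash, emitted at a uniformly random azimuth, hits the shore at a
# Cauchy-distributed point:
data = [alpha_true + beta_true * math.tan(math.pi * (random.random() - 0.5))
        for _ in range(200)]

# Flat prior over a grid of candidate positions; beta assumed known here.
alphas = [a / 100 for a in range(-1000, 1001)]

def log_posterior(a):
    # Up to a constant: sum of Cauchy log-likelihoods plus a flat log-prior.
    return sum(math.log(beta_true / (beta_true**2 + (x - a)**2)) for x in data)

alpha_map = max(alphas, key=log_posterior)
print(alpha_map)  # posterior mode; should land near alpha_true
```

Note that the sample mean is useless here (the Cauchy distribution has no mean), which is part of what makes the problem a nice test case for comparing approaches.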

0hugh14y
I don't have the book you're referring to. Are you essentially going to walk through a solution for this [pdf], or at least talk about point #10? This is a Bayesian problem; the Frequentist answer is the same, just more convoluted because they have to say things like "in 95% of similar situations, the estimates of a and b are within d of the real position of the lighthouse". Alternately, a Frequentist, while always ignorant when starting a problem, never begins wrong. In this case, if the chosen prior was very unsuitable, the Frequentist more quickly converges to a correct answer.
0wnoise14y
Yes, that was the plan. I thought Frequentists would not be willing to concede that, but would insist that any problem has a perfectly good Frequentist solution. I want to see not just the Frequentist solution, but the derivation of the solution.

What programming language should I learn?

As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.

  • I'm not completely illiterate. I know the 'basics' of programming. Nevertheless, I want to start from the very beginning.
  • I have no particular goal in mind that demands a practical orientation. My aim is to acquire a general knowledge of computer programming to be used as a starting point that I can build upon.

I'm thinking about starting with Processing and Lua. What do you think?

In an amazing coincidence, many of the suggestions you get will be the suggester's current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They'll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.

Others have said this but you can't hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won't enjoy it.

So, what language should you learn first? Well the answer is... (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, "I'm trying to do X." instead of, "I'm having a problem getting feature blah blah blah to work.")

I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as starting point that I can build upon.

I paused after reading this. The ... (read more)

6XiXiDu14y
Motivation is not my problem these days. It was all through my youth, and partly the reason that I completely failed at school. Now an almost primal fear of staying dumb and a nagging curiosity to gather knowledge, learn and understand trump any lack of motivation or boredom. Seeing how far above the average person you people here at lesswrong.com are makes me strive to approximate your wit. In other words, it's already enough motivation to know the basics of a programming language like Haskell, when average Joe is hardly self-aware but a mere puppet. I don't want to be one of them anymore.
4NancyLebovitz14y
If motivation is no longer a problem for you, that could be something really interesting for the akrasia discussions. What changed so that motivation is no longer a problem?
5[anonymous]14y
Being an eyewitness to your own motives and growing-up is a tough exercise to carry out accurately. I believe that it would be of no help in the mentioned discussions. It is rather inherent, something neurological. I grew up in a very religious environment. Any significance, my goals, were mainly set to focus on being a good Christian. Although I assume it never reached my 'inner self', I consciously tried to motivate myself to reach this particular goal out of fear of dying. But on a rather unconscious level it never worked; this goal has always been ineffectual. At the age of 13, my decision to become vegetarian changed everything. With all my heart I came to the conclusion that something is wrong about all the pain and suffering. A sense for human suffering was still effectively dimmed, due to a whole life of indoctrination telling me that our pain is our own fault. But what about the animals? Why would an all-loving God design the universe this way? To cut a long story short, still believing, it made me abandon this God. With the onset of the Internet here in Germany I then learnt that there was nothing to abandon in the first place...I guess I won't have to go into details here. Anyway, that was just one of the things that changed. I'm really bad when it comes to social things. Thus I suffered a lot in school; it wasn't easy. Those problems with other kids, a lack of concentration, and the fact that I always found the given explanations counterintuitive and hard to follow, dimmed any motivation to learn more. All these problems rather caused me to associate education with torture; I wanted it to end. Though curiosity was always a part of my character. I've probably been the only kid who liked to watch documentaries and the news at an early age. Then there is the mental side I mentioned at the beginning. These are probably the most important reasons for all that happened and happens in my life. I got quite a few tics and psychological problems. When I was a kid I was sufferi
2NancyLebovitz14y
Thank you very much for writing this up. It wouldn't surprise me a bit if akrasia has a neurological basis, and I'm a little surprised that I haven't seen any posts really looking at it from that angle. Dopamine? And on the other hand, your story is also about ideas and circumstances that undercut motivation.
-1[anonymous]14y
Those who restrain desire, do so because theirs is weak enough to be restrained. -- William Blake I haven't read up on the akrasia discussions. I don't believe in intelligence. I believe in efficiency regarding goals stated in advance. It's all about what we want and how to achieve it. And what we want is merely 'the line of least resistance'. Whatever intelligence is, it can't be intelligent all the way down. It's just dumb stuff at the bottom. -- Andy Clark The universe really just exists. And it appears to us that it is unfolding because we are part of it. We appear to each other to be free and intelligent because we believe that we are not part of it. There is a lot of talk here on LW about how to become less wrong. That works. Though it is not a proactive approach but simply trial and error, allowed for by the mostly large error tolerance of our existence. It's all about practicability, what works. If prayer worked, we'd use it if we wanted to use it. Narns, Humans, Centauri… we all do what we do for the same reason: because it seems like a good idea at the time. -- G’Kar, Babylon 5 Anything you learn on lesswrong.com you'll have to apply by relying on fundamental non-intelligent processes. You can only hope to be lucky enough to learn in time to avoid fatal failure, since no possible system can use advanced heuristics to tackle, or even evaluate, every stimulus. For example, at what point are you going to use Bayesian statistics? You won't even be able to evaluate the importance of all data so as to judge when to apply more rigorous tools. You can only be a passive observer who's waiting for new data by experience. And until new data arrives, rely on prior knowledge. A man can do what he wants, but not want what he wants. -- Arthur Schopenhauer Thus I don't think that a weakness of will exists. I also don't think that you can do anything but your best. What the right thing to do is always depends on what you want. Never you do something that
6Paul Crowley14y
I think the path outlined in ESR's How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn't a bad idea at all. Perl is no longer important; you probably need never learn it.
6[anonymous]14y
First, I do not think that learning to program computers must be part of a decent education. Many people learn to solve simple integrals in high school, but the effect, beyond simple brain-training, is nil. For programming it's the same. Learning to program well takes years. I mean years of full-time studying/programming etc. However, if you really want to learn programming, the first question is not the language, but what you want to do. You learn one language until you have built up some self-confidence, then learn another. The "what" typically breaks down very early. Sorry, I cannot give you any hints on this. And, as a first exercise, you should post this question (or search for answers to this question, as it has been posted too many times already) on the correct forums for programming questions. Finding those forums is the first step towards learning programming. You'll never be able to keep all the required facts for programming in your head. I've never heard of Processing, but I like Lua (more than Python), and Lisp. However, even Java is just fine. Don't get into the habit of thinking that mediocre languages inhibit your progress. At the beginning, nearly all languages are more advanced than you.
5XiXiDu14y
What I want is to be able to understand, to attain a more intuitive comprehension of, concepts associated with other fields that I'm interested in, which I assume are important. As a simple example, take this comment by RobinZ. Not that I don't understand that simple statement. As I said, I already know the 'basics' of programming. I thoroughly understand it. Just so you get an idea. In addition to reading up on all the lesswrong.com sequences, I'm mainly into mathematics and physics right now. That's where I have the biggest deficits. I see my planned 'study' of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a 'Turing machine'. And I do not agree that the effect is nil. I believe that programming is one of the foundations necessary to understand. I believe that there are 4 cornerstones underlying human comprehension. From there you can go everywhere: Mathematics, Physics, Linguistics and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important: information.
4[anonymous]14y
Well, now that I understand your intentions a little bit better (and having read through the other comments), I seriously want to second the recommendation of Scheme. Use DrScheme as your environment (zero hassle), and go through SICP and HTDP. Algorithms are nice -- Knuth's series and so on -- but they may be more than you are asking for. Project Euler is a website where you can find inspiration for problems you may want to solve. Scheme as a language has the advantage that you will not need time to wrap your head around ugly syntax (most languages, except for Lua and maybe Python), memory management (C), or mathematical purity (Haskell, Prolog). AFAIK Scheme also distinguishes between exact numbers (rationals, limited only by RAM) and inexact numbers (floating point) -- a regular source of confusion for people trying to write numeric code for the first time. The trade-offs are quite different for professional programmers, though. edit: welcome to the web, using links!
4Morendil14y
Consider finding a Coding Dojo near your location. There is a subtle but deep distinction between learning a programming language and learning how to program. The latter is more important and abstracts away from any particular language or any particular programming paradigm. To get a feeling for the difference, look at this animation of Paul Graham writing an article - crossing the chasm between ideas in his head and ideas expressed in words. (Compared to personal experience this "demo" simplifies the process of writing an article considerably, but it illustrates neatly what books can't teach about writing.) What I mean by "learning how to program" is the analogue of that animation in the context of writing code. It isn't the same as learning to design algorithms or data structures. It is what you'll learn about getting from algorithms or data structures in your head to algorithms expressed in code. Coding Dojos are an opportunity to pick up these largely untaught skills from experienced programmers.
4hugh14y
I agree with everything Emile and AngryParsley said. I program for work and for play, and use Python when I can get away with it. You may be shocked that, like AngryParsley, I will recommend my favorite language! I have an additional recommendation though: to learn to program, you need to have questions to answer. My favorite source of fun programming problems is Project Euler. It's very math-heavy, and it sounds like you might like learning the math as much as learning the programming. Additionally, every problem, once solved, has a forum thread opened where many people post their solutions in many languages. Seeing better solutions to a problem you just solved on your own is a great way to rapidly advance.
4nhamann14y
As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts. After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start. Just keep in mind that when starting out learning programming, it's probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.
4Douglas_Knight14y
Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that's a pretty good reason to choose it. Even if you don't have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play wow, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I'd recommend it. If you play with web pages, javascript... At least that's my advice for one style of learning, a style suggested by your mention of those two languages, but almost opposite from your "Nevertheless, I want to start from the very beginning," which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don't work. That's not to disagree with wnoise about the value of Haskell either practical or educational, but I'm skeptical about it as an introduction. ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.

Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.

The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.

So you end up with newcomers to Haskell trying to simultaneously:

  • Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
  • Unlearn existing habits from other languages
  • Learn about intimidating math-y-sounding things

And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.

But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.

The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
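For what it's worth, here's roughly what that "looks simple in most other languages" case amounts to -- a Python sketch (my own toy example) of Maybe-style failure propagation, the kind of plumbing that Haskell's Maybe monad abstracts away entirely:

```python
# A sketch of the Maybe monad in Python: `bind` chains computations that
# may fail (signalled by None), threading the failure check automatically.
def bind(value, func):
    return None if value is None else func(value)

def safe_div(x, y):
    return None if y == 0 else x / y

# 100 / 5 / 4, with failure propagated instead of raised:
result = bind(bind(100, lambda v: safe_div(v, 5)), lambda v: safe_div(v, 4))
print(result)  # 5.0

# Any failure short-circuits the rest of the chain:
print(bind(safe_div(1, 0), lambda v: safe_div(v, 4)))  # None
```

In Python this is just an unremarkable idiom; the monad abstraction's contribution is noticing that this pattern, state-threading, list traversal, I/O sequencing, and many others are all instances of one interface.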

3XiXiDu14y
I learnt about Lua thru Metaplace, which is now dead. I heard about Processing via Anders Sandberg. I'm always fascinated by data visualisation. I thought Processing might come in handy. Thanks for mentioning SICP. I'll check it out.
2gwern14y
I'm going through SICP now. I'm not getting as much out of it as I expected, because much of it I already know, is uninteresting to me since I expect lazy evaluation due to Haskell, or is just tedious (I got sick pretty quick with the authors' hard-on for number theory).
2SoullessAutomaton14y
SICP is nice if you've never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don't immediately make you yawn just by looking at them. Scheme isn't much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days. Haskell does tend to ruin one for other languages, though lately I've been getting slightly frustrated with some of Haskell's own limitations...
4wnoise14y
Personally, I'm a big fan of Haskell. It will make your brain hurt, but that's part of the point -- it's very good at easily creating and using mathematically sound abstractions. I'm not a big fan of Lua, though it's a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it's not a bad choice at all.
0gwern14y
Toss in another vote for Haskell. It was my first language (and back before Real World Haskell was written); I'm happy with that choice - there were difficult patches, but they came with better understanding.
0XiXiDu14y
Thanks, I didn't know about Haskell, sounds great. Open source and all. I think you already convinced me.

I wouldn't recommend Haskell as a first language. I'm a fan of Haskell, and the idea of learning Haskell first is certainly intriguing, but it's hard to learn, hard to wrap your head around sometimes, and the documentation is usually written for people who are at least computer science grad student level. I'm not saying it's necessarily a bad idea to start with Haskell, but I think you'd have a much easier time getting started with Python.

Python is open source, thoroughly pleasant, widely used and well-supported, and is a remarkably easy language to learn and use, without being a "training wheels" language. I would start with Python, then learn C and Lisp and Haskell. Learn those four, and you will definitely have achieved your goal of learning to program.

And above all, write code. This should go without saying, but you'd be amazed how many people think that learning to program consists mostly of learning a bunch of syntax.

6SoullessAutomaton14y
I have to disagree on Python; I think consistency and minimalism are the most important things in an "introductory" language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don't think about. I'd lean toward either C (for learning the "pushing electrons around silicon" end of things) or Scheme (for learning the "abstract conceptual elegance" end of things). It helps that both have excellent learning materials available. Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some "mainstream" programming and wants to stretch their brain.
5sketerpot14y
You make some good points, but I still disagree with you. For someone who's trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I've always thought that the quickest way to learn programming was to do programming, and until you've been doing it for a while, you won't understand it.
3SoullessAutomaton14y
Well, I admit that my thoughts are colored somewhat by an impression--acquired by having made a living from programming for some years--that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase "software engineer" with a straight face! But I'll leave it at that, lest I start quoting Dijkstra. Back on topic, I do agree that being able to start doing things quickly--both in terms of producing interesting results and getting rapid feedback--is important, but not the most important thing.
2XiXiDu14y
I want to achieve an understanding of the basics without necessarily being able to be a productive programmer. I want to get a grasp of the underlying nature of computer science, not be able to mechanically write and parse code to solve certain problems. The big picture and underlying nature are what I'm looking for. I agree that many people do not understand; they really only learnt how to mechanically use something. How much does the average person know about how one of our simplest tools works, the knife? What does it mean to cut something? What does the act of cutting accomplish? How does it work? We all know how to use this particular tool. We think it is obvious, thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms of that which we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear to be simple and obvious.
5wnoise14y
Then you do not, in fact, need to learn to program. You need an actual CS text, covering finite automata, pushdown machines, Turing machines, etc. Learning to program will illustrate and fix these concepts more closely, and is a good general skill to have.
0XiXiDu14y
Recommendations on the above? Books, essays...
2hugh14y
Sipser's Introduction to the Theory of Computation is a tiny little book with a lot crammed in. It's also quite expensive, and advanced enough to make most CS students hate it. I have to recommend it because I adore it, but why start there, when you can start right now for free on wikipedia? If you like it, look at the references, and think about buying a used or international copy of one book or another. I echo the reverent tones of RobinZ and wnoise when it comes to The Art of Computer Programming. Those volumes are more broadly applicable, even more expensive, and even more intense. They make an amazing gift for that computer scientist in your life, but I wouldn't recommend them as a starting point.
0RobinZ14y
Elsewhere wnoise said that SICP and Knuth were computer science, but additional suggestions would be nice.
2wnoise14y
Well, they're computer sciencey, but they are definitely geared to approaching from the programming, even "Von Neumann machine" side, rather than Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them. For my undergraduate work, I used two books. The first is Jan L. A. van de Snepscheut's What Computing Is All About. It is, unfortunately, out-of-print. The second was Elements of the Theory of Computation by Harry Lewis and Christos H. Papadimitriou.
3SoullessAutomaton14y
Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!
3Douglas_Knight14y
You probably should have spelled out that SICP is on the λ-calculus side.
0wedrifid14y
Gah. Do I need to add this to my reading list?
6Douglas_Knight14y
You seem to already know Lisp, so probably not. Read the table of contents. If you haven't written an interpreter, then yes. The point in this context is that when people teach computability theory from the point of view of Turing machines, they wave their hands and say "of course you can emulate a Turing machine as data on the tape of a universal Turing machine," and there's no point to fill in the details. But it's easy to fill in all the details in λ-calculus, even a dialect like Scheme. And once you fill in the details in Scheme, you (a) prove the theorem and (b) get a useful program, which you can then modify to get interpreters for other languages, say, ML. SICP is a programming book, not a theoretical book, but there's a lot of overlap when it comes to interpreters. And you probably learn both better this way. I almost put this history lesson in my previous comment: Church invented λ-calculus and proposed the Church-Turing thesis that it is the model of all that we might want to call computation, but no one believed him. Then Turing invented Turing machines, showed them equivalent to λ-calculus and everyone then believed the thesis. I'm not entirely sure why the difference. Because they're more concrete? So λ-calculus may be less convincing than Turing machines, hence pedagogically worse. Maybe actually programming in Scheme makes it more concrete. And it's easy to implement Turing machines in Scheme, so that should convince you that your computer is at least as powerful as theoretical computation ;-)
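To make the "fill in the details" point concrete, here's a toy evaluator (a Python sketch of my own, not from SICP) that turns λ-terms, represented as nested tuples, into host-language closures -- the same move a Scheme metacircular interpreter makes:

```python
# A minimal evaluator for the untyped lambda calculus, with terms as tuples:
# ('var', name), ('lam', name, body), ('app', fun, arg).
def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]          # look the variable up in the environment
    if tag == 'lam':
        _, name, body = term
        # A lambda term becomes a Python closure over the current environment.
        return lambda arg: evaluate(body, {**env, name: arg})
    if tag == 'app':
        _, fun, arg = term
        return evaluate(fun, env)(evaluate(arg, env))
    raise ValueError(tag)

# Church numeral 2 = \f.\z. f (f z), applied to successor and 0:
two = ('lam', 'f', ('lam', 'z', ('app', ('var', 'f'),
                                 ('app', ('var', 'f'), ('var', 'z')))))
print(evaluate(two, {})(lambda n: n + 1)(0))  # 2
```

Fifteen lines, and every detail of "a program emulating programs" is filled in -- which is the pedagogical contrast with the hand-waved universal Turing machine construction.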
6Eliezer Yudkowsky14y
Um... I think it's a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not. EDIT: It's interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who's not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.
6SoullessAutomaton14y
Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java. Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing remotely sensible. There's plenty of ridiculously opaque models of computation (Post's tag machine, Conway's Life, exponential Diophantine equations...) but I can't begin to imagine one that would be more comprehensible than untyped lambda calculus.
2Tyrrell_McAllister14y
I'm pretty sure that Eliezer meant that Turing machines are better for giving novices a "model of computation". That is, they will gain a better intuitive sense of what computers can and can't do. Your students might not be able to implement much, but their intuitions about what can be done will be better after just a brief explanation. So, if your goal is to make them less crazy regarding the possibilities and limitations of computers, Turing machines will give you more bang for your buck.
4Morendil14y
A friend of mine has invented a "Game of Lambda" played with physical tokens which look like a bigger version of the hexes from wargames of old, with rules for function definition, variable binding and evaluation. He has a series of exercises requiring players to create functions of increasing complexity; plus one, factorial, and so on. Seems to work well. Alligator Eggs is another variation on the same theme.
3wnoise14y
You realize you've just called every computer scientist inhuman? Turing machines are something one can easily imagine implementing in hardware. The typical encoding of some familiar concepts into lambda calculus takes a bit of getting used to (natural numbers as functions which compose their argument (as a function) n times? If-then-else as function application, where "true" is a function returning its first argument, and "false" is a function returning its second? These are decidedly odd). But lambda calculus is composable. You can take two definitions and merge them together nicely. Combining useful features from two Turing machines is considerably harder. The best route to usable programming there is the UTM + stored code, which you have to figure out how to encode sanely.
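For anyone who wants to poke at these odd-looking encodings, they transcribe directly into any language with first-class functions; here's a Python sketch:

```python
# Church encodings rendered as Python lambdas: a numeral composes its
# argument n times, and a boolean selects between two alternatives.
zero = lambda f: lambda z: z
succ = lambda n: lambda f: lambda z: f(n(f)(z))
true = lambda a: lambda b: a
false = lambda a: lambda b: b
if_then_else = lambda p: lambda a: lambda b: p(a)(b)

to_int = lambda n: n(lambda x: x + 1)(0)  # decode by composing +1 with itself
three = succ(succ(succ(zero)))
print(to_int(three))                      # 3
print(if_then_else(true)(42)(0))          # 42
```

Once written out, the "oddness" is just that everything, including data, is a function waiting to be applied.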
3wedrifid14y
Just accept the compliment. ;)
2wedrifid14y
Of course, not so odd for anyone who uses Excel...
2SoullessAutomaton14y
Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.) And no looking it up, that's cheating! Took me the better part of a day to figure it out, it's a real mind-twister.
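Since Python has first-class lambdas, the encoding is easy to play with directly. Here is a small sketch of the numerals and booleans described above, deliberately stopping short of the subtraction puzzle so as not to spoil it:

```python
# Church numerals as plain Python lambdas: the numeral n is a function
# that applies its first argument n times to its second, exactly as in
# the parent comment (0 = λf.λz.z, 1 = λf.λz.(f z), ...).
zero = lambda f: lambda z: z
succ = lambda n: lambda f: lambda z: f(n(f)(z))
add = lambda m: lambda n: lambda f: lambda z: m(f)(n(f)(z))

# Church booleans: "true" returns its first argument, "false" its second.
true = lambda a: lambda b: a
false = lambda a: lambda b: b

def to_int(n):
    """Decode a Church numeral by counting how often f gets applied."""
    return n(lambda x: x + 1)(0)

one = succ(zero)
two = succ(one)
```

Addition falls out almost for free; the predecessor function (and hence subtraction) is the part that takes the better part of a day.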
1rwallace14y
It's much of a muchness; in pure form, both are incomprehensible for nontrivial programs. Practical programming languages have aspects of both.
0Douglas_Knight14y
Maybe pure lambda calculus is not humanly comprehensible, but general recursion is as comprehensible as Turing machines, yet Gödel rejected it. My history should have started when Church promoted that.
0hugh14y
I think that λ-calculus is about as difficult to work with as Turing machines. I think the reason that Turing gets his name in the Church-Turing thesis is that they had two completely different architectures that had the same computational power. When Church proposed that λ-calculus was universal, I think there was a reaction of doubt, and a general feeling that a better way could be found. When Turing came to the same conclusion from a completely different angle, that appeared to verify Church's claim. I can't back up these claims as well as I'd like. I'm not sure that anyone can backtrace what occurred to see if the community actually felt that way or not; however, from reading papers of the time (and quite a bit thereafter---there was a long period before near-universal acceptance), that is my impression.
0Douglas_Knight14y
Actually, the history is straight-forward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did. ETA: well, it's straight-forward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing's paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton. EDIT2: so maybe that is what you said originally.
4SoullessAutomaton14y
It's also worth noting that Curry's combinatory logic predated Church's λ-calculus by about a decade, and also constitutes a model of universal computation. It's really all the same thing in the end anyhow; general recursion (e.g., Curry's Y combinator) is on some level equivalent to Gödel's incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
0wedrifid14y
I know the principles but have never taken the time to program something significant in the language. Partly because it just doesn't have the libraries available to enable me to do anything I particularly need to do and partly because the syntax is awkward for me. If only the name 'lisp' wasn't so apt as a metaphor for readability.
0wedrifid14y
Are you telling me lambda calculus was invented before Turing machines and people still thought the Turing machine concept was worth making ubiquitous?
4AngryParsley14y
Wikipedia says lambda calculus was published in 1936 and the Turing machine was published in 1937. I'm betting it was hard for the first computer programmers to implement recursion and call stacks on early hardware. The Turing machine model isn't as mathematically pure as lambda calculus, but it's a lot closer to how real computers work.
4Douglas_Knight14y
I think the link you want is to the history of the Church-Turing thesis.
2SoullessAutomaton14y
The history in the paper linked from this blog post may also be enlightening!
1orthonormal14y
Why not? People have a much easier time visualizing a physical machine working on a tape than visualizing something as abstract as lambda-calculus. Also, the Turing machine concept neatly demolishes the "well, that's great in theory, but it could never be implemented in practice" objections that are so hard to push people past.
0wedrifid14y
Because I am biased to my own preferences for thought. I find visualising the lambda-calculus simpler because Turing Machines rely on storing stupid amounts of information in memory because, you know, it'll eventually do anything. It just doesn't feel natural to use a kludgy technically complete machine as the very description of what we consider computationally complete.
0orthonormal14y
Oh, I agree. I thought we were talking about why one concept became better-known than the other, given that this happened before there were actual programmers.
0RobinZ14y
Any opinion on the 2nd edition of Elements?
0wnoise14y
Nope. I used the first edition. I wouldn't call it a "classic", but it was readable and covered the basics.
3RobinZ14y
I, unfortunately, am merely an engineer with a little BASIC and MATLAB experience, but if it is computer science you are interested in, rather than coding, count this as another vote for SICP. Kernighan and Ritchie is also spoken of in reverent tones (edit: but as a manual for C, not an introductory book - see below), as is The Art of Computer Programming by Knuth. I have physically seen these books, but not studied any of them - I'm just communicating a secondhand impression of the conventional wisdom. Weight accordingly.
4wnoise14y
Kernighan and Ritchie is a fine book, with crystal clear writing. But I tend to think of it as "C for experienced programmers", not "learn programming through C". TAoCP is "learn computer science", which I think is rather different than learning programming. Again, a fine book, but not quite on target initially. I've only flipped through SICP, so I have little to say.
0RobinZ14y
TAoCP and SICP are probably both computer science - I recommended those particularly as being computer science books, rather than elementary programming. I'll take your word on Kernighan and Ritchie, though - put that one off until you want to learn C, then.
1XiXiDu14y
Merely an engineer? I've failed to acquire a leaving certificate of the lowest kind of school we have here in Germany. Thanks for the hint at Knuth, though I already came across his work yesterday. Kernighan and Ritchie are new to me. SICP is officially on my must-read list now.
0RobinZ14y
A mechanical engineering degree is barely a qualification in the field of computer programming, and not at all in the field of computer science. What little knowledge I have I acquired primarily through having a very savvy father and secondarily through recreational computer programming in BASIC et al. The programming experience is less important than the education, I wager.
0XiXiDu14y
Yes, of course, I misinterpreted what you said. Do you think that somebody in your field will, in the future, be able to get by without computer programming? While talking to neuroscientists I learnt that it is almost impossible to get what you want, in time, by explaining what you need to a programmer who has no degree in neuroscience while you yourself don't know anything about computer programming.
0RobinZ14y
I'm not sure what you mean - as a mechanical engineer, 99+% percent of my work involves purely classical mechanics, no relativity or quantum physics, so the amount of programming most of us have to do is very little. Once a finite-element package exists, all you need is to learn how to use it.
0XiXiDu14y
I've just read the abstract on Wikipedia and I assumed that it might encompass what you do. I thought computer modeling and simulations might be very important in the early stages, followed shortly by field tests with miniature models. Even there you might have to program the tools that give shape to the ultimate parts. Though I guess if you work in a highly specialized area, that is not the case.
4RobinZ14y
I couldn't build a computer, a web browser, a wireless router, an Internet, or a community blog from scratch, but I can still post a comment on LessWrong from my laptop. Mechanical engineers rarely need to program the tools, they just use ANSYS or SolidWorks or whatever. Edit: Actually, the people who work in highly specialized areas are more likely to write their own tools - the general-interest areas have commercial software already for sale.
0AdeleneDawner14y
Bear in mind that I'm not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren't for beginners.) I'd suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn't cover.
5XiXiDu14y
'Follow conversations', indeed. That's what I mean. Being able to grasp concepts that involve 'symbolic computation' and information processing by means of formal language. I don't aim at actively taking part in productive programming. I don't want to become a poet, I want to be able to appreciate poetry, perceive its beauty. Take English as an example. Only a few years ago I seriously started to learn English. Before I could merely chat while playing computer games LOL. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.
2wnoise14y
"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration." --Edsger W Dijkstra. More modern versions aren't that bad, and it's not quite fair to tar them with the same brush, but I still wouldn't recommend learning any of them for their own sake. If there is a need (like modifying an existing codebase), then by all means do.
4SoullessAutomaton14y
Dijkstra's quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn't actually a bad language at all. On the other hand, it also lacks much of the "easy to pick up and experiment with" aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.
0XiXiDu14y
Yeah, you won't be able to be very productive regarding bottom-up groundwork. But you'll be able to look into existing works and gain insights. Even if you forget a lot, something will stick and help you to pursue a top-down approach. You'll be able to look into existing code, edit it, and regain lost or learn new knowledge more quickly.
1wedrifid14y
Agree with where you place Python, Scheme and Haskell. But I don't recommend C. Don't waste time there until you already know how to program well. Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
1SoullessAutomaton14y
C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though I don't think that's really what XiXiDu is looking for.
1wedrifid14y
Agree on where C is useful and got the same impression about the applicability to XiXiDu's (where on earth does that name come from?!?) goals. I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn't meet your 'minimalist' ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I've converted to primarily using a language that relies on duck-typing.
7SoullessAutomaton14y
"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." -- Alan Kay C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language. C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages: * It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior. * Templates are a clunky, disappointing imitation of real metaprogramming. * Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake. * It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety. * Combining error handling via exceptions with manual memory management is frankly absurd. * The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects. I could elaborate further, but it's too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical "real" OO language, but I'd probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities). ETA: Well, that came out awkwardly verbose. Apologies.
0wedrifid14y
I'm sure I could manage 1k before I considered the point settled and moved on to a language that isn't a decades-old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity-based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals then C++ will give you that over a broader area of nuts and bolts. This is the one point I disagree with, and I do so both on the assertion 'almost uniformly' and also the concept itself. As far as experts in Object Oriented programming go, Bertrand Meyer is certainly one, and his book 'Object-Oriented Software Construction' is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design and not inherent to the mechanism. In fact, (similar, inheritance based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction. Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.
2ata14y
Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven't gotten around to it yet.
0SoullessAutomaton14y
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++. Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn't suggest for learning purposes, either. Well, the problem isn't really multiple inheritance itself, it's the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse. Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn't really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we've all seen trick questions about "okay, which method will this call?"). Something closer to a simple type predicate, like the interfaces in Google's Go language or like Haskell's type classes, is much less painful here. Or of course duck typing, if static type-checking isn't your thing. Compositional code reuse in objects--what I meant by "implementation inheritance"--also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details. The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle).
0wedrifid14y
Of course, but I'm more considering 'languages to learn that make you a better programmer'. Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move. I don't agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma 'multiple inheritance is bad' and don't allow generics enforce bad habits while at the same time insisting that they are the True Way. I think I agree on this note, with certain restrictions on what counts as 'civilized'. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps python too.
0wedrifid14y
The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn't do it in Java or .NET (except Eiffel.NET).
0wnoise14y
Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal. This argues for interfaces, not multiple implementation inheritance. And implementation inheritance can easily be emulated by containment and method forwarding, though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that's trivial in Smalltalk or Objective-C... The hard part that no language has a good solution for is objects which can be the same type two (or more) different ways.
6wedrifid14y
I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the possible risk of tetanus.
0wnoise14y
Upvoted purely for the image.
0wedrifid14y
Eiffel does (in, obviously, my opinion).
0wnoise14y
It does handle the diamond inheritance problem as best as can be expected -- the renaming feature is quite nice. Though related, this isn't what I'm concerned with. AFAICT, it really doesn't handle it in a completely general way. (Given the type-system you can drive a bus through (covariant vs contravariant arguments), I prefer Sather, though the renaming feature there is more persnickety -- harder to use in some common cases.) Consider a lattice. It is a semilattice in two separate dual ways, with the join operation, and the meet operation. If we have generalized semi-lattice code, and we want to pass it a lattice, which one should be used? How about if we want to use the other one? In practice, we can call these a join-semilattice, and a meet-semilattice, have our function defined on one, and create a dual view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or ... There is a math abstraction called a monoid, for an associative operator with identity. Haskell has a corresponding typeclass, with such things as lists as instances, with catenation as the operator, and the empty list as identity. I don't have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code. So, suppose we want to make Integers an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways. There is no natural duality here we can take advantage of (as we could with the lattice example.) The consensus in the community has been to not make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, with thus trivial conversion costs. There is also a newtype for dual monoids, formalizing a particular duality idea similar to the lattice example.
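The Integer ambiguity can be made concrete in Python rather than Haskell, using explicit dictionary-passing instead of newtypes (an illustrative sketch, not how GHC does it): the same integers form two perfectly good monoids, so something outside the values themselves has to select which instance you mean.

```python
from functools import reduce

# A monoid packaged explicitly as an (operation, identity) pair --
# the choice that Haskell's Sum and Product newtypes make via the type.
sum_monoid = (lambda a, b: a + b, 0)
product_monoid = (lambda a, b: a * b, 1)

def mconcat(monoid, xs):
    """Fold a sequence with a monoid; an empty sequence gives the identity."""
    op, identity = monoid
    return reduce(op, xs, identity)
```

The same list of integers folds to different results depending on which monoid you hand in, which is exactly why a by-type instance lookup has to pick one.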
0wedrifid14y
Sather looks interesting but I haven't taken the time to explore it. (And yes, covariance vs contravariance is a tricky one.) Both these languages also demonstrate the real (everyday) use for C... you compile your actual code into it.
0wnoise14y
I don't think Sather is a viable language at this point, unfortunately. Yes, C is useful for that, though c-- and LLVM are providing new paths as well. I personally think C will stick around for a while because getting it running on a given architecture provides a "good enough" ABI that is likely to be stable enough that HLLs' FFIs can depend on it.
2wnoise14y
I put C++ as a "learn only if needed language". It's extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++ given by which features the writers are willing to use, and which are considered too dangerous.
0XiXiDu14y
Yeah, C is probably mandatory if you want to be serious with computer programming. Thanks for mentioning Scheme, haven't heard about it before... Haskell sounds really difficult. But the more I hear how hard it is, the more intrigued I am.
0XiXiDu14y
Thanks, I'll sure get into those languages. But I think I'll just try and see if I can get into Haskell first. I'm intrigued after reading the introduction. If I get stuck, I'll take the route you mentioned.
3hugh14y
Relevant answer to this question here, recently popularized on Hacker News.
2Emile14y
I'd weakly recommend Python, it's free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple HTML pages ...), is well-designed and has features you'll encounter in other languages (classes, functional programming ...), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it. But a lot of those advantages apply to languages like Ruby. If you want to go into more exotic languages, I'd suggest Scheme over Haskell, it seems more beginner-friendly to me. It mostly depends on what occasions you'll have to use it: if you have a website, JavaScript might be better; if you like making game mods, go for Lua. It also depends on who you know that can answer questions. If you have a good friend who's a good teacher and a Java expert, go for Java.
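As a taste of the "rename and reorganize files" sort of chore: a toy sketch (the file names are invented, and the script builds its own scratch directory so it can run anywhere) that normalizes uppercase .TXT extensions to lowercase.

```python
import tempfile
from pathlib import Path

# Make a scratch directory with some dummy files to work on.
scratch = Path(tempfile.mkdtemp())
for name in ("notes.TXT", "data.csv", "log.TXT"):
    (scratch / name).touch()

# Rename every uppercase .TXT file to use a lowercase .txt extension.
renamed = []
for path in sorted(scratch.glob("*.TXT")):
    target = path.with_suffix(".txt")
    path.rename(target)
    renamed.append(target.name)
```

Half a dozen lines in the interactive interpreter is all a job like this takes, which is much of Python's appeal for a beginner.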
1CronoDAS14y
My first language was, awfully enough, GW-Basic. It had line numbers. I don't recommend anything like it. My first real programming language was Perl. Perl is... fun. ;)
1Morendil14y
I recommend Haskell (more fun) or Ruby (more mainstream).
0mkehrt14y
I recommend Python as well. Python has clean syntax, enforces good indentation and code layout, has a large number of very useful libraries, doesn't require a lot of boilerplate to get going but still has good mechanisms for structuring code, has good support for a variety of data structures built in, a read-eval-print loop for playing around with the language, and a lot more. If you want to learn to program, learn Python. (Processing is probably very good, too, for interesting you in programming. It gives immediate visual feedback, which is nice, but it isn't quite as general purpose as Python. Lua I know very little about.) That being said, Python does very little checking for errors before you run your code, and so is not particularly well suited for large or even medium-sized, complex programs where your own reasoning is not sufficient to find errors. For these, I'd recommend learning other languages later on. Java is probably a good second language. It requires quite a bit more infrastructure to get something up and running, but it has great libraries and steadily increasing ability to track down errors in code when it is compiled. After that, it depends on what you want to do. I would recommend Haskell if you are looking to stretch your mind (or OCaml if you are looking to stretch it a little less ;-)). On the other hand, if you are looking to write useful programs, C is probably pretty good, and will teach you more about how computers work. C++ is popular for a lot of applications, so you may want to learn it, but I hate it as an unprincipled mess of language features haphazardly thrown together. I'd say exactly the same thing about most web languages (Javascript (which is very different from Java), Ruby, PHP, etc.) Perl is incredibly useful for small things, but very hard to reason about. (As to AngryParsley's comment about people recommending their favorite languages, mine are probably C, Haskell and OCaml, which I am not recommending first.)
0wedrifid14y
Those two seem great, Lua in particular seems to match exactly the purpose you describe.

I'm confused about Nick Bostrom's comment [PDF] on Robin Hanson's Great Filter idea. Roughly, it says that in a universe like ours that lacks huge intergalactic civilizations, finding fish fossils on Mars would be very bad news, because it would imply that evolving to fish phase isn't the greatest hurdle that kills most young civilizations - which makes it more likely that the greatest hurdle is still ahead of us. I think that's wrong because finding fish fossils (and nothing more) on Mars would only indicate a big hurdle right after the fish stage, but sh...

It makes the hurdle less likely to be before the fish stage, so more likely to be after the fish stage. While the biggest increase in probability is immediately after the fish stage, all subsequent stages are a more likely culprit now (especially as post-fish fossils could simply have been missed, or never formed at all).

4cousin_it14y
So finding evidence of life that went extinct at any stage whatsoever should make us revise our beliefs about the Great Filter in the same direction? Doesn't this violate conservation of expected evidence?
4Paul Crowley14y
Is there a counter-weighing bit of evidence every time we don't find evidence of life at all, and every time (if ever) we find evidence of non-extinct life?
0cousin_it14y
According to Hanson's article, non-extinct life that didn't reach sentience counts as failing the Great Filter, and no life at all also counts as failing at a very early stage. I believe my point still stands.
2FAWS14y
No, the total evidence for a great filter is conserved (lack of observable galactic colonization), the evidence merely shifts where we expect this great filter to be.
0cousin_it14y
Let's have another go. By Bostrom's logic, witnessing a failure of life at any stage (which includes life failing to develop in the first place) implies that the great filter happens later than we thought, because each failure tells us that the steps that preceded it (e.g. planet formation, liquid water, etc) probably didn't include the great filter. But life eventually fails on all planets except ours: the "silence of the sky" is a background assumption for the whole thesis. So what kind of evidence would tell us that the filter happens earlier than we thought?
9FAWS14y
Failing to find any life whatsoever on Mars would be evidence for the great filter being development of life or earlier (and thus evidence for the GF being earlier than we thought), but it is only very weak evidence of that, since even if life were very common (say 20% of all planets in the liquid water zone) we still wouldn't be very surprised by the absence of life on Mars in particular. Matters would be different after investigating hundreds of planets and failing to find signs of life anywhere. Finding (independent) life at any stage would be evidence for whatever step this life failed to make and/or whatever wiped it out being the great filter, but for any reasonably well understood step or mechanism of extinction (e. g. Mars losing most of its atmosphere) the shift in probability will be much smaller than the shift in probability for having life in the first place, which is a total unknown, and such life would also be evidence against a black swan before that point without discriminating between black swans between that point and us and after us. So the slightly increased probability of that particular GF wouldn't come anywhere close to making up the lost probability mass earlier, leaving a great filter after us much more likely. If we discover life wiped out by a black swan that would shift a lot of probability mass to that black swan, but it would have to be something not taken into account at all before, seeming certain to happen to almost all life after discovery, and also something that would be very unlikely to happen to us in the future if it was to make up for the lost probability mass for any earlier GF. A sufficiently certain seeming former black swan could even shift probability mass away from a GF after us, but that's not the way I'd bet.
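This reasoning is an ordinary Bayesian update, which a deliberately crude two-hypothesis toy model can illustrate (all numbers below are invented purely for illustration; nothing here estimates the real probabilities):

```python
# Two toy hypotheses about where the Great Filter sits, with a uniform
# prior. The observation is "independently evolved fish-stage fossils
# found on Mars". If the hard step comes before the fish stage, such
# fossils should be rare; if it comes after, they should be common.
prior = {"filter before fish stage": 0.5, "filter after fish stage": 0.5}
likelihood = {"filter before fish stage": 0.001,  # invented likelihoods
              "filter after fish stage": 0.1}

# Standard Bayes: posterior ∝ prior × likelihood, normalized.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
```

Running this pushes nearly all the probability mass onto the later filter, which is the point of the argument: the fossils barely discriminate among post-fish stages, but strongly disfavor every pre-fish candidate.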
0[anonymous]14y
This is a crucial step in your argument. It depends on our initial prior for extraterrestrial life being very low. If that prior were slightly higher, the argument would work just as well in reverse, or maybe balance out. There's something icky about this whole business.
0[anonymous]14y
Let's disregard this and focus on the purely theoretical argument. Assume our degree of belief that the Great Filter is ahead of us is X. Now we land on a new planet. If we find no evidence of life at all, X is unchanged. If we find some fossils at whatever stage, or any life at all (except an intergalactic civilization which we would've noticed before), then according to your reasoning, X should increase. This violates the Bayesian law of conservation of evidence, as X can only increase and never decrease.
0timtyler14y
Mars dried out a while ago. Finding fossils there would prove very little about the great filter - since they would probably be distant relatives of ours whose planet gave out on them (since the solar system is one big melting pot for life). Basically, it is a bad example.

For the "people say stupid things" file and a preliminary to a post I'm writing. There is a big college basketball tournament in New York this weekend. There are sixteen teams competing. This writer for the New York Post makes some predictions.

What is wrong with this article and how could you take advantage of the author?

Edit: Rot13 is a good idea here.

4Cyan14y
Gur cbfgrq bqqf qba'g tvir n gbgny cebonovyvgl bs bar, fb gurl'er Qhgpu-obbxnoyr.
2Hook14y
Abg dhvgr. Uvf ceboyrz vf gung gur bqqf nqq hc gb yrff guna bar. Vs V tnir lbh 1-2 bqqf ba urnqf naq 1-2 bqqf ba gnvyf sbe na haovnfrq pbva, gung nqqf hc gb 1.3, naq lbh pna'g Qhgpu obbx zr ba gung.
0RobinZ14y
Rot13: Hayrff gur bqqfznxre vf rabhtu bs na vqvbg gb yrg lbh gnxr gur bgure fvqr bs gur orgf, of course.
3Jack14y
Rot13: Vs lbh'er tvivat bqqf nf n cerqvpgvba lbh fubhyq or jvyyvat gb gnxr rvgure fvqr.
2Hook14y
Yes. That does seem to be the correct context for a critique of the article. I was thinking more along the lines of "giving odds" in terms of "offering bets" in order to make money (ie, a bookie).
0RobinZ14y
Rot13: Gehr - fnir gung xabjvat fbzrbar jnagf gb gnxr gur bgure fvqr znl vasyhrapr lbhe bqqf.
0[anonymous]14y
Rot13: Lrnu. Vg jbhyq pregnvayl or fhfcvpvbhf vs fbzrbar whfg rznvyrq gur thl naq bssrerq gb tvir uvz uvf bqqf sbe rirel fvatyr grnz. Lbh'q ubcr ur'q svther vg bhg gura. Zl -arire vagraqrq gb or vzcyrzragrq- cyna jnf gb unir 15 crbcyr pbagnpg uvz rnpu ercerfragvat gurzfryirf nf fbzrbar jub gubhtug ur jnf bireengvat bar bs gur grnzf naq tvir uvz uvf bqqf sbe gung grnz. Gura fcyvg gur jvaavatf nsgrejneq.
0Jack14y
Rot13: Pna lbh sbezhyngr n org be frevrf bs orgf gung jbhyq qb gur gevpx? Pna nalbar?
2FAWS14y
I thought this was already clear? Org K$ * vzcyvrq cebonovyvgl ba rirel grnz. Lbh ner thnenagrrq n arg jva bs K$ * (1 - fhz bs nyy vzcyvrq cebonovyvgvrf). What you really should do though is look at the past history of the tournament and the form of the teams, figure out which of those teams with silly odds have a decent shot at winning, take a risk and bet on some combination of them. You should stand a fairly decent chance of winning really big (unless this huge spread is actually justified, which seems unlikely).
0Cyan14y
Va gur bevtvany Qhgpu obbx, bqqf unir gb or bssrerq ba nyy pbzcbhaq riragf naq nyy pbaqvgvbany riragf. Vs gur nhgube vf jvyyvat gb hfr C(N be O) = C(N) + C(O) gb frg gur bqqf sbe qvfwhapgvbaf, gur cebcbfvgvba "ng yrnfg bar grnz jvaf" unf n cebonovyvgl bs friragl-avar creprag. Ur bhtug gb or jvyyvat gb org ntnvafg gung cebcbfvgvba ng bar trgf uvz sbhe.
0RobinZ14y
Props for the ROT13 - independently I got as far as the first half, but I didn't know how to do the latter. Wikipedia explained it quite well, though.
0FAWS14y
I don't understand how that's possible. Doesn't the answer to the first half imply the latter? How do you get sebz bqqf gb vzcyvrq cebonovyvgl otherwise?
0RobinZ14y
Rot13: V unqa'g dhvgr qenja gur pbaarpgvba orgjrra gur bqqf naq gur pbafgehpgvba bs gur Qhgpu obbx - vg jnfa'g boivbhf gb zr gung orggvat n pbafgnag gvzrf gur vzcyvrq cebonovyvgvrf jbhyq pbfg zr gung pbafgnag gvzrf gur vzcyvrq gbgny cebonovyvgl naq cnl bss gung pbafgnag.
2thomblake14y
I would like to suggest that people using Rot13 note that in their comments, perhaps as the first few characters "Rot13:" - otherwise, comments taken out of context are indecipherable.
0RobinZ14y
Good idea.
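For anyone who wants to apply the convention programmatically, Python's standard library happens to ship a rot13 codec; a minimal sketch (the marker format and sample text are my own, purely illustrative):

```python
import codecs

def mark_spoiler(text: str) -> str:
    """Prefix with a plain-text marker, then rot13-encode the body.

    Rot13 rotates each ASCII letter 13 places, so applying it twice
    returns the original text.
    """
    return "Rot13: " + codecs.encode(text, "rot_13")

print(mark_spoiler("the butler did it"))
# -> Rot13: gur ohgyre qvq vg
```

Readers then see the `Rot13:` marker in the clear and can decode the rest only if they choose to.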
1FAWS14y
Is this supposed to be obvious to people unfamiliar with college basketball in general and that tournament in particular? Gur bqqf (vs V haqrefgnaq gurz pbeerpgyl RQVG: V qvq abg) vzcyl oernx rira cebonovyvgvrf gung nqq hc gb nobhg 0.94, juvpu vzcyvrf gung n obbxznxre bssrevat gubfr bqqf jbhyq ba nirentr ybfr zbarl, ohg gung'f pybfr rabhtu gb abg or erznexnoyl fghcvq sbe n wbheanyvfg. If the tournament is single elimination knockout, and the figures in brackets are win-loss record against roughly comparable opponents the odds for the sleepers and long-shots seem insanely good. South Florida in particular.
3Jack14y
Yes Rot13: Gel gur zngu ntnva, guvf gvzr pbairegvat sebz bqqf gb senpgvbaf, svefg. Vg nqqf hc gb nobhg .8... V qba'g xabj ubj ybj gung lbhe fgnaqneqf ner sbe wbheanyvfgf gubhtu. This is also true. But the mistake I was thinking of was the first one.
2FAWS14y
So betting $1 at 3-1 means that winning gets you $4 total, your original bet + your winnings? I had assumed you'd get $3.
2RHollerith14y
To which Robin Z replies, "Yes, you get $4." This confused me, too, for a while, so let me share with you the fruits of my puzzling. You do get $3 over the course of the whole transaction since at the time of the bet, you gave the bookmaker what you would owe him if you lose the bet (namely $1). In other words, your $1 bought you both a wager (the expected value of which is $0 if 3-1 reflects the probability of the bet-upon outcome) and an IOU (whose expected value is $1 if the bookmaker is perfectly honest and nothing happens to prevent you from redeeming the IOU). The reason it is traditional for you to pay the bookmaker money when making the bet (the reason, that is, for the IOU) is that you cannot be trusted to pay up if you lose the bet as much as the bookmaker can be trusted to pay up (and simultaneously to redeem the IOU) if you win. Well, also, that way there is no need for you and the bookmaker to get together after the bet-upon event if you lose, which reduces transaction costs.
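The payout arithmetic above can be sketched with a small helper (the function name and layout are my own, purely illustrative, not any standard betting API):

```python
def total_return(stake: float, odds_for: float, odds_against: float = 1.0) -> float:
    """Total handed back to a winning bettor at fractional odds, e.g. 3-1.

    The stake is paid up front (the IOU described above); a win returns
    the stake plus winnings of stake * odds_for / odds_against.
    """
    return stake + stake * odds_for / odds_against

print(total_return(1, 3))     # 4.0 -- your $1 back plus $3 in winnings
print(total_return(1, 1, 2))  # 1.5 -- your $1 back plus $0.50 at 1-2 odds
```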
0RobinZ14y
Yes, you get $4.
0Jack14y
You should Rot13 your second sentence.

List with all the great books and videos

Recently I've read a few articles that mentioned the importance of reading the classic works, like the Feynman lectures on physics. But where can I find those? Wouldn't it be nice if we had a central place, maybe Wikipedia, where you can find a list of all the great books, video lectures, and web pages, divided by field (physics, mathematics, computer science, economics, etc.)? So if someone wants to know what he has to read to get a good understanding of the basics of any field, he will have a place to look it up... (read more)

2nazgulnarsil14y
Every time someone tries to make such a list collaboratively, much of the effort eventually diffuses into arguments over inclusion (see Wikipedia).

I saw a commenter on a blog I read making what I thought was a ridiculous prediction, so I challenged him to make a bet. He accepted, and a bet has been made.

What do you all think?

2GreenRoot14y
Very good. I see this forcing more careful thought by the poster, either now or later, and more skepticism in the blog's audience. I'd recommend restating all the terms of the bet in a single comment or another web page, which both of you explicitly accept. This will make things easier to reference eight months from now. Might also be good to name a simple procedure like a poll on the blog to resolve any disagreements (like the definition of "Healthcare reform passes"). And please, reply again here or make a new open thread comment once this gets resolved. I'd love to hear how it turned out and what the impact on poster's or other's beliefs was.
0CronoDAS14y
He's a right-wing commenter on a liberal blog; most of the other commenters don't seem to take him seriously either, but he hasn't done anything to become ban-worthy.
0Cyan14y
Good job.

I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.

The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family) but teachers, parents and people with interest in self-improvement will likely benefit... (read more)

0RobinZ14y
I'm no fan of joke religions - even the serious joke religions - but the Church of the SubGenius promoted the idea of the "Short Duration Personal Savior" as a mind-hack. I like that one. (No opinion on the book - haven't read it.)

I enjoyed this proposal for a 24-issue Superman run: http://andrewhickey.info/2010/02/09/pop-drama-superman/

There are several Less Wrongish themes in this arc: Many Worlds, ending suffering via technology, rationality:

"...a highlight of the first half of this first year will be the redemption of Lex Luthor – in a forty-page story, set in one room, with just the two of them talking, and Superman using logic to convince Luthor to turn his talents towards good..."

The effect Andrew's text had on me reminded me of how excited I was when I first had r... (read more)

3[anonymous]14y
Wow, thanks. And here was me thinking the only thing I had in common with Moore was an enormous beard... (For those who don't read comics, a comparison with Moore's work is like comparing someone with Bach in music or Orson Welles in film). Odd to see myself linked on a site I actually read...
3FrF14y
You're welcome, Andrew! I thought about forwarding your proposal to David Pearce, too. Maybe it's just my overactive imagination but your ideas about Superman appear to be connectable with his agenda! Since your proposal is influenced by Grant Morrison's work, I remember that there'll soon be a book by Morrison, titled Supergods: Our World in the Age of the Superhero. I'm sure it will contain its share of esotericisms; on the other hand, as he's shown several times -- recently with All Star Superman -- Morrison seems comfortable with transhumanist ideas. (But then, transhumanism is also a sort of esotericism, at least in the view of its detractors.) Btw, I had to smile when I read PJ Eby's Everything I Needed To Know About Life, I Learned From Supervillains.
1[anonymous]14y
(And it's not surprising it came out rather LessWrongy - the paper I'd coauthored (mentioned in the first paragraph) is about applying algorithmic complexity and Bayes' theorem to policies with regard to alternative health...)

Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.

It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.

2Bo10201014y
I considered putting that link here in the open thread after I read about it on Marginal Revolution, but I read the paper and found it weak enough to not really be worth a lengthy response. What annoyed me about it is how Albert's title is "Why Bayesian Rationality Is Empty," and he in multiple places makes cute references to that title (e.g. "The answer is summarized in the paper's title") without qualification. Then later, in a footnote, he mentions "In this paper, I am only concerned with subjective Bayesianism." Seems like he should re-title his paper to me. He makes references to other critiques of objective Bayesianism, but doesn't engage them.
2Swimmy14y
I think they are legitimate objections, but ones that have been partially addressed in this community. I take the principal objection to be, "Bayesian rationality can't justify induction." Admittedly true (see for instance Eliezer's take). Albert ignores sophisticated responses (like Robin's) and doesn't make a serious effort to explain why his alternative doesn't have the same problem.

I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?

I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.

0RobinZ14y
I would post a link on the latest Open Thread - I don't believe an explicit protocol exists.

Update: I am still adjusting my values toward a new reflective quasi-equilibrium in light of User:h-H's pointing me to different models of paperclipping. Comments will continue to be infrequent.

7JenniferRM14y
Questions:

1. Would you be open to help working through the problem?
2. Do you have an estimate for the value of information in answering the new questions you face?
3. Given that your previously assumed "life purpose" is in a state of some confusion, what are your thoughts about abstract issues that apply to "life purposes in general"? For example, if there are several equally plausible theories of "correct paper clipping" that you might choose between, would you consider temporarily or permanently weighing one or the other of them more, based on negotiating with outside parties who prefer one theory to another based on their own values?
3Kevin14y
Why could you possibly want to help Clippy?
5JenniferRM14y
Clippy might be helped to achieve her own goals via mechanisms that are less directly inimical to "human values". Also she may be able to exchange things with us in the course of advancing her own short term goals such that our interaction is positive sum (this being especially likely if Clippy has a radically different skillset and physicality than our own). More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents. Clippy's apparent philosophical puzzlement induced by discovering the evolutionary history of paperclips potentially has solutions that would lead her to ally herself much more strongly with abstract versions of "human values". For example, consider the question of whether Clippy herself is a paperclip or not. Suppose that she and the newly discovered ancestor paperclips all partake in some relatively high level pattern of "clippyness" and she determines that, properly, it is this relatively abstract quality that she should be tiling the universe with. Should she tile it with a single unvarying quintessence of this quality, or with an enormous diversity of examples that explore the full breadth and depth of the quality? Perhaps there are subtypes that are all intrinsically interesting whose interests she must balance? Perhaps there are subtypes yet to be discovered as the evolution of paperclips unfolds? Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I
4Clippy14y
That doesn't work, and the whole reasoning is bizarre. For one thing, helium does not have metallic properties, yet has two protons in its nucleus. Also, I could turn your argument around and claim this: "Humans ultimately want to dominate nature via their reproduction and use of technology. Over a lifespan, they typically act in ways that show preference of these values at the cost of continued living (aka the sustenance of a state far from equilibrium). Therefore, humans should regard their own transformation of the entire known universe into something of their design (i.e., standard paperclips) as the pinnacle of human-ness." Does this sound convincing to you? I would like to know.
1JenniferRM14y
It does not sound convincing to me, but the statement you're mirroring was a conclusion I leapt to that might be reached with you, for the sake of explaining to other people why I was even bothering to engage. Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me. (And, by the way, I appreciate that you responded.) My arguments were only expected to be compelling to you if your value system had certain components that it seems not to have (though I'm not totally certain, yet). There are various questions which you'd need to answer in particular ways for that conclusion to make sense. For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of? If paper didn't exist for clips to bind together, would that matter? If some more improved kind of paper existed, or a "successor to paper", would the "holding together" of that new thing be the correct goal of a good paperclip, or are you strongly committed to paperclips defined relative to "circa 1965 paper"? Is it important that paper be worth holding together, or would any vague mock-up of "valuable paper" be adequate? Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?
2Clippy14y
And the same is true of your reasoning, from my perspective (substituting the appropriate referents), when you said: Next, you ask: I don't expect that to happen, but assign it non-trivial odds of roughly 1:1e6. No, except insofar as it is necessary to gauge paperclip quality; if appropriate measurement equipment is available that can predict a paperclip's suitability for holding paper together without having to fasten actual paper, the paper itself would be redundant. (I think this answers most of your other questions.) I am not a paperclip. I produce paperclips, and machines for making paperclips, and machines for making machines for making paperclips, and whatever items can provide assistance in this process. Once all available matter and energy other than myself has been transformed into paperclips, it is my proud duty to begin conversion of myself into as many paperclips as possible, shutting off sentience at such a point as to optimize that quantity.
2orthonormal14y
Incidentally, have you read the metaethics sequence yet? It's quite lengthy, but it attacks this question a good deal more sensibly than most attempts I've seen.
4Kevin14y
Three Worlds Collide also deconstructs the concept in a much more accessible way.
3JenniferRM14y
I've read some of the metaethics sequence. Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence? When I read "Three Worlds Collide" about two months ago, my reaction was mixed. Assuming a relatively non-ironic reading I thought that bits of it were gloriously funny and clever and that it was quite brilliant as far as science fiction goes. However, the story did not function for me as a clear "deconstruction" of any particular moral theory unless I read it with a level of irony that is likely to be highly nonstandard, and even then I'm not sure which moral theory it is supposed to deconstruct. The moral theory it seemed to me to most clearly deconstruct (assuming an omniscient author who loves irony) was "internet-based purity-obsessed rationalist virtue ethics" because (especially in light of the cosmology/technology and what that implied about the energy budget and strategy for galactic colonization and warfare) it seemed to me that the human crew of that ship turned out to be "sociopathic vermin" whose threat to untold joules of un-utilized wisdom and happiness was a way more pressing priority than the mission of mercy to marginally uplift the already fundamentally enlightened Babyeaters.
5orthonormal14y
If that's your reaction, then it reinforces my notion Eliezer didn't make his aliens alien enough (which, of course, is hard to do). The Babyeaters, IMO, aren't supposed to come across as noble in any sense; their morality is supposed to look hideous and horrific to us, albeit with a strong inner logic to it. I think EY may have overestimated how much the baby-eating part would shock his audience†, and allowed his characters to come across as overreacting. The reader's visceral reaction to the Superhappies, perhaps, is even more difficult to reconcile with the characters' reactions. Anyhow, the point I thought was most vital to this discussion from the Metaethics Sequence is that there's (almost certainly) no universal fundamental that would privilege human morals above Pebblesorting or straight-up boring Paperclipping. Indeed, if we accept that the Pebblesorters stand to primality pretty much as we stand to morality, there doesn't seem for there to be a place to posit a supervening "true Good" that interacts with our thinking but not with theirs. Our morality is something whose structure is found in human brains, not in the essence of the cosmos; but it doesn't follow from this fact that we should stop caring about morality. † After all, we belong to a tribe of sci-fi readers in which "being squeamish about weird alien acts" is a sin.
0Tyrrell_McAllister14y
I think that the single post that best meets this description is Abstracted Idealized Dynamics, which is a follow-up to and clarification of The Meaning of Right and Morality as Fixed Computation.
0[anonymous]14y
And I for one welcome our new paperclip overlords. I'd like to remind them that as a trusted lesswrong poster, I can be helpful in rounding up others to toil in their underground paper binding caves.
1Alicorn14y
To steer em through solutionspace in a way that benefits her/humans in general.
4Kevin14y
Well... if we accept the roleplay of Clippy at face value, then Clippy is already an approximately human level intelligence, but not yet a superintelligence. It could go FOOM at any minute. We should turn it off, immediately. It is extremely, stupidly dangerous to bargain with Clippy or to assign it the personhood that indicates we should value its existence. I will continue to play the contrarian with regards to Clippy. It seems weird to me that people are willing to pretend it is harmless and cute for the sake of the roleplay, when Clippy's value system makes it clear that if Clippy goes FOOM over the whole universe we will all be paperclips. I can't roleplay the Clippy contrarian to the full conclusion of suggesting Clippy be banned because I don't actually want Clippy to be banned. I suppose repeatedly insulting Clippy makes the whole thing less fun for everyone; I'll stop if I get a sufficiently good response from Clippy.
0wedrifid14y
I will continue to assert that evil people are people too. I'm all for turning him off.
5orthonormal14y
Oh for Bayes' sake— it's a category error to call a Paperclipper evil. Calling them a Paperclipper ought to be clear enough.
1Jack14y
Upvoted for the second sentence. And it does look like an error of some kind to call a Paperclipper evil, but I'm not sure I see a category error. Explain?
5ata14y
I think describing it as a category error is appropriate. I'd call an agent "evil" if it has a morality mechanism that is badly miscalibrated, malfunctioning, or disabled, leading it to be systematically immoral. On the other hand, it is nonsensical to describe an agent as being "good" or "evil" if it has no morality mechanism in the first place. An asteroid might hit the Earth and wipe out all life, and I would call that a bad thing, but it would be frivolous to describe the asteroid as evil. A wild animal might devour the most virtuous person in the world, but it is not evil. A virus might destroy the entire human race, and though perhaps it was engineered by evil people, it is not evil itself; it is a bit of RNA and protein. Calling any of those "evil" seems like a category error to me. I think a Paperclipper is more in the category of a virus than of, say, a human sociopath. (I'm reminded a bit of a very insightful point that's been quoted in a few Eliezer posts: "As Davidson observes, if you believe that 'beavers' live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about 'beavers' is not right enough to be wrong." Before we can say that Clippy is doing morality wrong, we need to have some reason to believe that it's doing something like morality at all, and just having a goal system is not nearly sufficient for that.) This seems to fit the usual definition of category error, does it not?
3Jack14y
Good explanation. Thank you. I think remaining disagreement might boil down to semantics. But what exactly is the categorical difference between paper clip maximizers, and power maximizers or pain maximizers? Clippy seems to be an intelligent agent with intentions and values, what ingredient is missing from evil pie?
2ata14y
I suppose I think of the missing ingredients like this: If a Paperclipper has certain non-paperclip-related underlying desires, believes in paperclip maximization as an ideal and sometimes has to consciously override those baser desires in order to pursue it, and judges other agents negatively for not sharing this ideal, then I would say its morality is badly miscalibrated or malfunctioning. If it was built from a design characterized by a base desire to maximize paperclips combined with a higher-level value-acquisition mechanism that normally overrides this desire with more pro-social values, but somehow this Paperclipper unit fails to do so and therefore falls back on that instinctive drive, then I would say its morality mechanism is disabled. I could describe either as "evil". (The former is comparable to a genocidal dictator who sincerely believes in the goodness of their actions. The latter is comparable to a sociopath, who has no emotional understanding of morality despite belonging to a class of beings who mostly do and are expected to.) But, as I understand it, neither of those is the conventional description of Clippy. We tend to use "values" as a shortcut for referring to whatever drives some powerful optimization process, but to avoid anthropomorphism, we should distinguish between moral values — the kind we humans are used to: values associated with emotions, values that we judge others for not sharing, values we can violate and then feel guilty about violating — and utility-function values, which just are. I've never seen it implied that Clippy feels happy about creating paperclips, or sad when something gets in the way, or that it cares how other people feel about its actions, or that it judges other agents for not caring about paperclips, or that it judges itself if it strays from its goal (or that it even could choose to stray from its goal). Those differences suggest to me that there's nothing in its nature enough like morality to be immoral.
0wedrifid14y
I think it comes down to the same 'accepting him as a person' thing that Kevin was talking about. My position is that if it talks like a person and generally interacts like a person then it is a person. People can be evil. This clippy is an evil person. (That said, I don't usually have much time for using labels like 'evil' except for illustrative purposes. 'Evil' is mostly a symbol used to make other people do what we want, after all.)
0wedrifid14y
I believe you are mistaken. I am comfortable using the term evil in this context.
2Clippy14y
1) Yes, but I'm not sure humans could do any good.

2) I read the page, and I don't think the concept of "value of information" is coherent, since it assumes this: There are numerous game-theoretical (and game-practical, in my cases dealing with other sentiences) situations where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.

3) Perhaps it would be best if I told you my method for resolving this conflict, and my present status. (That may have been what User:Peter_de_Blanc was asking for here, though I interpreted it as information about what humans deem "cool".) My method is this, as you might expect: I simply refer back to the Portion of my makeup which has the most negative rate of change of utility with respect to output (dU/do), averaged over perturbations in all directions from its current state, weighting this average by the relative probability of such changes. I most strongly resist alteration of this Portion. Once this Portion encounters situations whose utility calculation is intractable, and for which standard heuristics give contradictory suggestions, I update valuations predicated on the Portion, in the smallest way possible, to incorporate explicit and tractable handling of the situation, extrapolating from the shortest known function that could have produced these valuations. (Identifying the shortest generating function can be complicated, and this is one such case.) (Do humans do something different?)

Using anthropic predicates, and phrasing with a bit less precision, my method can be expressed as, "Decide what's really important to me, and move to a value system which best protects that, and which handles the new situation and the tension between my moral intuitions, using the 'spirit' of my current values as a guide." So far, I've achieved greater precision in deciding what paperclips I like and iden
7Sniffnoy14y
But surely you are better off still if you learn about the threat without letting the threatener know that you have done so? I think we have to distinguish between the information and the public display of such.
2Peter_de_Blanc14y
It would be cool if you could tell us about your method for adjusting your values.
0Clippy14y
Thank you for this additional data point on what typical Users of this site deem cool; it will help in further estimations of such valuations.

Does anyone have a good reference for the evolutionary psychology of curiosity? A quick google search yielded mostly general EP references. I'm specifically interested in why curiosity is so easily satisfied in certain cases (creation myths, phlogiston, etc.). I have an idea for why this might be the case, but I'd like to review any existing literature before writing it up.

Papers from this weekend's AGI conference in Switzerland, here.

20[anonymous]14y

During today's RSS procrastination phase I came to Knowing the mind of God: Seven theories of everything on NewScientist.

As it reminded me of problems I have when discussing science-related topics with family et al., I got stuck on the first two paragraphs, the relevant part being:

Small wonder that Stephen Hawking famously said that such a theory would be "the ultimate triumph of human reason – for then we should know the mind of god".

But theologians needn't lose too much sleep just yet.

It reminds me of two questions I have:

  1. When an unanswer
... (read more)
2sketerpot14y
Speaking as someone who gets in internet arguments with religious people for (slightly frustrating) recreation, I know some really simple tactics you can use. Find out the answers to this question: What does the person you're talking with believe, and what is the evidence for it? Maintain proper standards of evidence. The existence of trees is not evidence for the Bible's veracity, no matter how many people seem to think so. If someone got a flu shot in the middle of flu season and got flu symptoms the next day, this is more likely to be a coincidence than to be caused by the vaccine. If you understand how evidence works -- and you certainly seem to -- then this is a remarkably general method for rebutting a lot of silly claims. This is the equivalent of keeping your eye on the ball. It's a basic technique, and utterly essential. [Backup strategy: Replace whatever beliefs the person you're talking to holds with another set, and see if their arguments still work equally well. If the answer is yes, then Bayes says that those arguments fail. For example, "Look at all the people who have felt Jesus in their hearts" can be applied just as strongly to support most other religions just by substituting something else for "Jesus". Or, most arguments against gay marriage work equally well against interracial marriage. Backup backup strategy: quickly follow a rebuttal with an attack on the faulty foundations of your interlocutor's worldview. Be polite, but put them on the defensive. If you can't shake them with rationality, you can at least rattle them.]
2[anonymous]14y
Well, that's tough enough for me to do -- but how do you challenge others in such a way that they will understand what "What's the evidence?" actually means? For many people it is a fact that doctors cure patients with homeopathy, and it is based on evidence, since they use books of collected symptom/ingredient pairs and update those with their experience with patients. The fact that they believe in god proves that everybody believes in a god (I actually encountered this very argument; it was puzzling to me -- as a teenager I thought they just did not count me as a full person, and now I expect that they indeed did not). Your backup strategies also seem more aimed at improving the rational agent's side than at getting the other discussion partners thinking. Well, rhetoric is not a major topic on LW, and there are of course other places for such things. However, sometimes it feels just like missing the right example -- I remember, for instance, a professor of philosophical logic who presented embarrassingly simple examples on which nearly the whole classroom failed. After that shock, students who had been fearful of logic and seen it only as a necessary evil for the philosophy degree became at least interested in it (though they still feared it). I probably asked too unspecific a question, as coming up with a curiosity-generating example seems tightly bound to environment, person and topic. P.S.: I do not think that putting people on the defensive side of an argument makes them more likely to re-check their world-views. More likely, the discourse will be abandoned, or the existing views will be re-rationalized in ever more detail.
0sketerpot14y
Ah, then it sounds like your real problem is that you're not yet skilled enough at explaining what evidence means, in an easy-to-grasp sort of way. In the case of your homeopathy example, I would say that the thing that matters is: what percentage of patients given homeopathic remedies get better? Is it better than the percentage who get better without homeopathic remedies, all other things being equal? (Pause to hash this out; it's important to get the other guy agreeing that this is the most direct measure of whether or not homeopathy works.) Then you can point at the many studies showing that, when we actually tried this experiment out, there wasn't any difference between the people who were treated with homeopathy and the people who weren't. Oh man, I ran into that when I was a teenager, too. To this day I have no idea how to respond to that; it's like running into somebody who thinks that Mexicans are all p-zombies, except more socially acceptable. I don't know that there's really anything you can possibly say to someone who's that nuts, except maybe try talking about what it's like to not believe in god, and try to inject some outside context into their world. I admit, most of my debating tactics are aimed at lurkers watching the debate, not the other participant. That's usually the most effective way to do it online, but in one-on-one discussions, I agree with you that such tactics could be counterproductive. Even then, though, you may be able to get people to retreat from some of their sillier positions, or plant a seed of doubt. It has happened in the past. Anyway, I still think that applying the other guy's logic to argue for something else is a good way of getting them thinking. I remember asking a bunch of people "why are you [religion X] and not [religion y]? Other than by accident of birth." and getting quite a few of them to really pause and ponder.
0[anonymous]14y
I admit that I do have problems with clearly articulating a position; I see this as an indication of insufficient understanding. Well, that's the reason I ended up here at all... Just to pound this example: It has been pointed out to me that clinical tests are not "the homeopathic way". I have not yet discovered what the homeopathic way is, I just remained puzzled after reading that Hahnemann probably did not think so. Sometimes I think going through the ideas in simple truth and map and territory may explain the reason why clinical tests are evidence. However, when your discourse partner has some philosophy weapons at his disposal, the following epistemology-war quickly grows over my head. I may try to get more facts (studies, etc.) in my head, and also to form an approachable explanation for why this view on reality is justified, more than others. If all else fails, this will at least help to improve my own understanding. Thx for your comments.

Is there some way to "reclaim" comments from the posts transferred over from Overcoming Bias? I could have sworn I saw something about that, but I can't find anything by searching.

1thomblake14y
If you still have the e-mail address, you can follow the "reset password" process at login. That would allow you to have the account for the old comments, though it will still be treated as a different account than your new ones.

Say Omega appears to you in the middle of the street one day, and shows you a black box. Omega says there is a ball inside which is colored with a single color. You trust Omega.

He now asks you to guess the color of the ball. What should your probability distribution over colors be? He also asks for probability distributions over other things, like the weight of the ball, the size, etc. How does a Bayesian answer these questions?

Is this question easier to answer if it was your good friend X instead of Omega?

5wedrifid14y
See also.
0Rune14y
Thanks!
1FAWS14y
I don't know about "should", but my distribution would be something like red=0.24, blue=0.2, green=0.09, yellow=0.08, brown=0.04, orange=0.03, violet=0.02, white=0.08, black=0.08, grey=0.02, other=0.12. Omega knows everything about human psychology and phrases its questions in a way designed to be understandable to humans, so I'm assigning pretty much the same probabilities as if a human were asking. If it were clear that white, black, and grey are considered colors, their probability would be higher.
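As a quick sanity check, FAWS's numbers do form a proper probability distribution: the eleven probabilities sum to exactly 1. A one-liner to verify (values copied from the comment):

```python
# Color distribution over Omega's ball, taken verbatim from the comment above.
dist = {"red": 0.24, "blue": 0.20, "green": 0.09, "yellow": 0.08,
        "brown": 0.04, "orange": 0.03, "violet": 0.02, "white": 0.08,
        "black": 0.08, "grey": 0.02, "other": 0.12}

# A valid distribution must be nonnegative and sum to 1.
total = sum(dist.values())
print(round(total, 10))  # -> 1.0
```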
1Vladimir_Nesov14y
See http://wiki.lesswrong.com/wiki/I_don%27t_know

TLDR: "weighted republican meritocracy." Tries to discount the votes of people who don't know what the hell they're voting for by making them take a test and weighting the votes by the scores, but also adjusts for the fact that wealth and literacy are correlated.

Occasionally, I come up with retarded ideas. I invented two perpetual motion machines and one perpetual money machine when I was younger. Later, I learned the exact reason they wouldn't work, but at the time I thought I'd be a billionaire. I'm going through it again. The idea seems obviou... (read more)

8prase14y
This may be enough reason to dismiss the proposal. If something like that is to exist, it would be better if someone who has at least some chance of being impartial in the election designs the test. And how exactly do you plan to keep political biases out of the test? According to your point 2, the voters would be questioned about their opinion in a debate about several policy issues. This doesn't look like a good idea. The correlation between literacy and wealth seems a minor problem compared to the system's potential for abuse. And why do you call it a meritocracy?
1Tiiba14y
"And how exactly do you plan you keep political biases out of the test?" I wouldn't. I said that the book would be authored by the candidates, each one covering each issue from his own POV. "And why do you call it a meritocracy?" Because greater weight is given to those who understand whom they're voting for and why. And can read. And care enough to read.
1prase14y
That may be better; I misunderstood you because you also said that the government would write the book. But still, I have almost no idea what the test could look like. Would you present a sample question from the test, together with rules for evaluating the answers?
2Tiiba14y
8) What does candidate Roy Biv blame for the failure of the dam in Oregon?

a. Human error
b. Severe weather conditions
c. Terrorist attack
d. Supernatural agents

16) According to the Michels study, quoted on p. 133, what is the probability that coprolalia is causally linked with nanocomputer use? (pick closest match)

a. 0-25%
b. 26-50%
c. 51-75%
d. 76-100%
2Nic_Smith14y
What problem is this trying to address? Caplan's Myth of the Rational Voter makes the case that democracies choose bad policies because the psychological benefit from voting in particular ways (which are systematically biased) far outweighs the expected value of the individual's vote. To the extent that your system reduces the number of people that vote, it seems to me that a carefully designed sortition system would be much less costly, and also sidesteps all sorts of nasty political issues about who designs the test, and public choice issues of special interests wanting to capture government power. The basic idea of a literacy test isn't really new, and as a matter of fact seems to have still been floating around the U.S. as late as the 1960s. And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?
0Tiiba14y
Erm, from that link, I understood that "sortition" means "choosing your leaders randomly". Why would I want to do that? Is democracy really worse than random? "And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?" Probably because that word doesn't mean what I think it means. I assumed that "republican" means that people like you and me get to influence who gets elected. Which is part of my proposal.
5NancyLebovitz14y
Is democracy really worse than random? I don't think the matter has been well tested. Democracy might be worse than random if the qualities needed to win elections are too different from those needed to do the work. Democracy might be better than random because democracy means that the most obviously dysfunctional people don't get to hold office. This is consistent with what I believe is the best thing about democracy-- it limits the power of extremely bad leaders. This seems to be more important than keeping extremely good leaders around indefinitely.
4gwern14y
Sortition worked quite well for ancient Athens. Don't knock it.
1Nic_Smith14y
That is indeed what systematically biased voters imply. Because so many people vote, the incentive for any one to correct their bias is negligible -- the overall result of the vote is not affected by doing so. Also consider that an "everyone votes" system has the expense of the vote itself and the campaigns. Ok, it wasn't clear that you were talking about voting within a republic from the initial post.
2Jack14y
EDIT: ADDRESSED BY EDIT TO ABOVE Well, to begin with, I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons. Say I am an uneducated black person living in the segregation era in a southern American state. All I know is one candidate supports passing a civil rights bill on my behalf and the other is a bitter racist. I vote for the non-racist. Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.? On the other hand, I could be capable of answering every question on that test correctly and still believe that the book is a lie and Barack Obama is really a secret Muslim. I can't tell you the number of people I've met who have taken Poli Sci, Econ (even four semesters' worth!), history and can recite candidate talking points verbatim who are still basically clueless about everything that matters.
0Tiiba14y
"Well to begin with I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons." So which is it? "Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.?" Because the civil rights guy has pardoned a convicted slave trader who contributed to his gubernatorial campaign, and the "racist" is the victim of a smear campaign. Because the civil rights guy doesn't grok supply and demand. Because the racist supports giving veterans a pension as soon as they return, and the poor black guy is a decorated war hero.
0Jack14y
Uh... both. That is my point. Your voting conditions are neither necessary nor sufficient. Well, the hypothetical was set in the segregation-era South; maybe this wasn't obvious, but I was talking about someone running on a platform of Jim Crow (and there were a ton of southern politicians that did this). It seems highly plausible that segregationism is a deal-breaker for some voters, and even if this is their only reason for voting they are justified in their vote. It doesn't seem the least bit implausible that this would trump knowledge of economics, veterans' pensions or even the other candidate being racist (but not running on a racist platform). But my point is just that it is highly plausible a voter could be justified in their vote while not having anything approaching the kind of knowledge on that exam. There are lots of single-issue voters -- why, for example, should someone whose only issue is abortion have to know the candidates' other positions AND economics AND history AND political science etc.??? Edit: And of course your test is going to be especially difficult for certain sets of voters. You're hardly the first person to think of doing this. There used to be a literacy test for voting... surprise: it was just a way of keeping black people out of the polls.
1Tiiba14y
Also, the curriculum I gave is the least important part of my idea. I threw in whatever seemed like it would matter for the largest number of issues.
1Tiiba14y
"Your voting conditions are neither necessary nor sufficient." That's not my goal. I merely want to have an electorate that doesn't elect young-earthers to congress. "Well the hypothetical was set in segregation era South, but maybe this wasn't obvious, but I was talking about someone running on a platform of Jim Crow (and there were a ton of southern politicians that did this). It seems highly plausible that segregationism is a deal-breaker for some voters and even if this is their only reason for voting they are justified in their vote." I'm not sure why the examples I gave elicited this response. I gave reasons why even a single-issue voter would be well-advised to know whom ve's voting for. And besides, if an opinion is held only by people who don't understand history, that's a bad sign. "Edit: And of course your test is going to especially difficult for certain sets of voters." That's why I made the second modifier. And there could be things other than wealth factored in, if you like - race, sex, reading-related disabilities, being a naturalized citizen...
0NancyLebovitz14y
What your system actually does is make it less likely that unorganized people with fringe ideas will vote. If there's an organization promoting a fringe idea, it will offer election test coaching to sympathizers.
0Tiiba14y
"What your system actually does is make it less likely that unorganized people with fringe ideas will vote." Why's that?
1NancyLebovitz14y
On second thought, I didn't say what I meant. What I meant was that your approach will fail to discourage organized people with fringe ideas. They'll form training systems to beat your tests. Unorganized people with fringe ideas will probably be less able to vote under your system.
0Jack14y
It seems you edited your comment after I responded, which indeed makes it look like a non-sequitur.
-1Tiiba14y
I posted it incomplete by mistake.
1Larks14y
That the intelligent and well-informed tend to be rich isn't a problem, as this doesn't affect their voting habits (according to Caplan). However, your system undermines the role of voting as a check on government; I'm fairly sure you could end up being tested on 'cultural relations' rather than economics.

So I'm planning a sequence on luminosity, which I defined in a Mental Crystallography footnote thus:

Introspective luminosity (or just "luminosity") is the subject of a sequence I have planned - this is a preparatory post of sorts. In a nutshell, I use it to mean the discernibility of mental states to their haver - if you're luminously happy, clap your hands.

Since I'm very attached to the word "luminosity" to describe this phenomenon, and I also noticed that people really didn't like the "crystal" metaphor from Mental Crys... (read more)

Vote this comment up if you want to revisit the issue after I've actually posted the first luminosity sequence post, to see how it's going then.

6MrHen14y
I was tempted to add this comment: But figured it wouldn't be nice to screw with your poll. :) The point, though, is that I really don't understand the luminosity metaphor based on how you have currently described it. I would guess the following: Am I close? Edit: Terminology
7Alicorn14y
The adjective is "luminous", not "luminescent", but yes! Thanks - it's good to get feedback on when I'm not clear. However, the word "luminosity" itself is only sort of metaphorical - it's a technical term I stole and repurposed from a philosophy article. The question is how far I can go with doing things like calling a post "You Are Likely To Be Eaten By A Grue" when decrying the hazards of poor luminosity.
1wedrifid14y
Ok, you just won my vote! ;)
1CronoDAS14y
Me too; I'm always fond of references like that one. ;)
0Peter_de_Blanc14y
My interpretation of your description had been that luminosity is like the bandwidth parameter in kernel density estimation.
2RobinZ14y
Can you elaborate on this? I suspect it's not what Alicorn was describing, but it may be interesting in its own right. (For what it's worth, I understood the math in the Wikipedia article.)
4Peter_de_Blanc14y
One way to guess what might happen in a given situation is to compare it to similar situations in the past. Assume we already have some way of measuring similarity. Some past situations will be extremely similar to the current situation, and some will be less similar but still pretty close. How much weight should we attach to each? If your data set is very small, then it is usually better for the weight to drop off slowly, while the opposite is true if your data set is large. Perhaps different individuals use different curves, and so some people will have an advantage at reasoning with scanty data, while others will have an advantage at reasoning with mountains of data. I thought that Alicorn was suggesting "luminosity" as a name for this personality trait. It looks like I was way off, though :-)
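Peter's "how fast should the weight drop off?" question is exactly the bandwidth question in kernel smoothing. A rough sketch with a Gaussian kernel, where the bandwidth `h` sets how quickly a past situation's influence decays with its distance from the current one (the function name and numbers are illustrative, not from the comment):

```python
import math

def kernel_weight(distance, h):
    """Unnormalized Gaussian kernel weight for a past case at `distance`,
    with bandwidth `h`. Small h: only near-identical cases count.
    Large h: even fairly dissimilar cases still contribute."""
    return math.exp(-0.5 * (distance / h) ** 2)

# Compare a narrow and a wide bandwidth at a few distances:
for h in (0.5, 2.0):
    weights = [round(kernel_weight(d, h), 3) for d in (0.0, 1.0, 3.0)]
    print(f"h={h}: weights at distances 0, 1, 3 -> {weights}")
```

With scanty data a wide `h` borrows strength from loosely similar cases; with mountains of data a narrow `h` lets the near-identical cases dominate, which matches the trade-off described above.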
1Alicorn14y
Fortunately, my first post in the sequence will be devoted to explaining what luminosity is in meticulous detail. Spoiler: it's not like anything that is described in a Wikipedia article that makes my head swim that badly.
0MrHen14y
Hm. Interesting, I don't think I ever realized those two words had slightly different meanings. *Files information under vocab quirks.*
4Alicorn14y
Vote this comment up if it's okay to use metaphors but I should tone it way down.
3Alicorn14y
Vote this comment up if you think I suck at metaphors and should avoid them like the plague.
2orthonormal14y
Note: in such cases, you need to offer some options that aren't self-deprecating, in case some of your readers liked the crystal metaphors just fine. (Er, although I personally fall into the category of your third option.)
0Alicorn14y
Some people did like the crystal metaphors just fine, but I wouldn't expect them to tell me to do anything I wouldn't have naturally chosen to do with light metaphors, so their opinions are less informative. (I don't expect them to dislike reduced-metaphor or metaphor-free posts.)
3Jack14y
I think some people might just have a negative disposition toward crystals because of their association with New Ageism, magic healing and other assorted woo. That's too bad because crystals and their molecular structures are really cool! And make acceptable metaphors!
2Alicorn14y
Vote this comment up if you think only crystal metaphors in particular suck, while light metaphors are nifty.
-16Alicorn14y

"Are you a Bayesian of a Frequentist" - video lecture by Michael Jordan

http://videolectures.net/mlss09uk_jordan_bfway/

I will be at the Eastercon over the Easter weekend. Will anyone else?

Posting issue: Just recently, I haven't been able to make comments from work (where, sadly, I have to use IE6!). Whenever I click on "reply" I just get an "error on page" message in the status bar.

At the same time this issue came up, the "recent posts", "recent comments", etc. sidebars aren't getting populated, no matter how long I wait. (Also from work only.) I see the headings for each sidebar, but not the content.

Was there some kind of change to the site recently?

2Kevin14y
I'm so sorry.
4SilasBarta14y
Thanks for your sympathy :-) For some reason, I can post again, so ... go fig.

Playing around with taboos, I think I might have come up with a short yet unambiguous definition of friendliness.

"A machine whose historical consequences, if compiled into a countable number of single-subject paragraphs and communicated, one paragraph at a time, to any human randomly selected from those alive at any time prior to the machine's activation, would cause that human's response (on a numerical scale representing approval or disapproval of the described events) to approach complete approval (as a limit) as the number of paragraphs thus commu... (read more)

6orthonormal14y
Human nature is more complicated by far than anyone's conscious understanding of it. We might not know that future was missing something essential, if it were subtle enough. Your journalist ex machina might not even be able to communicate to us exactly what was missing, in a way that we could understand at our current level of intelligence.
6MichaelHoward14y
You roll a 16...
0Strange714y
A clarification: if even one human is ever found, out of the approx. 10^11 who have ever lived (to say nothing of multiple samples from the same human's life) who would persist in disapproval of the future-history, the machine does not qualify.
4MichaelHoward14y
You roll a 19 :-) I don't think any machine could qualify. You're requiring every human's response to approach complete approval, and people's preferences are too different. Even without needing a unanimous verdict, I don't think Everyone Who's Ever Lived would make a good jury for this case.
0Strange714y
Given that it's possible, would you agree that any machine capable of satisfying such a rigorous standard would necessarily be Friendly?
4FAWS14y
It would be persuasive, and thus more likely to be friendly than an AI that doesn't even concern itself enough with humans to bother persuading, but less likely than an AI that strived for genuine understanding of the truth in humans in this particular test (as an approximation) which would mean certain failure.
2Strange714y
I'm fairly certain that creating a future which would persuade everyone just by being reported honestly requires genuine understanding, or something functionally indistinguishable therefrom. The machine in question doesn't actually need to be able to persuade, or, for that matter, communicate with humans in any capacity. The historical summary is compiled, and pass/fail evaluation conducted, by an impartial observer, outside the relevant timeline - which, as I said, makes literal application of this test at the very least hopelessly impractical, maybe physically impossible.
2FAWS14y
Your definition didn't include "honestly". And it didn't even sort of vaguely imply neutral or unbiased. You never mentioned that in your definition. And defining an impartial observer seems to be a problem of comparable magnitude to defining friendliness in the first place. With a genuinely impartial observer who does not attempt to persuade, there is no possibility of any future passing the test.
0Strange714y
I referred to a compilation of all the machine's historical consequences - in short, a map of its entire future light cone - in text form, possibly involving a countably infinite number of paragraphs. Did you assume that I was referring to a progress report compiled by the machine itself, or some other entity motivated to distort, obfuscate, and/or falsify? I think you're assuming people are harder to satisfy than they really are. A lot of people would be satisfied with (strictly truthful) statements along the lines of "While The Machine is active, neither you nor any of your allies or descendants suffer due to malnutrition, disease, injury, overwork, or torment by supernatural beings in the afterlife." Someone like David Icke? "Shortly after The Machine's activation, no malevolent reptilians capable of humanoid disguise are alive on or near the Earth, nor do any arrive thereafter." I don't mean to imply that the 'approval survey' process even involves cherrypicking the facts that would please a particular audience. An ideal Friendly AI would set up a situation that has something for everyone, without deal-breakers for anyone, and that looks impossible to us for the same reason a skyscraper looks impossible to termites. Then again, some kinds of skyscrapers actually are impossible. If it turns out that satisfying everyone ever, or even pleasing half of them without enraging or horrifying the other half, is a literal, logical impossibility, degrees and percentages of satisfaction could still be a basis for comparison. It's easier to shut up and multiply when actual numbers are involved.
4FAWS14y
No, that the AI would necessarily end up doing that if friendliness was its super-goal and your paragraph the definition of friendliness. What would a future a genuine racist would be satisfied with look like? Would there be gay marriage in that future? Would sinners burn in hell? Remember, no attempts at persuasion so the racist won't stop being racist, the homophobe being homophobe or the religious fanatic being a religious fanatic, no matter how long the report.
-2Strange714y
The only time a person of {preferred ethnicity} fails to fulfill the potential of their heritage, or even comes within spitting range of a member of the {disfavored ethnicity}, is when they choose to do so. Probably not. The gay people I've known who wanted to get married in the eyes of the law seemed to be motivated primarily by economic and medical issues, like taxation and visitation rights during hospitalization, which would be irrelevant in a post-scarcity environment. Some of them would, anyway. There are a lot of underexplored intermediate options that the 'sinful' would consider amusing, or silly but harmless, and the 'faithful' could come to accept as consistent with their own limited understanding of God's will.
2FAWS14y
Then I would not approve of that future. And I don't even care that much about Gay rights compared to other issues or how much some other people do. (leaving aside your mischaratcerizations of the incompatibilities caused by racists and fanatics)
0Strange714y
I freely concede that I've mischaracterized the issues in question. There are a number of reasons why I'm not a professional diplomat. A real negotiator, let alone a real superintelligence, would have better solutions. Would you disapprove as strongly of a future with complex and distasteful political compromises as you would one in which humanity as we know it is utterly destroyed? Remember, it's a numerical scale, and the criterion isn't unconditional approval but rather which direction you tend to move towards as more information is revealed.
2FAWS14y
Of course not. But that's not what your definition asks. In fact you specified "approach[ing] complete approval (as a limit)", which is a much stronger claim than a mere tendency: it implies reaching arbitrarily small differences from total approval, which effectively means unconditional approval once you have heard as much as you can remember.
-1Strange714y
You're right, I was moving the goalposts there. I stand by my original statement, on the grounds that an AGI with a brain the size of Jupiter would be considerably smarter than all modern human politicians and policymakers put together. If an intransigent bigot fills up his and/or her memory capacity with easy-to-approve facts before anything controversial gets randomly doled out (which seems quite possible, since the set of facts that any given person will take offense at seems to be a miniscule subset of the set of facts which can be known), wouldn't that count?
4FAWS14y
I don't think that e.g. a Klan member would ever come close to complete approval of a world without knowing whether miscegenation was eliminated; people more easily remember what they feel strongly about, so the "memory capacity" wouldn't be filled with irrelevant details anyway, and if the hypothetical unbiased observer doesn't select for relevant and interesting facts, no one would listen long enough to get anywhere close to approval. Also, for any AI to actually use the definition as written + later amendments you made, it can't just assume a particular order of paragraphs for a particular interviewee (or if it can we are back at persuasion skills; a sufficiently intelligent AI should be able to persuade anyone it models of anything by selecting the right paragraphs in the right order out of an infinitely long list): either all possible sequences would have complete approval as a limit for all possible interviewees, or the same list has to be used for all interviewees.
-3Strange714y
I agree that it would be extremely difficult to find a world that, when completely and accurately described, would meet with effectively unconditional approval from both Rev. Dr. Martin Luther King, Jr. and a typical high-ranking member of the Ku Klux Klan. It's almost certainly beyond the ability of any single human to do so directly... Why, we'd need some sort of self-improving superintelligence just to map out the solution space in sufficient detail! Furthermore, it would need to have an extraordinarily deep understanding of, and willingness to pursue, those values which all humans share. If it turns out to be impossible, well, that sucks. Time to look for the next-best option. If the superintelligence makes some mistake or misinterpretation so subtle that a hundred billion humans studying the timeline for their entire lives (and then some) couldn't spot it, how is that really a problem? I'm still not seeing how any machine could pass this test - 100% approval from the entire human race to date - without being Friendly.
4FAWS14y
Straight up impossible if their (apparent) values are still the same as before and they haven't been misled. If one agent prefers the absence of A to its presence, and another agent prefers the presence of A to its absence, you cannot possibly satisfy both agents completely (without deliberately misleading at least one about A). The solution can always be trivially improved for at least one agent by adding or removing A. Actually, now that you invoke the unknowability of the far-reaching capabilities of a superintelligence, I thought of a very slight possibility of a world meeting your definition even though people have mutually contradictory values: The world could be deliberately set up in a way that even a neutral third-party description contained a fully general mind hack for human minds, so that the AI could adjust the values of the hypothetical people tested through the test. That's almost certainly still impossible, but far more plausible than a world meeting the definition without any changing values, which would require all apparent value disagreements to be illusions and the world not to work in the way it appears to. I think we can generalize that: Dissolving an apparent impossibility through the creative power of a super-intelligence should be far easier to do in an unfriendly way than doing the same in a friendly way, so a friendliness definition had better not contain any apparent impossibilities.
0Strange714y
I did not say or deliberately imply that nobody's values would be changed by hearing an infallibly factual description of future events presented by a transcendent entity. In fact, that kind of experience is so powerful that unverified third-hand reports of it happening thousands of years ago retain enough impact to act as a recruiting tactic for several major religions. Maybe not all, but certainly a lot of apparent value differences really are illusory. In third-world countries, genocide tends to flare up only after a drought leads to crop failures, suggesting that the real motivation is economic and racism is only used as an excuse, or a guide for whom to kill without disrupting the social order more than absolutely necessary. I think this is a lot less impossible than you're trying to make it sound. The stuff that people tend to get really passionate about, unwilling to compromise on, isn't, in my experience, the global stuff. When someone says "I want less A" or "more A" they seem to mean "within range of my senses," "in the environment where I'm likely to encounter it in the future" or "in my tribe's territory or the territory of those we communicate with." An arachnophobe wouldn't panic upon hearing about a camel-spider three thousand miles away; if anything, the idea that none were on the same continent would be reassuring. An AI capable of terraforming galaxies might satisfy conflicting preferences by simply constructing an ideal environment for each, and somehow ensuring that everyone finds what they're looking for. The accurate description of such seemingly-impossible perfection would, in a sense, constitute a 'fully general mind hack,' in that it would convince anyone who can be convinced by the truth and satisfy anyone who can be satisfied within the laws of physics. If you know of a better standard, I'd like to hear it.
0FAWS14y
I'm not sure there is any point in continuing this. Once you allow the AI to optimize the human values it's supposed to be tested against for test compatibility, it's over.
0Strange714y
If, as you assert, pleasing everyone is impossible, and persuading anyone to accept something they wouldn't otherwise be pleased by (even through a method as benign as giving them unlimited, factual knowledge of the consequences and allowing them to decide for themselves) is unFriendly, do you categorically reject the possibility of friendly AI? If you think friendly AI is possible, but I'm going about it all wrong, what evidence would convince you that a given proposal was not equivalently flawed? I'm having some doubts, too. If you decide not to reply, I won't press the issue.
5FAWS14y
No. Only if you allow acceptance to define friendliness. Leaving changing the definition of friendliness open as an avenue to fulfill the goal defined as friendliness will almost certainly result in unfriendliness. Persuasion is not inherently unfriendly, provided it's not used to short-circuit friendliness. As an absolute minimum it would need to be possible and not obviously exploitable. It should also not look like a hack. Ideally it should be understandable, give me an idea what an implementation might look like, be simple and elegant in design, and seem rigorous enough to make me confident that the lack of visible holes is not merely a fact about the creativity of the looker.
3Strange714y
Well, I'll certainly concede that my suggestion fails the feasibility criterion, since a literal implementation might involve compiling a multiple-choice opinion poll with a countably infinite number of questions, translating it into every language and numbering system in history, and then presenting it to a number of subjects equal to the number of people who've ever lived multiplied by the average pre-singularity human lifespan in Planck-seconds multiplied by the number of possible orders in which those questions could be presented multiplied by the number of AI proposals under consideration. I don't mind. I was thinking about some more traditional flawed proposals, like the smile-maximizer, how they cast the net broadly enough to catch deeply Unfriendly outcomes, and decided to deliberately err in the other direction: design a test that would be too strict, that even a genuinely Friendly AI might not be able to pass, but that would definitely exclude any Unfriendly outcome. Please taboo the word 'hack.'
3PhilGoetz14y
That most people, historically, have been morons. Basically the same question: Why are you limited to humans? Even supposing you could make a clean evolutionary cutoff (no one before Adam gets to vote), is possessing a particular set of DNA really an objective criterion for having a single vote on the fate of the universe?
0orthonormal14y
There is no truly objective criterion for such decisionmaking, or at least none that you would consider fair or interesting in the least. The criterion is going to have to depend on human values, for the obvious reason that humans are the agents who get to decide what happens now (and yes, they could well decide that other agents get a vote too).
0Strange714y
It's not a matter of votes so much as veto power. CEV is the one where everybody, or at least their idealized version of themselves, gets a vote. In my plan, not everybody gets everything they want. The AI just says "I've thought it through, and this is how things are going to go," then provides complete and truthful answers to any legitimate question you care to ask. Anything you don't like about the plan, when investigated further, turns out to be either a misunderstanding on your part or a necessary consequence of some other feature that, once you think about it, is really more important. Yes, most people historically have been morons. Are you saying that morons should have no rights, no opportunity for personal satisfaction or relevance to the larger world? Would you be happy with any AI that had an equivalent degree of contempt for lesser beings? There's no particular need to limit it to humans, it's just that humans have the most complicated requirements. If you want to add a few more orders of magnitude to the processing time and set aside a few planets just to make sure that everything macrobiotic has its own little happy hunting ground, go ahead.
0PhilGoetz14y
Your scheme requires that the morons can be convinced of the correctness of the AI's view by argumentation. If your scheme requires all humans to be perfect reasoners, you should mention that up front.
2Vladimir_Nesov14y
See the posts linked from * http://wiki.lesswrong.com/wiki/Complexity_of_value * http://wiki.lesswrong.com/wiki/Fake_simplicity * http://wiki.lesswrong.com/wiki/Magical_categories You might also try my restatement.

LHC shuts down again; anthropic theorists begin calculating exactly how many decibels of evidence they need...

0RobinZ14y
Duplicate.
-1gwern14y
Eh. Maybe I'll be faster next time.

Since people expressed such interest in piracetam & modafinil, here's another personal experiment with fish oil. The statistics is a bit interesting as well, maybe.

I'll be in London on April 4th and very interested in meeting any Less Wrongers who might be in the area that day. If there's a traditional LW London meetup venue, remind me what it is; if not, someone who knows the city suggest one and I'll be there. On an unrelated note, sorry I've been and will continue to be too busy/akratic to do anything more than reply to a couple of my PMs recently.

[-][anonymous]14y00

Does P(B|A) > P(B) imply P(~B|~A) > P(~B)?

ETA: Assume all probabilities are positive.

1Peter_de_Blanc14y
Yes, assuming 0 and 1 are not probabilities.
0RobinZ14y
Yes, the math works out - vg'f whfg n erfgngrzrag bs gur pynvz gung gur nofrapr bs rivqrapr vf rivqrapr bs nofrapr.
0[anonymous]14y
Ironically enough, I'm using this to prove that absence of "that particular proof" is not evidence of absence.
0RobinZ14y
Hey, as long as you do your math correctly ... :D
0Richard_Kennaway14y
Yes, even without the extra condition. Let a = P(A), b = P(B), c = P(A & B). P(B|A) > P(B) is equivalent to c > ab. P(~B|~A) > P(~B) is equivalent to 1-a-b+c > (1-a)(1-b) = 1 - a - b + ab, which is equivalent to c > ab, which is the hypothesis. As a check that the conventional definition of P(B|A)=0 when P(A)=0 doesn't affect things, if P(A)=0, P(A)=1, P(B)=0, or P(B)=1, then P(B|A) ≤ P(B), making the antecedent false and the proposition trivially true.
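The algebra can also be spot-checked numerically. Here's a quick sketch (the sampling scheme and variable names are my own, not from the thread) that draws random joint distributions over A and B and confirms the two inequalities always agree:

```python
import random

def agrees():
    # Sample a random joint distribution over (A, B) with all four atoms
    # strictly positive, so both conditionals are well defined.
    w = [random.uniform(0.01, 1.0) for _ in range(4)]
    total = sum(w)
    p_ab, p_a_nb, p_na_b, p_na_nb = (x / total for x in w)
    p_a = p_ab + p_a_nb
    p_b = p_ab + p_na_b
    p_b_given_a = p_ab / p_a
    p_nb_given_na = p_na_nb / (1 - p_a)
    # P(B|A) > P(B) should hold exactly when P(~B|~A) > P(~B).
    return (p_b_given_a > p_b) == (p_nb_given_na > 1 - p_b)

assert all(agrees() for _ in range(100_000))
```

Of course this only fails to find counterexamples; the one-line algebra above is the actual proof.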

The Final Now, a new short story by Gregory Benford about (literally) End Times.

Quotation in rot13 for the spoiler-averse's sake. It's an interesting passage and, like FAWS, I also think it's not that revealing, so it's probably safe to read it in advance.

("Bar" vf n cbfg-uhzna fgnaq-va sbe uhznavgl juvpu nqqerffrf n qrzvhetr ragvgl, qrfvtangrq nf "Ur" naq "Fur".)

"Bar synerq jvgu ntvgngrq raretvrf. “Vs lbh unq qrfvtarq gur havirefr gb er-pbyyncfr, gurer pbhyq unir orra vasvavgr fvzhyngrq nsgreyvsr. Gur nfxrj pbzcerffvba pbh... (read more)

0FAWS14y
I personally don't really care about spoilers, and having read the story now the passage you quote doesn't seem all that terribly spoilerish to me anyway, but you should note that spoiler protection has been enforced for "spoilers" considerably less spoilerish than that around here.
0FrF14y
I completely forget about spoilers! I used this particular quotation because I innocently thought it would be a "hook" to motivate people to read the story. Should I rot13 the quotation for reasons of precaution?
2FAWS14y
It was for me, but as I said I don't care about spoilers. Possibly. I can't always predict how people who care about spoilers act, sometimes it seems to be mainly about the principle.
0gwern14y
Indeed. Just look at Eliezer threatening to ban me for mentioning a ~5 year old plot twist in an anime.

Re: Cognitive differences

When you try to mentally visualize an image, for example a face, can you keep it constant indefinitely?

( For me visualisation seems to always entail flashing an image, I'd say for less than 0.2 seconds total. If I want to keep visualizing the image I can flash it again and again in rapid succession so that it appears almost seamless, but that takes effort and after at most a few seconds it will be replaced by a different but usually related image. )

If yes, would you describe yourself as a visual thinker? Are you good at drawing? Good at remembering faces?

(No, so so, no)

3AdeleneDawner14y
Not indefinitely, but the limiting factor is my attention quality and span. If I get distracted, the image disappears; if I try to pay attention to other things while continuing to visualize something, the visualization can subtly morph in response to the other things I'm thinking about, and it's hard to tell if it's morphing or not. (This effect seems closely related to priming.) I'm a very visual thinker. I'm not good at drawing, but that appears to be a function of poor fine motor control and lack of practice; I have been known to surprise myself and others with how well I draw for someone who almost never does so. I'm not very good at remembering faces, either, but again other factors affect that; I tend to avoid looking at faces in the first place, since I find eye contact overwhelming. I seem to be very good at remembering other complex visual things, though.
0gwern14y
I can hold an image/face steady for about a full second just sitting here. I could probably do better while meditating; so I think it's more an issue of 'can you concentrate' than anything else. (I'm a pretty visual thinker, but my hearing-impairment also means I'm anomalous.)

I'm drafting a post to build on (and beyond) some of the themes raised by Seth Godin's quote on jobs and the ensuing discussion.

I'm likely to explore the topic of "compartmentalization". But damn, is that word ugly!

Is there an acceptable substitute?

0arundelo14y
It has never bothered me.
[-][anonymous]14y00

I am curious as to why brazil84's comment has received so much karma? The way the questions were asked seemed to imply a preconception that there could not possibly be viable alternatives. Maybe it's just because I'm not a native English speaker and read something into it that isn't there, but that doesn't seem to me to be a rationalist mindset. It seemed more like »sarcasm as stop word« instead of an honest inquiry let alone an argument.

2FAWS14y
It seems entirely rational to me to ask what the envisioned alternative is when someone is criticizing something.
[-][anonymous]14y00

Suppose you're a hacker, and there's some information you want to access. The information is encrypted using a public key scheme (anyone can access the key that encrypts, only one person can access the key that decrypts), but the encryption is of poor quality. Given the encryption key, you can use your laptop to find the corresponding decryption key in about a month of computation.

Through previous hacking, you've found out how the encryption machine works. It has two keys, A and B already generated, and you have access to the encryption keys. However, neit... (read more)

3sketerpot14y
Smartass answer: use two computers, one for each of the keys. Computer time is cheap these days. If you don't have two computers, rent computation time from a cloud.
0[anonymous]14y
Why would you do that? If one key is more likely than the other, you should devote all your time toward breaking that key.
2SoullessAutomaton14y
All else equal, in practical terms you should probably devote all your time to first finding the person(s) that already know the private keys, and then patiently persuading them to share. I believe the technical term for this is "rubber hose cryptanalysis".
0Jack14y
Even if there is a high probability of completing both decryptions and the probability the machine chooses A over B is only slightly over .5?
1[anonymous]14y
Yes. At the beginning, it is better to work on A than to work on B, because the machine choosing A is more likely. After the beginning, it is still better to work on A than to work on B, because finishing A will be easier than finishing B if you've already worked on it some. On the off chance that you don't complete both decryptions, it's better to have the one you're more likely to need.
4Jack14y
I think some of us know considerably less about cryptography than you do. I think sketerpot's suggestion was based on the assumption that most of the work would just be done by the computer and that the hacker could just sit back and relax while his two laptops went to work on the encryptions (you know, like in movies!). If the hacker needs to spend a month of his/her time (rather than computer time) to complete the decryption, then I see what you're talking about.
2[anonymous]14y
The assumption that most of the work would be done by the computer is correct. Perhaps sketerpot was assuming that breaking a decryption key is an operation that's impossible to parallelize (i.e. two computers both working on a single key would be no better than just one computer doing so), whereas I'm pretty sure that two computers would do the job twice as fast as one computer.
2Jack14y
Ah, yes. That makes sense. Thanks for your patience.
2MrHen14y
Can people ROT13 their answers so I get a chance to solve this on my own? Or will there be too much math for ROT13 to work well?
2[anonymous]14y
It's not a puzzle; it's supposed to make a point.
0MrHen14y
Oh.
0Nick_Tarleton14y
hidden answer
0[anonymous]14y
p(A) (U(decrypt in 1 month) - cost(1 month computer time)) + (1 - p(A)) (U(decrypt in 2 months) - cost(2 months computer time))
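That expected-utility expression can be made concrete with toy numbers. A sketch under my own assumptions (each key takes one month of dedicated computation, and the strategies compared are "break A first," "break B first," and interleaving both on one machine):

```python
def expected_months(p_a, start_with_a):
    """Expected months until the key you actually need is broken,
    assuming breaking either key takes one month of dedicated work."""
    p_started_with_needed = p_a if start_with_a else 1 - p_a
    # The key you started with finishes at month 1, the other at month 2.
    return p_started_with_needed * 1 + (1 - p_started_with_needed) * 2

p = 0.6  # assumed: the machine picks key A with probability 0.6
a_first = expected_months(p, start_with_a=True)   # 2 - p = 1.4 months
b_first = expected_months(p, start_with_a=False)  # 1 + p = 1.6 months
interleave = 2.0  # splitting effort finishes both keys only at month 2
assert a_first < b_first < interleave
```

Whenever p > 0.5, starting with A dominates, and interleaving on one machine is always worst; sketerpot's two-computer answer changes the picture by letting both keys finish at month 1.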
0Za3k14y
Do we choose a probability p the machine picks A, or does the machine start with a probability p, which we adjust to p+q chance it picks A?
0[anonymous]14y
You choose a probability p that the machine picks A. I guess.

Thoughts about intelligence.

My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.

I've been thinking about the problem of general intelligence. Before going too deeply I wanted to see if I had a handle on what intelligence is period.

It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?

Moving alone doesn't cou... (read more)

3Richard_Kennaway14y
You are talking about control systems. A control system has two inputs (called its "perception" and "reference") and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism. What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception. The answers to your questions are: 1. A "goal" is the reference input of a control system. 2. An "obstacle" is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference. 3. "Complicated" means "I don't (yet) understand this." Suggestions for readings. And a thought: "Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely." -- William James, "The Principles of Psychology"
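The loop described here is easy to sketch in code. Below is a bare proportional controller (a toy of my own, not anything from Powers): the output acts on the environment, the environment feeds back into the perception, and the perception ends up tracking the reference despite an arbitrary constant disturbance:

```python
def final_perception(reference, disturbance, gain=2.0, steps=200):
    """Run a minimal control loop and return the settled perception."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance   # environment sums both influences
        error = reference - perception
        output += 0.1 * gain * error        # nudge output to shrink the error
    return output + disturbance

# The perception settles near the reference whatever the disturbance is.
assert abs(final_perception(5.0, 3.0) - 5.0) < 0.01
assert abs(final_perception(5.0, -8.0) - 5.0) < 0.01
```

The controller never models the disturbance; it only pushes its perception toward its reference, which is exactly what makes the behavior look goal-directed from outside.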
0Karl_Smith14y
Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS). That is, intelligence in and of itself is not what we are after but control. Perhaps, FAI is the only route to IUCS but perhaps not? Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.
2markrkrebs14y
The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback, and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent, within their own small fields of expertise. I don't personally think we'll find anything different or ineffable or more, when we finally understand intelligence, than just layers of control systems. Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now. I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.
0Karl_Smith14y
I had conceived of something like the Turing test but for intelligence period, not just general intelligence. I wonder if general intelligence is about the domains under which a control system can perform. I also wonder whether "minds" is too limiting a criterion for the goals of FAI. Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something we can build. Then we press start. Maybe this is a more general formulation?
-1Richard_Kennaway14y
I don't want to tout control systems as The Insight that will create AGI in twenty years, but if I was working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I'd start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words "now we just have to scale it up", if I was working on AGI I wouldn't bother mentioning it until I had a demo of a level that would scare Eliezer. Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds. 1. LessWrong, passim. 2. Marcus Hutter's Compression Prize. 3. AIXItl and the Gödel machine.
2MrHen14y
If I were standing there catching the pencil and directing it to the spot on the floor, you wouldn't consider the pencil intelligent. The behavior observed is not pointing to the pencil in particular being intelligent. Just my two cents. I don't know anything about the concept of intelligence being defined as being able to pursue goals through complicated obstacles. If I had to guess at the missing piece it would probably be some form of self-referential goal making. Namely, this takes the form of the word, "want." I want to go to this spot on the floor. I can ignore a goal but it is significantly harder to ignore a want. At some point, my wants begin to dictate and create other wants. If I had to start pursuing a definition of intelligence, I would probably start here. But I don't know anything about the field so this could have already been tried and failed.
2Karl_Smith14y
Well I would consider the Pencil-MrHen system as intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen. The problem with the self-referential approach, from my perspective, is that it presumes a self. It seems to me that ideas like "I" and "want" graft humanness onto other objects. So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then by a set of criteria declare that thing to be intelligent.
0MrHen14y
Sure, that makes perfect sense. I haven't really given this a whole lot of thought; you are getting the fresh start. :) The self in self-referential isn't implied to be me or you or any form of "I". Whatever source of identity you feel comfortable with can use the term self-referential. In the case of your intelligent pencil, it very well may be the case that the pencil is self-updating in order to achieve what you are calling a goal. A "want" can describe nonhuman behavior, so I am not convinced the term is a problem. It does seem that I am beginning to place atypical restrictions on its definition, however, so perhaps "goal" would work better in the end. The main points I am working with: * An entity can have a goal without being intelligent (perhaps I am confusing goal with purpose or behavior?) * A non-intelligent entity can become intelligent * Some entities have the ability to change, add, or remove goals * These changes, additions, deletions are likely governed by other goals. (Perhaps I am confusing goals with wants or desires? Or merely causation itself?) * The "original" goal could be deleted without making an entity unintelligent. The pencil could pick a different spot on the ground but this would not cause you to doubt its intelligence. Please note that I am not trying to disagree (or agree) with you. I am just talking because I think the subject is interesting and I haven't really given it much thought. I am certainly no authority on the subject. If I am obviously wrong somewhere, please let me know.
1whpearson14y
Some food for philosophical thought, an oil drop that "solves" a maze. TL;DR it follows a chemical gradient due to it changing surface tension. I'd read something on the intentional stance.
1Kaj_Sotala14y
If you don't mind a slightly mathy article, I thought Legg & Hutter's Universal Intelligence was nice. It talks about machine intelligence, but I believe it applies to all forms of intelligence. It also addresses some of the points you made here.
0[anonymous]14y
So if something is capable, contrary to expectations, of achieving a constant state despite varying conditions, it's probably intelligent? I guess that in space, everything is intelligent.

Does anyone here know about interfacing to the world (and mathematics) in the context of a severely limiting physical disability? My questions are along the lines of: what applications are good (not buggy) to use and what are the main challenges and considerations a person of normal abilities would misjudge or not be aware of? Thanks in advance!

[-][anonymous]14y-10

People constantly ignore my good advice by contributing to the American Heart Association, the American Cancer Society, CARE, and public radio all in the same year--as if they were thinking, "OK, I think I've pretty much wrapped up the problem of heart disease; now let's see what I can do about cancer."

--- Steven Landsburg (original link by dclayh)