Open Thread: July 2010, Part 2
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (770)
What's the deal with programming as a career? It seems like the lower levels, at least, should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.
E.g., FizzBuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), so here's what I'd tell the computer to do:
# Proceed from 0 to x, in increments of 1, (where x =whatever) If divisible by 3, remainder 0, associate fizz with number If divisible by 5, remainder 0, associate buzz with number, Make ordered list from o to x, of numbers associated with fizz OR buzz For numbers associated with fizz NOT buzz, append fizz For numbers associated with buzz NOT fizz, append fizz For numbers associated with fizz AND buzz, append fizzbuzz #
I ask out of interest in acquiring money, on elance, rentacoder, odesk etc. I'm starting from a position of total ignorance, but y'know, it doesn't seem like learning C, and understanding Concrete Mathematics and TAOCP in a useful or even deep way, would be the work of more than a year, while it would place one well above average in some domains of this activity.
Or have I missed something really obvious and important?
I think you overestimate human curiosity, for one. Not everyone implements prime searching or Conway's Game of Life for fun. For two, even those who implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by incompetent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.
http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG
Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used.
Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?
I don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial.
So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.
There are really people who don't get pointers.
One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.)
Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
Cute. Sad, but that's already more powerful than straight OO. Python and Ruby support adding/rebinding methods at runtime (one reason duck typing is more popular these days). You might want to look at functional programming if you haven't yet, since you've no doubt progressed since your epiphany. I've heard nice things about statically typed languages such as Haskell and O'Caml, and my personal favorite is Scheme.
Oddly enough, I think Morendil would get a real kick out of JavaScript. So much in JS involves passing functions around, usually carrying around some variables from their enclosing scope. That's how the OO works; it's how you make callbacks seem natural; it even lets you define new control-flow structures like jQuery's each() function, which lets you pass in a function which iterates over every element in a collection.
The clearest, most concise book on this is Doug Crockford's JavaScript: The Good Parts. Highly recommended.
The technical term for this is a closure. A closure is a first-class* function with some associated state. For example, in Scheme, here is a function which returns counters, each with its own internal ticker:
To create a counter, you'd do something like
Then, to get values from the counter, you could call something like
Here is the same example in Python, since that's what most people seem to be posting in:
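The Python snippet didn't survive the formatting; a minimal reconstruction of the counter closure described above (function and variable names are my own):

```python
def make_counter(start=0):
    """Return a counter closure with its own internal ticker."""
    count = [start]  # mutable cell so the inner function can update it

    def counter():
        count[0] += 1
        return count[0]

    return counter

c = make_counter()
print(c())  # 1
print(c())  # 2
d = make_counter()  # an independent counter, with its own state
print(d())  # 1
```

Each call to `make_counter` creates a fresh `count` cell, which the returned function carries around with it: that pairing of function and state is the closure.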
*That is, a function which you can pass around like a value.
While we're sharing fun information, I'd like to point out a little-used feature of Markdown syntax: if you put four spaces before a line, it's treated as code. Behold:
Also, the emacs rectangle editing functions are good for this. C-x r t is a godsend.
There is a difference in aptitude, but part of the problem is that pointers are almost never explained correctly. Many texts try to explain in abstract terms, which doesn't work; a few try to explain graphically, which doesn't work terribly well. I've met professional C programmers who therefore never understood pointers, but who did understand them after I gave them the right explanation.
The right explanation is in terms of numbers: the key is that `char *x` actually means the same thing as `int x` (on a 32-bit machine, and modulo some superficial convenience). A pointer is just an integer that gets used to store a memory address. Then you write out a series of numbered boxes starting at e.g. 1000, to represent memory locations. People get pointers when you put it like that.

There really are people who would not take those few hours.
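The numbered-boxes picture can be simulated in a few lines of Python (the addresses and values here are made up purely for illustration):

```python
# Memory as numbered boxes starting at address 1000.
memory = {addr: 0 for addr in range(1000, 1008)}

# An int variable: box 1002 holds the value 42.
memory[1002] = 42

# A pointer variable: box 1005 holds another box's *address*.
memory[1005] = 1002  # like `p = &x` in C

# Dereferencing (`*p`) is just a second lookup.
value = memory[memory[1005]]
print(value)  # 42
```

Once a pointer is seen as nothing but an integer whose value happens to name a box, indirection stops being mysterious.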
I suspect it's like how my brain reacts to negative numbers, or decimals; I have no idea how anyone could fail to understand them. But some people do.
And, due to my tendency to analyse mistakes I make (especially factual errors) I remember the times when I got each one of those wrong. I even remember the logic I used.
But they've become so ingrained in my brain now that failure to understand them is nigh inconceivable.
Your general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated.
Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:
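(The snippet itself didn't survive the formatting; a standard version would be:)

```python
for i in range(1, 101):
    if i % 15 == 0:       # divisible by both 3 and 5
        print("fizzbuzz")
    elif i % 3 == 0:
        print("fizz")
    elif i % 5 == 0:
        print("buzz")
    else:
        print(i)
```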
This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter and use it to solve some Project Euler problems as finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
How did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?
My first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and never stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview.
Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-)
ETA: I just noticed that I didn't answer your last question. Not sure what to recommend to competent programmers because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.
I was very surprised by this until I read the word "Moscow."
Is it different in the US? I imagined it was even easier to find a job in the Valley than in Moscow.
I was unsurprised by this until I read the word "Moscow". (Russian programmers & mathematicians seem to always be heading west for jobs.)
The least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publicly, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration.
(I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to fizzbuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)
I think there are failure modes from the curiosity-stopping anti-epistemology cluster, that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn that. With right approach anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so often not worth it).
Do recruiting firms require that you have formal programming credentials?
Formal credentials certainly help, but I wouldn't say they're required, as long as you have something (such as a completed project) to prove you have skills.
I took an internship after college. Professors can always use (exploit) programming labor. That gives you semi-real experience (might be very real if the professor is good) and allows you to build credibility and confidence.
Python tip: Using "range" creates a big list in memory, which is a waste of space. If you use xrange, you get an iterable object that only uses a single counter variable.
Hah. I first wrote the example using xrange, then changed it to range to make it less confusing to someone who doesn't know Python :-)
Not in Python 3! range in Python 3 works like xrange in the previous versions (and xrange doesn't exist any more).
(but the print functions would use a different syntax)
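A quick way to see the difference in Python 3:

```python
import sys

r = range(10**9)          # no billion-element list is built
print(sys.getsizeof(r))   # small constant size, independent of the length
print(list(range(3)))     # materialize a list only when you actually need one
```

The range object just stores start, stop, and step, and computes elements on demand.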
Cool
There was a discussion of transitioning to Python 3 on HN a week or two ago; apparently there are going to be a lot of programmers, and even more shops, holding off on transitioning, because it will break too many existing programs. (I haven't tried Python since version 1, so I don't know anything about it myself.)
A big problem with transitioning to Python 3 is that there are quite a few third-party libraries that don't support it (including two I use regularly - SciPy and Pygame). Some bits of the syntax are different, but that shouldn't be a huge issue except for big codebases, since there's a script to convert Python 2.6 to 3.0.
I've used Python 3 but had to switch back to 2.6 so I could keep using those libraries :P
In fact, range in Python 2.5ish and above works the same, which is why they removed xrange in 3.0.
I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).
This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.
Joel Spolsky gave a similar explanation.
Makes sense.
I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).
Do you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer.
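For concreteness, a bare-bones interpreter of the sort that question has in mind might look like this (the word set here is my own arbitrary choice):

```python
def forth_eval(source):
    """Evaluate a tiny Forth-like language: integer literals plus a few stack words."""
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "-":    lambda: stack.append(-stack.pop() + stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
    }
    for token in source.split():
        if token in words:
            words[token]()          # execute a known word
        else:
            stack.append(int(token))  # anything else is pushed as a number
    return stack

print(forth_eval("2 3 + 4 *"))  # [20]
```

Real Forth adds defined words, a return stack, and I/O, but the core loop really is this small.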
Is this reassuring?
Yes
Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky.
Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D
I'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you.
The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace.
Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.
Programming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.
Fixed that for you. :) (I'm a current law student.)
Do we have any refs for this? I know there's "The Camel Has Two Humps" (Alan Kay on it, the PDF), but anything else?
No, just personal experience and observation backed up by stories and blog posts from other people. See also Joel Spolsky on Hitting the High Notes. Spolsky's line is that some people are just never going to be that good at programming. I'd rephrase it as: some people are just never going to be motivated to spend long hours programming for the sheer fun and challenge of it, and so they're never going to be that good at programming.
This is a good null hypothesis for skill variation in many cases, but not one supported by the research in the paper gwern linked.
Going by his other papers, though, it looks like the effect isn't nearly so strong as was originally claimed. (Though that's wrt whether his "consistency test" works; I didn't check whether the bimodalness still holds.)
In addition to this, if you're a good bricklayer, you might do, at most, twice the work of a bad bricklayer. It's quite common for an excellent programmer (a hacker) to do more work than ten average programmers--and that's conservative. The difference is more apparent. My guess might be that you hear this complaint from good programmers, Barry?
Although, I can guarantee that everyone I've met can do at least FizzBuzz. We have average programmers, not downright bad ones.
From what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.
I don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (appending fizz to numbers associated only with fizz or buzz.) This is not a particularly compelling demonstration of your point that it should be easy.
(I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)
The bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.)
Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.
This seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)
--"Epigrams on Programming", by Alan J. Perlis; ACM's SIGPLAN publication, September, 1982
Yeah, pretty much anyone who isn't appallingly stupid can become a reasonably good programmer in about a year. Be warned though, the kinds of people who make good programmers are also the kind of people who spontaneously find themselves recompiling their Linux kernel in order to get their patched wifi drivers to work...
xkcd reference!
Dammit! That'll be shouted at my funeral!
Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.
I talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation.
"Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."
... "and like any medical treatment, initially only the rich will benefit, but they'll help bring down the price for everyone else. In fact, for just a small weekly payment..."
If they think that we'll all eventually die even with cryonics and they think that death gives meaning to life then they don't need to worry about cryonics removing meaning since it is just pushing the amount of time until death up. (I wouldn't bother addressing the death giving meaning to life claim except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.)
As to the problem of too many people, overpopulation is a massive problem whether or not a few people get cryonicly preserved.
As to the problem of just the rich getting the benefits, patiently explain that there's no reason to think that the rich now will be treated substantially different from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension then the likely standard of living will be so high compared to now that even if the rich have more it won't matter.
This is off-topic but I'm curious: How did you stumble on my blog?
Google alert on "Eliezer Yudkowsky". (Usually brings up articles about Friendly AI, SIAI and Less Wrong.)
Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning/specialized AI are becoming increasingly commonplace, to the point where they are being used for very mundane tasks.
Somebody said that the reason there is no progress in AI is that once a problem domain is understood well enough that there are working applications in it, nobody calls it AI any longer.
I think philosophy is a similar case. Physics used to be squarely in philosophy, until it was no longer a confused mess, but actually useful. Linguistics too used to be considered a branch of philosophy.
As did economics.
It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:
We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)
This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.
Thoughts? (If someone's said this before, I apologize for not remembering it.)
The only problem with that seems to be that when people talk about emergent behavior they seem to be more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior"- that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one but one would need to be careful that the common connotations don't cause confusion.
I agree. But given that outsiders use the term all the time, and given that they can point to a reasonably large cluster of things (which are adequately contained in the definition I offered), it might be more helpful to say that emergence is a statement of a known unknown (in particular, a missing reduction between levels) than to refuse to use the term entirely, which can appear to be ignoring phenomena.
ISTM that the actual present usage of "emergent" is actually pretty well-defined as a cluster, and it doesn't include the ideal gas laws. I'm offering a candidate way to cash-out that usage without committing the Mind Projection Fallacy.
The fallacy here is thinking there's a difference between the way the ideal gas laws emerge from particle physics, and the way intelligence emerges from neurons and neurotransmitters. I've only heard "emergent" used in the following way:
A system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, and the high-level description is not easily predictable from the low-level description
For instance, gliders moving across the screen diagonally is emergent in Conway's Life.
The "easily predictable" part is what makes emergence in the map, not the territory.
Er, did you read the grandparent comment?
Yes. My point was that emergence isn't about what we know how to derive from lower-level descriptions, it's about what we can easily see and predict from lower-level descriptions. Like Roko, I want my definition of emergence to include the ideal gas laws (and I haven't heard the word used to exclude them).
Also see this comment.
For what it's worth, Cosma Shalizi's notebook page on emergence has a very reasonable discussion of emergence, and he actually mentions macro-level properties of gas as a form of "weak" emergence:
To define emergence as it is normally used, he adds the criterion that "the new property could not be predicted from a knowledge of the lower-level properties," which looks to be exactly the definition you've chosen here (sans map/territory terminology).
Let's talk examples. One of my favorite examples to think about is Langton's Ant.
If we taboo "emergence" what do we think is going on with Langton's Ant?
We have one description of the ant/grid system in Langton's Ant: namely, the rules which totally govern the behavior of the system. We have another description of the system, however: the recurring "highway" pattern that apparently results from every initial configuration tested. These two descriptions seem to be connected, but we're not entirely sure how (The only explanation we have is akin to this: Q: Why does every initial configuration eventually result in the highway pattern? A: The rules did it.) That is, we have a gap in our map.
Since the rules, which we understand fairly well, seem on some intuitive sense to be at a "lower level" of description than the pattern we observe, and since the pattern seems to depend on the "low-level" rules in some way we can't describe, some people call this gap "emergence."
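The whole system fits in a dozen lines of Python, which makes the explanatory gap easy to feel: nothing in the code mentions highways.

```python
def langton(steps):
    """Run Langton's Ant on an unbounded grid from an all-white start."""
    black = set()                 # store only the black cells
    x, y, dx, dy = 0, 0, 0, 1     # ant at the origin, facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx      # black cell: turn left
            black.remove((x, y))  # flip it to white
        else:
            dx, dy = dy, -dx      # white cell: turn right
            black.add((x, y))     # flip it to black
        x, y = x + dx, y + dy     # move forward one cell
    return black, (x, y)

cells, ant = langton(11000)       # by roughly this many steps the "highway" has begun
```

The rules are the five lines inside the loop; the highway is a regularity of the trajectory that no line of the program states.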
I recall hearing, although I can't find a link, that the Langton Ant problem has been solved recently. That is, someone has given a formal proof that every ant results in the highway pattern.
The high-level structure shouldn't be the same as the low level structure, because I don't want to say a pile of sand emerges from grains of sand.
I dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign to a connotation of unsolved problem.
By your definition, is the empirical fact that one tenth of the digits of pi are 1s emergent behavior of pi?
I may not understand the work that "low-level" and "high-level" are doing in this discussion.
On the length of derivations, here are some relevant Gödel clichés: a system X (for instance, arithmetic) often obeys laws that are underivable. And it often obeys derivable laws of length n whose shortest derivation has length busy-beaver-of-n.

("Über die Länge von Beweisen" ["On the Length of Proofs"] is the title of a famous short Gödel paper. He revisits the topic in a famous letter to von Neumann, available here: http://rjlipton.wordpress.com/the-gdel-letter/)
Just a pedantic note: pi has not been proven normal. Maybe one fifth of the digits are 1s.
I'll stick to it. It's easier to perform experiments than it is to give mathematical proofs. If experiments can give strong evidence for anything (I hope they can!), then this data can give strong evidence that pi is normal: http://www.piworld.de/pi-statistics/pist_sdico.htm
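Such an experiment is easy to rerun. A sketch using Gibbons's unbounded spigot algorithm to count digit frequencies (a toy-sized run, not the trillion-digit datasets behind that link):

```python
from collections import Counter

def pi_digits():
    """Generate the decimal digits of pi (Gibbons's unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
digits = [next(gen) for _ in range(1000)]
freq = Counter(digits)
print(freq[1] / 1000)  # close to 0.1, as normality would predict
```

Of course, as the replies note, no finite prefix can prove normality; it can only supply evidence.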
Maybe past ten-to-the-one-trillion digits, the statistics of pi are radically different. Maybe past ten-to-the-one-trillion meters, the laws of physics are radically different.
The latter case seems more likely to me.
I was just thinking about the latter case, actually. If g equalled G * m1^(1 + 10^-30) * m2^(1 + 10^-30) / r^2, would we know about it?
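A rough back-of-the-envelope check (masses in kg, purely illustrative) suggests we would not: the proposed exponent tweak changes the force by far less than any measurement could detect.

```python
import math

# Fractional change in the force from replacing m with m**(1 + eps),
# for an Earth-sized mass. Since m**eps = exp(eps * ln m), the relative
# deviation is expm1(eps * ln m).
eps = 1e-30
m_earth = 5.97e24
deviation = math.expm1(eps * math.log(m_earth))
print(deviation)  # around 6e-29, vastly below the ~1e-4 relative precision of G itself
```

Note that a plain `m ** (1 + 1e-30)` in floating point would silently round the exponent to 1, which is itself a nice illustration of how far below measurable this deviation sits.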
Well, the force of gravity isn't exactly what you get from Newton's laws anyway (although most of the easily detectable differences, like that in the orbit of Mercury, are better thought of as due to relativity's effect on time than a change in g). I'm not actually sure how gravitational force could be non-additive with respect to mass. One would have the problem of then deciding what constitutes a single object. A macroscopic object isn't a single object in any sense useful to physics. Would one, for example, calculate the gravity of Earth as a large collection of particles or as all of them together?
But the basic point, that there could be weird small errors in our understanding of the laws of physics is always an issue. To use a slightly more plausible example, if say the force of gravity on baryons is slightly stronger than that on leptons (slightly different values of G) we'd be unlikely to notice. I don't think we'd notice even if it were in the 2nd or 3rd decimal of G (partially because G is such a very hard constant to measure.)
IMO, that would be emergent behaviour of mathematics, rather than of pi.
Pi isn't a system in itself as far as I can see.
I have in mind a system, for instance a computer program, that computes pi digit-by-digit. There are features of such a computer program that you can notice from its output, but not (so far as anyone knows) from its code, like the frequency of 1s.
I can't disagree about what you want, but I myself don't really see the point in using the word emergent for a straightforward property of irrational numbers. I wouldn't go so far as to say the term is useless, but whatever use it could have would need to involve more complex properties that are caused by simpler rules.
This isn't a general property of irrational numbers, although a randomly chosen irrational number will have this property with probability 1. In fact, any random real number will have this property with probability 1 (rational numbers have measure 0 since they form a countable set). This is pretty easy to prove if one is familiar with Lebesgue measure.
There are irrational numbers which do not share this property. For example, .101001000100001000001... is irrational and does not share this property.
True enough. It would seem that "irrational number" is not the correct term for the set I refer to.
The property you are looking for is normalness to base 10. See normal number.
ETA: Actually, you want simple normalness to base 10, which is slightly weaker.
Any irrational number drawn from what distribution? There are plenty of distributions that you could draw irrational numbers from which do not have this property, and which contain the same number of numbers in them. For example, the set of all irrational numbers in which every other digit is zero has the same cardinality as the set of all irrational numbers.
Yes, although generally when asking these sorts of questions one looks at the standard Lebesgue measure on [0,1] or [0,1) since that's easier to normalize. I've been told that this result also holds for any bell-curve distribution centered at 0, but I haven't seen a proof of that and it isn't at all obvious to me how to construct one.
Well, the quick way is to note that the bell-curve measure is absolutely continuous with respect to Lebesgue measure, as is any other measure given by an integrable distribution function on the real line. (If you want, you can do this by hand as well, comparing the probability of a small bounded open set in the bell curve distribution with its Lebesgue measure, taking limits, and then removing the condition of boundedness.)
It's worth checking on the Stanford Encyclopedia of Philosophy when this kind of issue comes up. It looks like this view - emergent=hard to predict from low-level model - is pretty mainstream.
The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower."
There's more there, some of which seems sensible and some of which I don't understand.
Many thanks!
Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background. http://web.archive.org/web/20000709213303/www.lis.pitt.edu/~dsl/hailfinder/probms2.html
I have a question about prediction markets. I expect that it has a standard answer.
It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.
Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?
One way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager).
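The cancellation-of-errors point can be illustrated with a toy simulation (everything here is made up for illustration; real markets clear through trading, not averaging): each trader's belief is the true probability plus independent zero-mean noise, and the "price" is just the mean belief.

```python
import random

def toy_market_price(truth, n_traders, noise_sd, seed=0):
    """Toy model: each trader believes truth + independent Gaussian
    noise; the 'market price' is the mean belief.  With independent
    errors, the price converges toward the truth as traders are added."""
    rng = random.Random(seed)
    beliefs = []
    for _ in range(n_traders):
        b = truth + rng.gauss(0, noise_sd)
        beliefs.append(min(1.0, max(0.0, b)))  # clip beliefs to [0, 1]
    return sum(beliefs) / len(beliefs)

# Error with a handful of noisy traders vs. many noisy traders:
print(abs(toy_market_price(0.3, 10, 0.2) - 0.3))
print(abs(toy_market_price(0.3, 10000, 0.2) - 0.3))
```

The casino case corresponds to removing both mechanisms: no averaging across bettors (everyone bets against the house at fixed prices) and no informed money on the other side.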
Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.
I have also heard of at least one professional gambler who makes his living by identifying and confronting other people's superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong.
Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.
The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.
Right. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players.
But that's not what I was trying to get at with "predictably inefficient". What I meant was this:
Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot.
Won't you be consistently wrong if you try that strategy? If so, how is this consistent wrongness accounted for in futarchy theory?
I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem?
I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: The market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.
In the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).
Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38.
This table shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
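The arithmetic in the parent comment can be checked directly (American wheel assumed; the 35-to-1 straight-up payout is standard):

```python
# American roulette: 38 pockets (1-36, 0, 00); a straight-up bet on one
# number pays 35-to-1.  Compare the payout-implied probability with the
# true probability, and compute the expected value of a $1 bet.
payout = 35                   # profit per $1 staked on a win
implied_p = 1 / (payout + 1)  # probability that would make the bet fair
true_p = 1 / 38               # actual chance on an American wheel

ev = true_p * payout - (1 - true_p) * 1   # expected profit per $1 bet
print(round(implied_p, 4))    # 0.0278  (1/36)
print(round(true_p, 4))       # 0.0263  (1/38)
print(round(ev, 4))           # -0.0526 (the ~5.26% house edge)
```

So reading probabilities straight off the payouts overestimates by a constant factor of 38/36, which is exactly why relative odds come out right even though absolute ones don't.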
It's interesting that the market drives the odds so close to reality, but doesn't quite close the gap. Do you know if there are regulations that keep some rogue casino from selling roulette bets as though the odds were 1/37, instead of 1/36?
I'm thinking now that the entire answer to my question is contained in Dagon's reply. Perhaps the gambling market is distorted by regulation, and its failure as a prediction market is entirely due to these regulations. Without such regulations, maybe the gambling business would function much more like an accurate prediction market, which I suppose would make it seem like a much less enticing business to go into.
This would imply that, if you don't like casinos, you should want regulation on gambling to focus entirely on making sure that casinos don't use violence to keep other casinos from operating. Then maybe we'd see the casinos compete by bringing their odds closer to reality, which would, of course, make the casinos less profitable, so that they might close down of their own accord.
(Of course, I'm ignoring games that aren't entirely games of chance.)
This really doesn't have much to do with the market. While I don't know the details of gambling laws in all the US states and Indian nations, I would be very surprised if there were regulations on roulette odds. Many casinos have roulette wheels with only one 0 (paid as if 1/36, actual odds 1/37), and with other casino games, such as blackjack, casinos frequently change the rules as part of a promotion or to try to get better odds.
There is no "gambling market": casinos are places where people pay for entertainment, not to make money. While casinos do offer promotions and advertise favorable rules and odds, most people go for the entertainment, and no one who's serious about math and probability goes to make money (with exceptions for card-counting and poker tournaments, as orthonormal notes).
Also see Unnamed's comment. Essentially, the answer is that a casino is not a market.
A single casino is not a market, but don't all casinos and gamblers together form a market for something? Maybe it's a market for entertainment instead of prediction ability, but it's a market for something, isn't it? Moreover, it seems, at least naïvely, to be a market in which a casino would attract more customers by offering more realistic odds.
Some casinos in Vegas have European roulette with a smaller house edge. I know this from a Vegas guidebook which listed where you could find the best odds at various games suggesting that at least some gamblers seek out the best odds. The Wikipedia link also states:
Casinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.
Normally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.
The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house.
If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).
But no casino has a monopoly on roulette. Yet the market doesn't seem to drive the odds to their correct values. Dagon notes above that regulations make it hard to enter the market as a casino. Maybe that explains why my naive expectations don't happen.
Actually this raises another question for me. If I start a casino in Vegas, am I required to sell roulette bets as though the odds were p = 1/36, instead of, say, p = 1/37 ?
[Edited for lack of clarity.]
Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.
I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".
How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.
He is not high status as such, although he possibly could be if he didn't waste time being drunk.
Okay here goes the broad brush description of characters. Feel free to ask more questions to fill in details that you want.
Dramatis Personae
Mr G: Me. Tall scruffy geek. Takes little care of appearance. Tidy in social areas. Chats to everyone, remembers details of people's lives, although forgets people's names. Not particularly close (not facebook friends with any of the others). Doesn't bring girls/friends home. Can tell a joke or make a humorous observation but not a master, can hold his own in banter though. Little evidence of social circle apart from occasional visits to friends far away. Accommodating to people's niggles and competent at fixing stuff that needs fixing. Does a fair amount of house work, because it needs doing. Has never suggested going out with the others, but has gone out by himself to random things. Is often out doing work at uni when others are at home. Shares some food with others, occasionally.
Miss C: Assertive, short, fairly plump Canadian supply teacher. Is mocked by Mr S for canadianisms, especially when teaching the children that the British idiom is wrong. For example saying that learnt is not a word. Young, not very knowledgeable about current affairs/world. Boyfriend back home. Has smoked pot. Drinks and parties on the weekend, generally going out with friends from home. Facebook friends with the other 2 (I think). Fairly liberal. Came into the house a week before Mr G. Watches a lot of TV in the shared area. Has family and friends visit occasionally.
Miss B: Works in digital marketing (does stuff on managing virals). Dry sense of humour. Boyfriend occasionally comes to visit, boyfriend is teacher who wants to be a stand up comedian. Is away most weekends, visiting family or boyfriend. Gets on with everyone on a surface level. Fairly pretty although not a stunner. Can banter a bit, but not much. Plays up to the "ditzy" personae sometimes.
Favourite Family Guy character is Brian. Scared of spiders/insects, to the extent that she dreamt a giant spider was on her pillow and didn't know it was a dream and shrieked (was investigated by Mr G to make sure there was nothing). Newest house mate by a couple of months. Probably a bit more conservative than the rest of the house.
Mr S: Self described cocky ex-skater. Well travelled. Older than the others. Takes care in his dress although not uber smart, has expensive trainers. Has had quite a few dates and 3 girlfriends of various lengths of time, in the 10 months I have been here. Fairly high quality girls, from the evidence I have seen. Talks to everyone. Witty, urbane. Generous with his food and drink.
Does some house work, makes sure everyone knows about it. Fairly emotional. He complains about Miss C not doing housework to Mr G, upon occasion. On one occasion when Mr G does not reciprocate in the complaints, he gets angry, but not in a serious way. Apologises the day after.
Self-identifies as geek of sorts to Mr G to try and sway Mr G on various points. Mr G less than enthused. Asks people how stuff makes them feel.
His main problem is his drink and pot. He drinks himself to a stupor with almost clockwork regularity (he can be reliably zonked out on the living room sofa on a Sunday) and gets the munchies and steals food. He is always apologetic and replaces it when he does so and is confronted. Mr G doesn't confront when he suspects food has gone missing, although Miss C seems to get most of the food stolen and is most confrontational.
Often forgets conversations he has had when drunk. Gets angry upon occasions and crashes around slamming doors. He doesn't feel dangerous to Mr G at these points. Leaves the oven on with food in while asleep on said sofa. Miss B is worried by this behaviour and tells Mr G. Mr G not overly worried for himself, but can see her point.
The final straw that led to his being kicked out was when he was found walking around naked by Miss B in the kitchen. Miss C had tried to get him kicked out previously for eating food. Mr G was away.
He has lived in this flat for a year with other people and I don't think his behaviour has changed, so why did this set of people get him kicked out, when others hadn't? I'm guessing the moral of the story is don't be an alcoholic in general. But some people put up with worse behaviour.
This description seems very British and I'm not quite clear on some of it. For instance, I had no idea what a strop is. Urban Dictionary defines it as sulking, being angry, or being in a bad mood.
Some of the other things seem like they would only make sense with more cultural context, specifically the emphasis on bantering and making witty remarks.
I wouldn't say that this guy has great social skills, given his getting drunk and stealing food, slamming doors and walking around naked, and so forth. Pretty much the opposite, in fact.
As to why he got kicked out, I guess people finally got tired of the way he acted, or this group of people was less tolerant of it.
By social skills I meant what people with Aspergers lack naturally. Magnetism/charisma, etc. It is hard to get that across in a textual description. People with poor social skills here know not to get drunk and wander around naked, but can't charm the pants off a pretty girl. The point of the story is that having charisma is in itself not a get out of jail free card that is sometimes described here.
Sorry for the british-ness. It is hard to talk about social situations without thinking in my native idiom. I'll try and translate it tomorrow.
You're conflating a few different things here. There's seduction ability, which is its own unique set of skills (it's very possible to be good at seduction but poor at social skills; some of the PUA gurus fall in this category). There's the ability to pick up social nuances in real-time, which is what people with Aspergers tend to learn slower than others (though no one has this "naturally"; it has to be learned through experience). There's knowledge of specific rules, like "don't wander around naked". And charisma or magnetism is essentially confidence and acting ability. These skillsets are all independent: you can be good at some and poor at others.
Well, of course not. For instance, if you punch someone in the face, they'll get upset regardless of your social skills in other situations. What this guy did was similar (though perhaps less extreme).
Understood, and thanks for writing that story; it was really interesting. The whole British way of thinking is foreign to this clueless American, and I'm curious about it. (I'm also confused by the suggestion that being Facebook friends is a measure of intimacy.)
Interesting, I wouldn't have said that they were as independent as you make out. I'd say it is unusual to be confident with good acting ability and not be able to read social nuances (how do you know how you should act?). And confidence is definitely part of the PUA skillset. Apart from that I'd agree, there are different levels of skill.
When sober he was fairly good at everything. He would steer the conversations where he wanted, generally organise the flat to his liking and not do anything stupid like going around naked. If you looked at our interactions as a group, he would have appeared the Alpha.
His excuse for wandering around naked was that he thought he was alone and that he should have the right to go into the kitchen naked if he wanted to. I.e. he tried to brazen it out. That might give you some idea of his attitude, what he expected to get away with and that he had probably gotten away with it in the past.
Apart from the lack of common sense (when very drunk), I think his main problem was underestimating people or at least not being able to read them. He was too reliant on his feeling of being the Alpha to realise his position was tenuous. No one was relying upon the flat as their main social group, so no one cared about him being Alpha of that group.
You might get upset but still not be able to do anything against the Guy. See Highschool.
People use Facebook in a myriad of different ways. Some people friend everyone they come across, which means their friends lists gives little information. Mine is to keep an eye on the doings of people I care about. People I don't care about just add noise. So mine is more informative than most. Mr S. is very promiscuous with over 700 friends, I'm not sure about the other two.
I just assumed that for the sake of brevity he covered the other aspects under "etc". I would add in "intuitive aptitude for Machiavellian social politics".
Do I correctly interpret this to say that both Miss C and Miss B go out (drinking?) on the weekends, but not together?
Yup. Sorry, that wasn't clear.
Yes. And do not hesitate to use many many words.
Definitely.
I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.
The link to the video is Here: http://www.youtube.com/watch?v=FOYEJF7nmpE
I just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment.
del
Heard on #lesswrong:
(I hope posting only a log is ok)
More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.
In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!
Plus, he gives a lot of information from his personal experience.
Be warned, though: it's mixed with a lot of blame-the-government themes and certainty about future hyperinflation, and the preservation of real estate's value therein, if that kind of thing turns you off.
Edit: Okay, I've edited this comment about eight times now, but I left this out: from a rationality perspective, this essay shows the worst parts of Goodhart's Law: apparently, the old, functional criteria that would correctly identify some mortgage applicants is going to be mandated as the standard on all future mortgages. Yikes!
I've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.
Is there any philosophy worth reading?
As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.
For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.
However, at the same time I'm concerned that this leads me to read things that only reinforce the beliefs I already have. And there's little point in seeking information if it doesn't change your beliefs.
It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.
None that actively affiliate themselves with the label 'philosophy'.
Yoreth:
That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.
A bad way to draw conclusions. A good way to make significant updates based on inference.
Would you be so kind as to spell out the exact sort of "update based on inference" that applies here?
???
"People who say stupid things are, all else being equal, more likely to say other stupid things in related areas".
Yep, and note that Hegel's philosophy is related to states more than Newton's physics is related to the occult.
That's a very vague statement, however. How exactly should one identify those expressions of stupid opinions that are relevant enough to imply that the rest of the author's work is not worth one's time?
In the context of LessWrong it should be considered trivial to the point of outright patronising if not explicitly prompted. Bayesian inference is quite possibly the core premise of the community.
In the process of redacting my reply I coined the term "Freudian Double-Entendre". Given my love of irony I hope the reader appreciates my restraint! <-- Example of a very vague statement. In fact if anyone correctly follows that I expect I would thoroughly enjoy reading their other comments.
Nobody knows (obviously), but you can try to train your intuition to do that well. You'd expect this correlation to be there.
The value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology.
For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut.
(What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)
Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist.
I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile.
I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in.
PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.
Maybe LW should have resident intellectual historians who read philosophy. They could distill any actual insights from dubious, old or badly written philosophy, and tell if a work is worthy reading for rationalists.
I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.
Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow:

Lakatos: "Proofs and Refutations"
Quine: "Two Dogmas of Empiricism"
Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions"
All of these have things which are wrong but they make arguments that need to be grappled with and understood (Copernican Revolution is more of a history book than a philosophy book but it helps present a case of Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case- I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case such that I find weaker versions of his claims to be highly plausible. Kuhn also is just an excellent writer full of interesting factual tidbits.
This seems like in general not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff not just philosophy. And some of his philosophy is worth understanding simply due to the fact that later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion. His dualism is much more well-defined than people before him. Hegel however is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?
This is an understandable sentiment, but it's pretty harsh. Everybody makes mistakes -- there is no such thing as a perfect scholar, or perfect author. And I think that when Descartes is studied, there is usually a good deal of critique and rejection of his ideas. But there's still a lot of good stuff there, in the end.
I have found Foucault to be a very interesting modern philosopher/historian. His book, I believe entitled "Madness and civilization", (translated from French), strikes me as a highly impressive analysis on many different levels. His writing style is striking, and his concentration on motivation and purpose goes very, very deep.
Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.
However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.
Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?
You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?
An akrasia fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.
This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist".
If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter recently had a post on the NY Times article.
The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam", a pseudoscience, science fiction (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish, more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist.
All in all, a delightful cornucopia of irrationality.
ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post. There was also a comment from someone registered with Alcor that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received.
Also, check out this hilarious description of Robin Hanson from a commenter there:
I guess that the fatal problem with cryonics is all the freaking nerds interested in it.
The responses are interesting. I think this is the most helpful to my understanding:
I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.
um... why isn't it? There's a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you're blowing money on mere possibility.
We don't normally make bets that depend on the future development of currently unknown technologies. We aren't all investing in cold fusion just because it would be really awesome if it panned out.
Sorry, I know this is a cryonics-friendly site, but somebody's got to say it.
There isn't no way to estimate it. We can make reasonable estimations of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.).
Moreover, it is always possible to estimate something's likelihood, and one cannot simply say "oh, this is difficult to estimate accurately, so I'll assign it a low probability." For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A. Obviously, A and ~A can't both have low probabilities.
That's true; uncertainty about A doesn't make A less likely. It does, however, make me less likely to spend money on A, because I'm risk-averse.
Have you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?
Maybe $50. That's around the most I'd be willing to accept losing completely.
Nice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount.
Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?
That's ok, it's a skepticism friendly site as well.
I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.
Well, if you think there's a decent probability for cryonics to turn out then investing in pretty much anything long-term becomes much more likely to be personally beneficial. Indeed, research in general increases the probability that cryonics will end up working (since it reduces the chance of catastrophic events or social problems and the like occurring before the revival technology is reached). The problem with cold fusion is that it is extremely unlikely to work given the data we have. I'd estimate that it is orders of magnitude more likely that, say, Etale cohomology turns out to have a practical application than it is that cold fusion will turn out to function. (I'm picking Etale cohomology as an example because it is pretty but very abstract math that as far as I am aware has no applications and seems very unlikely to have any applications for the foreseeable future.)
You don't think it likely that etale cohomology will be applied to cryptography? I'm sure there are papers already claiming to apply it, but I wouldn't want to evaluate them. Some people describe it as part of Schoof's algorithm, but I'm not sure that's fair. (or maybe you count elliptic curve cryptography as whimsy - it won't survive quantum computers any longer than rsa)
Yeah, ok. That may have been a bad example, or it may be an indication that everything gets some application. I don't know how it relates to Schoof's algorithm. It isn't, as far as I'm aware, used in the algorithm or in the correctness proof, but this is stretching my knowledge base. I don't have enough expertise to evaluate any claims about applying Etale cohomology to cryptography.
I'm not sure what to replace that example with. Stupid cryptographers going and making my field actually useful to people.
There are a lot of alternatives to fusion energy and since energy production is a widely recognized societal issue, making individual bets on that is not an immediate matter of life and death on a personal level.
I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it.
However, if you give 1% chance for both fusion and cryonics to work, it could still make sense to bet on the latter but not on the first.
Don't read too much into my fusion analogy; you're right that cryonics is different than fusion.
... and different to almost any other unproven technology (for the exact same reason).
May I suggest also that we be careful to distinguish cold fusion from fusion in general? Cold fusion is extremely unlikely. Hot fusion reactors whether laser confinement or magnetic confinement already exist, the only issue is getting them to produce more useful energy than you put in. This is very different than cold fusion where the scientific consensus is that there's nothing fusing.
Well, right off the bat, there's a difference between "cryonics is a scam" and "cryonics is a dud investment". I think there's sufficient evidence to establish the presence of good intentions - the more difficult question is whether there's good evidence that resuscitation will become feasible.
You seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It's just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5.
Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can't be too close to 0 or 1. But not having enough evidence doesn't excuse you from having to estimate the probability of something before accepting or rejecting it.
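The "0.5 for a statement with no evidence" idea can be illustrated with Laplace's rule of succession; this is my choice of estimator for illustration, not something the commenter specifies:

```python
# Laplace's rule of succession: one simple way to turn k successes in
# n observed trials into a probability estimate. With no evidence it
# gives 0.5, and with little evidence it stays away from 0 and 1.
def laplace_estimate(k, n):
    return (k + 1) / (n + 2)

print(laplace_estimate(0, 0))    # 0.5 -- nothing known: maximum uncertainty
print(laplace_estimate(50, 100)) # 0.5 -- a well-tested fair coin
print(laplace_estimate(0, 8))    # 0.1 -- few trials keep the estimate off 0
```

The two 0.5s come from very different amounts of evidence, which is exactly the distinction being drawn: the estimates are equally valid, but only one of them is robust.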
I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don't you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.
Example: If you hear someone whom you don't specifically suspect to have a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24 that statement probably should be assigned a probability quite a bit higher than 0.5 even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random you probably should assign a probability very close to 0.
Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.
How Facts Backfire
There are a number of ways you can run with this article. It is interesting seeing it in the major press. It is also a little ironic that it is presenting facts to try to overturn an opinion (namely, the opinion that facts cannot be good for overturning opinions).
This matters for existential risk and for thinking better in general. Obviously facts can sometimes overturn opinions, but it makes me wonder: where is the organisation that uses non-fact-based methods to sway opinion about existential risk? It would make sense for them to be separate; the fact-based organisations (SIAI, FHI) need to be honest so that people who are fact-philic to their message will trust them. I tend to ignore the fact-phobic (with respect to existential risk) people. But if it became sufficiently clear that foom-style AI was possible, engineering society would become necessary.
Interesting tidbit from the article:
I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.
Data point: After years of having the correct arguments in my hand, having indeed generated many of them myself, and simply refusing to update, Eliezer, Cectic, and Dan Meissler ganged up on me and got the job done.
I think Jesus and Mo helped too, now I think of it. That period's already getting murky in my head =/
Anyhow, point is, none of the above are what you'd call gentle.
ETA: I really do think humor is incredibly corrosive to religion. Years before this, the closest I ever came to deconversion was right after I read "Kissing Hank's Ass"
I'd guess aggression would have a polarising effect, depending upon ingroup or outgroup affiliation.
Aggression from a member of your own group is directed at something important that you ought to take note of. Aggression from an outsider is possibly directed at you, so it is something to be ignored (if not credible) or countered.
We really need some students to do some tests upon, or a better way of searching psych research than google.
Presumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.
I think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important.
On the other hand, if you e.g. independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire, and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)
I believe aggressive debates are not about convincing the people you are debating with, that is likely to be impossible. Instead it is about convincing third parties who have not yet made up their mind. For that purpose it might be better to take an overly extreme position and to attack your opponents as much as possible.
The primary study in question is here. I haven't been able to locate online a copy of the study about self-esteem and corrections.
The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.
Now try the sequel (another YouTube video).
Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.
More here.
I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?
Definitely wait. My personal favorite timing is one day for each new (substantial) post.
When thinking about my own rationality I have to identify problems. This means that I write statements like "I wait too long to make decisions; see X, Y". Now I worry that by stating this as a fact I somehow anchor it more deeply in my mind, and make myself act more in accordance with that statement. Is there actually any evidence for that? And if so, how do I avoid this problem?
On the complete implausibility of the History Channel.
That is brilliant.
SiteMeter gives some statistics about number of visitors that LessWrong has, per hour/per day/per month, etc.
According to the SiteMeter FAQ, multiple views from the same IP address are considered to be the same "visit" only if they're spaced by 30 minutes or less. It would be nice to know how many visitors LessWrong has over a given time interval, where two visits are counted as the same if they come from the same IP address. Does anyone know how to collect this information?
I was examining some of the arguments for the existence of god that separate beings into contingent (exist in some worlds but not all) and necessary (exist in all worlds). And it occurred to me that if the multiverse is indeed true, and its branches are all possible worlds, then we are all necessary beings, along with the multiverse, a part of whose structure we are.
Am I retreating into madness? :D
I'm curious what people's opinions are of Jeff Hawkins' book 'On Intelligence', and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further arguments for this or something, cheers
Intelligence-as-prediction/compression is a pretty familiar idea to LWers; there are a number of posts on them which you can find by searching, or you can try looking into the bibliographies and links in:
(I have no comments anent On Intelligence specifically. I remember it as being pretty vague as to specifics, and not very dense at all - unobjectionable.)
That is really a beautiful comment.
It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.
One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.
I always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.
That's more comparable to being in a long coma with some uncertain possibility of waking up from it, so perhaps it could be reframed along those lines; some people probably do specify that they should be taken off of life support if they are found comatose, but to choose to be kept alive is not socially disapproved of, as far as I know.
From a recent newspaper story:
I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?
The most eyebrow-raising part of that article:
From the article (there is a near invisible more text button)
And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)...
I did see an analysis of this somewhere, I'll try and dig it up. Here it is. There is hackernews commentary here.
I find this, from the original msnbc article, depressing
It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24.
Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also one in 10^24, and this happens every four weeks.
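The arithmetic in the two comments above can be checked exactly; the one-in-10^6 figure is the illustrative number from the comment, not the real lottery's odds:

```python
from fractions import Fraction

# Illustrative chance of one specified ticket winning one drawing.
p_one = Fraction(1, 10**6)

# Four specified tickets each winning their own drawing: multiply the
# independent per-drawing probabilities.
p_four_specified = p_one ** 4
print(p_four_specified == Fraction(1, 10**24))  # True

# The catch: *somebody* wins essentially every drawing, so "four winners
# across four weeks" is routine. The 1-in-10^24 figure only applies if
# you name the four winners in advance.
```

Using `Fraction` keeps the arithmetic exact; a float would silently lose precision at these magnitudes.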
Day-to-day question:
I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.
What would you do, why, and how long would you keep doing it?