Open Thread: July 2010, Part 2
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
That is really a beautiful comment.
It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.
One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.
I always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.
I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.
The link to the video is here: http://www.youtube.com/watch?v=FOYEJF7nmpE
del
Pen-and-paper interviews would almost certainly be more accurate. The problem is that images of people writing on paper are especially un-cinematic. The participants were encouraged to take as much time as they needed, and many took several minutes before responding to some questions. However, most of them were concerned with how much time the interview would take, and their quick responses were self-imposed.
As to whether the evidence is too messy to draw firm conclusions from: I agree that it is. This is an inherent problem with documentaries. Omissions of fact are easily justified. Also, just as in fiction films, manipulation of the audience is often sought after more than accuracy.
I just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment.
Heard on #lesswrong:
(I hope posting only a log is ok)
Hopefully this provides incentive for people to kick Eliezer's ass at FAI theory. You don't want to look cultish, do you?
Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.
If they think that we'll all eventually die even with cryonics, and they think that death gives meaning to life, then they don't need to worry about cryonics removing meaning, since it merely pushes back the time until death. (I wouldn't bother addressing the claim that death gives meaning to life, except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.)
As to the problem of too many people, overpopulation is a massive problem whether or not a few people get cryonically preserved.
As to the problem of just the rich getting the benefits, patiently explain that there's no reason to think that the rich now will be treated substantially differently from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension, then the likely standard of living will be so high compared to now that even if the rich have more, it won't matter.
I talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation.
"Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."
Are any LWers familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.
Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).
This seems like a great way to resolve academic disputes. Philip Tetlock appears to be an advocate. What do you think?
Paul Graham on guarding your creative productivity:
So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.
Now, being a professional exorcist does not give a high prior for rationality.
But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.
I wonder if this uncriticality has anything to do with, well, not expecting to be criticized. If most of the hacks that humans use in place of rationality are socially motivated, we can safely turn them off when speaking to a child who doesn't know any better.
I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?
Probably not very, because we can't actually imagine what that hypothetical person would say to us. It'd probably end up used as a way to affirm your positions by only testing strong points.
I do it sometimes, and I think it helps.
An akrasia fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.
This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist".
There's a course "Street-Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.
Might be relevant to the simple math of everything stuff.
From a recent newspaper story:
I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?
From the article (there is a near invisible more text button)
And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)...
I did see an analysis of this somewhere, I'll try and dig it up. Here it is. There is hackernews commentary here.
I find this, from the original msnbc article, depressing
Is it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must have figured out something we don't know, given that she's won four times?
It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24.
Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also 1 in 10^24, and this happens every four weeks.
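The multiplication in the two comments above can be checked in a couple of lines. (The 1-in-10^6 single-ticket odds are the commenter's hypothetical round number, not the actual lottery's odds.)

```python
import math

# Hypothetical single-ticket odds from the comment above: 1 in 10**6.
p_single = 1 / 10**6

# Chance that four *specified* tickets win four specified drawings:
# the four events are independent, so the probabilities multiply.
p_four_specified = p_single ** 4
assert math.isclose(p_four_specified, 1e-24)

# By contrast, *some* ticket wins each drawing, so "four people win
# four consecutive drawings" happens with near certainty -- the tiny
# number only applies to four wins specified in advance.
print(f"four specified wins: about 1 in {1 / p_four_specified:.3g}")
```

This is the same distinction the comment draws: the astronomically small number describes a conjunction picked out in advance, not the unremarkable event that somebody won each week.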
The most eyebrow-raising part of that article:
It's also far more likely that she cheated. Or that there is a conspiracy in the lottery to make her win four times.
Is there any philosophy worth reading?
As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.
For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.
However, at the same time I'm concerned that this leads me to read things that only reinforce the beliefs I already have. And there's little point in seeking information if it doesn't change your beliefs.
It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.
You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?
Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.
However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.
Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?
Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist.
I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile.
I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in.
PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.
Yoreth:
That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.
The value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology.
For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut.
(What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)
Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow:
Lakatos: "Proofs and Refutations". Quine: "Two Dogmas of Empiricism". Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolutions".
All of these have things which are wrong, but they make arguments that need to be grappled with and understood ("The Copernican Revolution" is more of a history book than a philosophy book, but it presents a case study of Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case: I think that his general thesis about how science operates and what science is, is wrong, but he makes a strong enough case that I find weaker versions of his claims highly plausible. Kuhn is also just an excellent writer, full of interesting factual tidbits.
This seems like a bad attitude in general. The Descartes case is especially relevant in that Descartes did a lot of work beyond philosophy. And some of his philosophy is worth understanding simply because later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion: his dualism is much more well-defined than that of people before him. Hegel, however, is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?
I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.
Daniel Dennett and Linda LaScola on Preachers who are not believers:
Day-to-day question:
I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.
What would you do, why, and how long would you keep doing it?
How often do you find frogs in the stairwell? Could it make sense to carry something (a plastic bag?) to pick up the frog with so that you don't get slime on your hands?
If it were me, I think I'd go with plastic bag or other hand cover, possibly have room temperature water with me (probably good enough for frogs, and I'm willing to drink the stuff), and put the frog on the lawn unless I'm in the mood for a bit of a walk and seeing the woods.
I have no doubt that I would habitually wonder whether there are weird events in people's lives which are the result of interventions by incomprehensibly powerful beings.
2: I would put the frog in the grass. Warm fuzzies are a great way to start the day, and it only costs 30 seconds.
If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help.
Also, wash your hands before eating.
The goal of helping frogs is to gain fuzzies, not utilons. Thinking about all the frogs that you don't have the opportunity to help would mean losing those fuzzies.
There's no utility in saving (animal) life? Or is that only for this particular instance?
Edit 20-Jun-2014: Frogs saved since my original post: 21.5. Frogs I've failed to save: 23.5.
I don't consider frogs to be objects of moral worth.
Why not?
Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?
(Edited for clarity.)
-- David Pearce via Facebook
I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?
Hopefully he still thinks there's a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.
I have a question about prediction markets. I expect that it has a standard answer.
It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.
Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?
The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.
Right. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players.
But that's not what I was trying to get at with "predictably inefficient". What I meant was this:
Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot.
Won't you be consistently wrong if you try that strategy? If so, how is this consistent wrongness accounted for in futarchy theory?
I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem?
I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: The market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.
In the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).
Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38.
This table shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
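The straight-up numbers quoted above can be verified directly. A minimal sketch, using the figures stated in the comment (American wheel, 38 pockets, $35 profit on a $1 winning bet):

```python
# American roulette: 38 pockets, a $1 straight-up bet pays $35 profit.
payout_profit = 35
pockets = 38

# Probability implied by the payout if the bet were fair (35-to-1 odds).
implied_p = 1 / (payout_profit + 1)   # 1/36 -- a consistent overestimate

# True probability of hitting one specific number.
true_p = 1 / pockets                  # 1/38

# Expected value of a $1 bet: win $35 with probability true_p,
# lose the $1 stake otherwise.
ev = true_p * payout_profit - (1 - true_p)

print(f"implied p = {implied_p:.4f}, true p = {true_p:.4f}, EV = {ev:.4f}")
```

The EV works out to -2/38, about -5.3 cents per dollar: exactly the gap between the payout-implied probability and the true one, which is the house edge the surrounding discussion is pointing at.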
One way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager).
Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.
I have also heard of at least one professional gambler who makes his living by identifying and confronting other peoples' superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong.
Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.
The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house.
If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).
Casinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.
Normally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.
It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:
We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)
This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.
Thoughts? (If someone's said this before, I apologize for not remembering it.)
I dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign to a connotation of unsolved problem.
It's worth checking on the Stanford Encyclopedia of Philosophy when this kind of issue comes up. It looks like this view - emergent=hard to predict from low-level model - is pretty mainstream.
The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower."
There's more there, some of which seems sensible and some of which I don't understand.
The only problem with that seems to be that when people talk about emergent behavior they seem to be more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior"- that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one but one would need to be careful that the common connotations don't cause confusion.
On the complete implausibility of the History Channel.
That is brilliant.
Reading Michael Vassar's comments on WrongBot's article (http://lesswrong.com/lw/2i6/forager_anthropology/2c7s?c=1&context=1#2c7s) made me feel that the current technique of learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).
I was wondering whether there might be a more focused way of doing this, i.e. a short-term workshop: a few writers who have been promoted offer feedback to a few writers who are struggling to develop the necessary rigour, providing a faster feedback cycle, the chance to redraft an article rather than starting totally afresh, and just general advice.
Some people may not feel that this is very beneficial - there's no need for writing for LW to be made easier (in fact, possibly the opposite) - but first off, I'm not talking about making writing for LW easier; I'm talking about making more of the writing higher quality. And secondly, I certainly learn a lot better when given a chance to interact on that extra level. I think learning to write at an LW level is an excellent way of achieving LW's aim of helping people to think at that level.
I'm a long time lurker but I haven't even really commented before because I find it hard to jump to that next level of understanding that enables me to communicate anything of value. I wonder if there are others who feel the same or a similar way.
Good idea? Bad idea?
We could use a more structured system, perhaps. At this point, there's nothing to stop you from writing a post before you're ready, except your own modesty. Raise the threshold, and nobody will have to yell at people for writing posts that don't quite work.
Possibilities:
Significantly raise the minimum karma level.
An editorial system: a more "advanced" member has to read your post before it becomes top-level.
A wiki page about instructions for posting. It should include: a description of appropriate subject matter, formatting instructions, common errors in reasoning or etiquette.
A social norm that encourages editing (including totally reworking an essay.) The convention for blog posts on the internet in general mandates against editing -- a post is supposed to be an honest record of one's thoughts at the time. But LessWrong is different, and we're supposed to be updating as we learn from each other. We could make "Please edit this" more explicit.
A related thought on karma -- I have the suspicion that we upvote more than we downvote. It would be possible to adjust the site to keep track of each person's upvote/downvote stats. That is, some people are generous with karma, and some people give more negative feedback. We could calibrate ourselves better if we had a running tally.
Another technical solution. Not trivial to implement, but also contains significant side benefits.
Some karma + passing test gets top posting privileges.
I have to confess I abused my newly acquired posting privileges and probably diluted the site's value with a couple of posts. Thank goodness they were rather short :). I took the hint though and took to participating in the comment discussion and reading sequences until I am ready to contribute at a higher level.
Kuro5hin had an editorial system, where all posts started out in a special section where they were separate and only visible to logged in users. Commenters would label their comments as either "topical" or "editorial", and all editorial comments would be deleted when the post left editing; and votes cast during editing would determine where the post went (front page, less prominent section, or deleted).
Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though.
We upvote much more than we downvote - just look at the mean comment and post scores. Also, the number of downvotes a user can make is capped at their karma.
Is there any consensus about the "right" way to write a LW post? I see a lot of diversity in style, topic, and level of rigor in highly-voted posts. I certainly have no good way to tell if I'm doing it right; Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was. (Voting is not solely determined by post quality; this is a big part of the problem.)
I would certainly love to have a better way to get feedback than the current mechanisms; it's indisputable that my writing could be better. Being able to workshop posts would be great, but I think it would be hard to find the right people to do the workshopping; off the top of my head I can really only think of a handful of posters I'd want to have doing that, and I get the impression that they're all too busy. Maybe not, though.
(I think this is a great idea.)
I didn't think there was anything particularly wrong with your post, but newer posts get a much higher level of karma than old ones, which must be taken into account. Some of the core sequence posts have only 2 karma, for example.
Rationality applied to swimming
The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, and the good swimmers looked very different, so he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.
Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.
I'm posting this partly because it's always a pleasure to see rationality, partly because the most recent chapter of Methods of Rationality reminded me of it, and mostly because it's a fine example of clue acquisition.
Two things of interest to Less Wrong:
First, there's an article about intelligence and religiosity. I don't have access to the papers in question right now, but the upshot is apparently that the correlation between intelligence (as measured by IQ and other tests) and irreligiosity can be explained with minimal emphasis on intelligence itself, and more on the ability to process information and to accurately estimate one's own knowledge base. They found, for example, that people who were overconfident about their knowledge level were much more likely to be religious. There may still be correlation vs. causation issues, but tentatively it looks like having fewer cognitive biases and better default rationality actually makes one less religious.
The second matter of interest to LW: Today's featured article on the English Wikipedia is the article on confirmation bias.
Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?
If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariably (out of 5 or so occasions when I brought it up) replied with something along the lines of 'Well I bet he/she is secretly horribly depressed / a horrible person / full of ennui / not well-liked by friends and family". This is kind of the opposite of the halo effect. It could be that this bias only occurs when someone is asked to evaluate the overall goodness of someone who they themselves have not gotten the chance to respect or see as high status.
Anyway, I know Eliezer had a post called 'competent elites' or summat along these lines, but I'm not sure if this effect is a previously researched bias I'm half-remembering or if it's just a natural consequence of some other biases (e.g. just world bias).
Added: Alternative hypothesis that is more consistent with the halo effect and physical attractiveness stereotype data: my friends are themselves exceptionally physically attractive and competent but have compensatory personal flaws or depression or whatever, and are thus generalizing from one or two examples when assuming that others that share similar traits as themselves would also have such problems. I think this is the more likely of my two current hypotheses, as my friends are exceptionally awesome as well as exceptionally angsty. Aspiring rationalists! Empiricists and theorists needed! Do you have data or alternative hypotheses?
It may have to do with the manner you bring it up - it's not hard to see how saying something like "X is amazing" could be interpreted "X is amazing...and you're not" (after all, how often do you tell your friends how amazing they are?), in which case the bias is some combination of status jockeying, cognitive dissonance and ego protection.
Wow, that seems like a very likely hypothesis that I completely missed. Is there some piece of knowledge you came in with, or heuristic you used, that I could have used to think up your hypothesis?
I've spent some time thinking about this, and the best answer I can give is that I spend enough time thinking about the origins and motivations of my own behavior that, if it's something I might conceivably do right now, or (more importantly) at some point in the past, I can offer up a possible motivation behind it.
Apparently this is becoming more and more subconscious, as it took quite a bit of thinking before I realized that that's what I had done.
Could it be a matter of being excessively influenced by fiction? It's more convenient for stories if a character has some flaws and suffering.
http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/06/28/risky-business-james-bagian-nasa-astronaut-turned-patient-safety-expert-on-being-wrong.aspx
This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.
The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.
Now try the sequel (another YouTube video).
Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.
More here.
Thought without Language Discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.
Edited because I labeled the link as "Language without Thought" -- this counts as an example of itself.
Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning and specialized AI are becoming increasingly commonplace, to the point where they are being used for very mundane tasks.
Somebody said that the reason there is no progress in AI is that once a problem domain is understood well enough that there are working applications in it, nobody calls it AI any longer.
I think philosophy is a similar case. Physics used to be squarely in philosophy, until it was no longer a confused mess, but actually useful. Linguistics too used to be considered a branch of philosophy.
As did economics.
Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.
I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".
How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.
What's the deal with programming, as a career? It seems like the lower levels at least should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.
E.g., Fizzbuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), so here's what I'd tell the computer to do:
#
Proceed from 0 to x, in increments of 1 (where x = whatever)
If divisible by 3, remainder 0, associate fizz with number
If divisible by 5, remainder 0, associate buzz with number
Make ordered list from o to x, of numbers associated with fizz OR buzz
For numbers associated with fizz NOT buzz, append fizz
For numbers associated with buzz NOT fizz, append fizz
For numbers associated with fizz AND buzz, append fizzbuzz
#
I ask out of interest in acquiring money, on elance, rentacoder, odesk etc. I'm starting from a position of total ignorance, but y'know, it doesn't seem like learning C, and understanding Concrete Mathematics and TAOCP in a useful or even deep way, would be the work of more than a year, while it would place one well above average in some domains of this activity.
Or have I missed something really obvious and important?
I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).
This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.
Joel Spolsky gave a similar explanation.
Makes sense.
I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).
Do you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer.
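For concreteness, a Forth-like language is essentially just a stack machine, so the core of such an interpreter is tiny. A minimal sketch (the handful of words supported here is my own arbitrary choice, not any particular Forth dialect):

```python
def forth_eval(source):
    """Evaluate a tiny Forth-like program; return the final data stack.

    Supports integer literals and the words: + - * dup drop swap.
    A minimal sketch, not a real Forth (no return stack, no definitions).
    """
    stack = []
    for token in source.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "-":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif token == "dup":
            stack.append(stack[-1])
        elif token == "drop":
            stack.pop()
        elif token == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:
            # Anything that isn't a known word is an integer literal.
            stack.append(int(token))
    return stack
```

Real Forths add a return stack, user-defined words, and so on, but the central read-a-token, pop-operands, push-result loop is this small.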
Is this reassuring?
Yes
Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky.
Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D
From what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.
I don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (it appends fizz both to the fizz-only numbers and to the buzz-only numbers). This is not a particularly compelling demonstration of your point that it should be easy.
(I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)
The bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.)
Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.
This seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)
Your general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated.
Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:
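For instance, something along these lines (one of many ways to write it; 1 to 100 is the usual range):

```python
def fizzbuzz(n):
    """Return "Fizz", "Buzz", "FizzBuzz", or the number itself as a string."""
    if n % 15 == 0:      # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```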
This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter and use it to solve some Project Euler problems for finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
How did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?
The least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publically, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration.
(I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to fizzbuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)
I think there are failure modes from the curiosity-stopping anti-epistemology cluster that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn it. With the right approach, anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so it's often not worth it).
My first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and never stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview.
Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-)
ETA: I just noticed that I didn't answer your last question. Not sure what to recommend to competent programmers because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.
I was very surprised by this until I read the word "Moscow."
--"Epigrams in Programming", by Alan J. Perlis; <small>ACM's SIGPLAN publication, September, 1982
Programming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.
Do we have any refs for this? I know there's "The Camel Has Two Humps" (Alan Kay on it, the PDF), but anything else?
I'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you.
The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace.
Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.
I think you overestimate human curiosity, for one. Not everyone implements prime searching or Conway's Game of Life for fun. For two, even those that implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by non-competent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.
http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG
Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used.
Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?
There really are people who would not take that few hours.
I don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial.
So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.
There are really people who don't get pointers.
One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.)
Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
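(The same pattern can be sketched in a higher-level language, where functions are first-class values; all the names below are invented purely for illustration. In C these would be function pointers stored in a struct field.)

```python
# A structure carrying a reference to the function that should process
# it, with the reference swapped at runtime based on the environment.

def handle_quiet(msg):
    return msg["text"].lower()

def handle_loud(msg):
    return msg["text"].upper() + "!"

# The structure embeds its own handler, like a function pointer field.
message = {"text": "Hello", "handler": handle_quiet}

# Swap the embedded function depending on some environmental parameter.
urgent = True
if urgent:
    message["handler"] = handle_loud

# As noted above, the handler's first argument is the structure itself.
result = message["handler"](message)
print(result)
```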
Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?
Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.
I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.
Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.
EDIT: No, it wasn't a side effect, Roko did it on purpose.
Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.
And that is exactly what Professor Quirrell would say!
Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.
cf. Order of the Stick on the double-bluff.
Of course <level of reasoning plus one> as you know very well. :)
In a certain sense, it is.
Of course, we already established that you're Light Yagami.
I'm not sure we should believe you.
I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.
I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.
ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.
I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current views, and a useful bridge between aspects of LW that are less consilient. :(
So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.
For example, consider these posts, and comments on them, that you deleted:
I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.
Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).
It's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result.
(This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)
What "treatment" did you have in mind? At best, Roko made a honest mistake, and the deletion of a single post of his was necessary to avoid more severe consequences (such as FAI never being built). Roko's MindWipe was within his rights, but he can't help having this very public action judged by others.
What many people will infer from this is that he cares more about arguing for his position (about CEV and other issues) than honestly providing info, and now that he has "failed" to do that he's just picking up his toys and going home.
Parent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).
Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.
Could the people who have such links post them here?
And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.
I don't blame him for removing all of his contributions after his post was treated like that.
It's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.
I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.
I'm not them, but I'd very much like your comment to stay here and never be deleted.
Does not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.
Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.
The reaction seems to be basically, "but they're wrong, why should they get to use that term?"
Huh?
I haven't regularly read Slashdot in several years, but I seem to recall that it was like that pretty much all the time.
There's a legitimate reason to not want ID proponents and creationists to use the term "evolutionist" although it isn't getting stated well in that thread. In particular, the term is used to portray evolution as an ideology with ideological adherents. Thus, the use of the term "evolutionism" as well. It seems like the commentators in question have heard some garbled bit about that concern and aren't quite reproducing it accurately.
A second post has been banned. Strange: it was on a totally different topic from Roko's.
(comment edited)
I wonder why PlaidX's post isn't getting deleted - the discussion there is way closer to the forbidden topic.
Still the sort of thing that will send people close to the OCD side of the personality spectrum into a spiral of nightmares, which, please note, has apparently already happened in at least two cases. I'm surprised by this, but accept reality. It's possible we may have more than the usual number of OCD-side-of-the-spectrum people among us.
Was the discussion in question epistemologically interesting (vs. intellectual masturbation)? If so, how many OCD personalities joining the site would it take to justify closing the thread? I am curious about the decision criteria. Thanks.
As an aside, I've had some SL-related psychological effects, particularly related to material notion of self: a bit of trouble going to sleep, realizing that logically there is little distinction from death-state. This lasted a short while, but then you just learn to "stop worrying and love the bomb". Besides "time heals all wounds" certain ideas helped, too. (I actually think this is an important SL, though it does not sit well within the SciFi hierarchy).
This worked for me, but I am generally very low on the OCD scale, and I am still mentally not quite ready for some of the discussions going on here.
It is impossible to have rules without Mr. Potter exploiting them.
John Hari - My Experiment With Smart Drugs (2008)
How does everyone here feel about these 'Smart Drugs'? They seem quite tempting to me, but are there candidates that have been in use long enough to be considered safe?
It surprised me that he didn't consider taking provigil one or two days a week.
It also should have surprised me (but didn't-- it just occurred to me) that he didn't consider testing the drugs' effects on his creativity.
There's some discussion here and here.
Have any LWers traveled the US without a car/house/lot-of-money for a year or more? Is there anything an aspiring rationalist in particular should know on top of the more traditional advice? Did you learn much? Was there something else you wish you'd done instead? Any unexpected setbacks (e.g. ended up costing more than expected; no access to books; hard to meet worthwhile people; etc.)? Any unexpected benefits? Was it harder or easier than you had expected? Is it possible to be happy without a lot of social initiative? Did it help you develop social initiative? What questions do you wish you would have asked beforehand, and what are the answers to those questions?
Actually, any possibly relevant advice or wisdom would be appreciated. :D
I figure the open thread is as good as any for a personal advice request. It might be a rationality issue as well.
I have incredible difficulty believing that anybody likes me. Ever since I was old enough to be aware of my own awkwardness, I have the constant suspicion that all my "friends" secretly think poorly of me, and only tolerate me to be nice.
It occurred to me that this is a problem when a close friend actually said, outright, that he liked me -- and I happen to know that he never tells even white lies, as a personal scruple -- and I simply couldn't believe him. I know I've said some weird or embarrassing things in front of him, and so I just can't conceive of him not looking down on me.
So. Is there a way of improving my emotional response to fit the evidence better? Sometimes there is evidence that people like me (they invite me to events; they go out of their way to spend time with me; or, in the generalized professional sense, I get some forms of recognition for my work). But I find myself ignoring the good and only seeing the bad.
Update for the curious: did talk to a friend (the same one mentioned above, who, I think, is a better "shrink" than some real shrinks) and am now resolved to kick this thing, because sooner or later, excessive approval-seeking will get me in trouble.
I'm starting with what I think of as homebrew CBT: I will not gratuitously apologize or verbally belittle myself. I will try to replace "I suck, everyone hates me" thoughts with saner alternatives. I will keep doing this even when it seems stupid and self-deluding. Hopefully the concrete behavioral stuff will affect the higher-level stuff.
After all. A mathematician I really admire gave me career advice -- and it was "Believe in yourself." Yeah, in those words, and he's a logical guy, not very soft and fuzzy.
For what it's worth, this is often known as Imposter Syndrome, though it's not any sort of real psychiatric diagnosis. Unfortunately, I'm not aware of any reliable strategies for defeating it; I have a friend who has had similar issues in a more academic context and she seems to have largely overcome the problem, but I'm not sure as to how.
Alicorn's Living Luminously series covers some methods of systematic mental introspection and tweaking like this. The comments on alief are especially applicable.
An object lesson in how not to think about the future:
http://www.futuretimeline.net/
(from Pharyngula)
Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background. http://web.archive.org/web/20000709213303/www.lis.pitt.edu/~dsl/hailfinder/probms2.html
How Facts Backfire
There are a number of ways you can run with this article. It is interesting seeing it in the major press. It is also a little ironic that it is presenting facts to try and overturn an opinion (that information cannot be good for trying to overturn an opinion).
In terms of existential risk and thinking better in general: obviously sometimes facts can overturn opinions, but it makes me wonder, where is the organisation that uses non-fact-based methods to sway opinion about existential risk? It would make sense for them to be separate: the fact-based organisations (SIAI, FHI) need to be honest so that people who are fact-philic to their message will trust them. I tend to ignore the fact-phobic (with respect to existential risk) people. But if it became sufficiently clear that foom-style AI was possible, engineering society would become necessary.
Interesting tidbit from the article:
I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.
Data point: After years of having the correct arguments in my hand, having indeed generated many of them myself, and simply refusing to update, Eliezer, Cectic, and Dan Meissler ganged up on me and got the job done.
I think Jesus and Mo helped too, now I think of it. That period's already getting murky in my head =/
Anyhow, point is, none of the above are what you'd call gentle.
ETA: I really do think humor is incredibly corrosive to religion. Years before this, the closest I ever came to deconversion was right after I read "Kissing Hank's Ass"
I'd guess aggression would have a polarising effect, depending upon ingroup or outgroup affiliation.
Aggression from a member of your own group is directed at something important that you ought to take note of. Aggression from an outsider is possibly directed at you, so it is something to be ignored (if not credible) or countered.
We really need some students to run some tests on, or a better way of searching psych research than Google.
Presumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.
I believe aggressive debates are not about convincing the people you are debating with, that is likely to be impossible. Instead it is about convincing third parties who have not yet made up their mind. For that purpose it might be better to take an overly extreme position and to attack your opponents as much as possible.
I think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important.
On the other hand, if you e.g. independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)
The primary study in question is here. I haven't been able to locate online a copy of the study about self-esteem and corrections.
Nobel Laureate Jean-Marie Lehn is a transhumanist.
Sparked by my recent interest in PredictionBook.com, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down. And doesn't seem to have been active recently.
I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?
UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)
Yes.
Do the various versions of the Efficient Market Hypothesis only apply to investment in existing businesses?
The discussions of possible market blind spots in clothing makes me wonder how close the markets are to efficient for new businesses.
I'm curious what people's opinions are of Jeff Hawkins' book 'On Intelligence', and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further evidence for this or something, cheers
With regards to further reading, you can look at Hawkins' most recent (that I'm aware of) paper, "Towards a Mathematical Theory of Cortical Micro-Circuits". It's fairly technical, however, so I hope your math/neuroscience background is strong (I'm not knowledgeable enough to get much out of it).
You can also take a look at Hawkins' company Numenta, particularly the Technology Overview. Hierarchical Temporal Memory is the name of Hawkins' model of the neocortex, which IIRC he believes is responsible for some of the core prediction mechanisms in the human brain.
Edit: I almost forgot, this video of a talk he presented earlier this year may be the best introduction to HTM.
I was examining some of the arguments for the existence of god that separate beings into contingent (exist in some worlds but not all) and necessary (exist in all worlds). And it occurred to me that if the multiverse is indeed true, and its branches are all possible worlds, then we are all necessary beings, along with the multiverse, a part of whose structure we are.
Am I retreating into madness? :D
I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?
Definitely wait. My personal favorite timing is one day for each new (substantial) post.
More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.
In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!
Plus, he gives a lot of information from his personal experience.
Be warned, though: it's mixed with a lot of blame-the-government themes and certainty about future hyperinflation, and the preservation of real estate's value therein, if that kind of thing turns you off.
Edit: Okay, I've edited this comment about eight times now, but I left this out: from a rationality perspective, this essay shows the worst parts of Goodhart's Law: apparently, the old, functional criteria that correctly identified some mortgage applicants are going to be mandated as the standard on all future mortgages. Yikes!
I've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.
If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter recently had a post on the NY Times article.
The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam", a pseudoscience, science fiction (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish, more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist.
All in all, a delightful cornucopia of irrationality.
ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post. There was also a comment from someone registered with Alcor that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received.
Also, check out this hilarious description of Robin Hanson from a commenter there:
I guess that the fatal problem with cryonics is all the freaking nerds interested in it.
The responses are interesting. I think this is the most helpful to my understanding:
I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.
um... why isn't it? There's a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you're blowing money on mere possibility.
We don't normally make bets that depend on the future development of currently unknown technologies. We aren't all investing in cold fusion just because it would be really awesome if it panned out.
Sorry, I know this is a cryonics-friendly site, but somebody's got to say it.
There are a lot of alternatives to fusion energy and since energy production is a widely recognized societal issue, making individual bets on that is not an immediate matter of life and death on a personal level.
I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it.
However, if you give 1% chance for both fusion and cryonics to work, it could still make sense to bet on the latter but not on the first.
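A toy expected-value sketch of why equal probabilities needn't mean equal bets (all dollar figures below are my own illustrative assumptions, not anything from the thread):

```python
# At the same 1% success probability, the two bets can still differ in
# expected value, because the personal payoffs differ enormously.
p_works = 0.01  # the 1% chance granted to both fusion and cryonics

# Hypothetical personal payoffs and costs, in dollar-equivalents:
cryonics_value = 10**7   # assumed personal value of revival, if it works
cryonics_cost = 50_000   # assumed lifetime cost of membership + insurance
fusion_value = 10**4     # assumed personal return on a small speculative stake
fusion_cost = 1_000      # the stake itself

ev_cryonics = p_works * cryonics_value - cryonics_cost  # about +50,000
ev_fusion = p_works * fusion_value - fusion_cost        # about -900
print(ev_cryonics, ev_fusion)
```

Under these (made-up) numbers the cryonics bet is positive and the fusion bet negative, even at identical 1% odds.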
Well, right off the bat, there's a difference between "cryonics is a scam" and "cryonics is a dud investment". I think there's sufficient evidence to establish the presence of good intentions - the more difficult question is whether there's good evidence that resuscitation will become feasible.
That's ok, it's a skepticism friendly site as well.
I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, in the off chance that it is eventually invented and implemented.
There isn't no way to estimate it. We can make reasonable estimations of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.).
Moreover, it is always possible to estimate something's likelihood, and one cannot simply say "oh, this is difficult to estimate accurately, so I'll assign it a low probability." For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A. Obviously, A and ~A can't both have low probabilities.
That's true; uncertainty about A doesn't make A less likely. It does, however, make me less likely to spend money on A, because I'm risk-averse.
Have you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?
Maybe $50. That's around the most I'd be willing to accept losing completely.
Nice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount.
Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?
There's always a way to estimate how likely something is, even if it's not a very accurate prediction. And 'mere' used like that seems kinda like a dark side word, if you'll excuse me.
Cryonics is theoretically possible, in that it isn't inconsistent with science/physics as we know it so far. I can't really delve into this part much, as I don't know anything about cold fusion and thus can't understand the comparison properly, but it sounds as if it might be inconsistent with physics?
Possibly relevant: Is Molecular Nanotechnology Scientific?
Also, the benefits of cryonics working if you invested in it would be greater than those of investing in cold fusion.
And this is just the impression I get, but it sounds like you're being a contrarian contrarian. I think it's your last sentence: it made me think of Lonely Dissent.
The unfair thing is, the more a community (like LW) values critical thinking, the more we feel free to criticize it. You get a much nicer reception criticizing a cryonicist's reasoning than criticizing a religious person's. It's easy to criticize people who tell you they don't mind. The result is that it's those who need constructive criticism the most who get the least. I'll admit I fall into this trap sometimes.
You seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It's just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5.
Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can't be too close to 0 or 1. But not having enough evidence doesn't excuse you from having to estimate the probability of something before accepting or rejecting it.
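As an illustration of how an estimate can start at 0.5 with no evidence and only tighten as evidence accumulates, here's a minimal sketch using Laplace's rule of succession (my choice of method; the comment doesn't specify one):

```python
# Laplace's rule of succession: with a uniform prior over the unknown
# probability, the estimate after some observations is
# (successes + 1) / (trials + 2).
def estimate(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

print(estimate(0, 0))       # 0.5 -- nothing known at all
print(estimate(9, 10))      # ~0.83 -- a little evidence
print(estimate(900, 1000))  # ~0.90 -- more evidence, estimate can drift toward 0 or 1
```

With zero trials the estimate is exactly 0.5, and no finite amount of evidence ever pushes it all the way to 0 or 1, matching the point above.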
I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don't you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.
Example: If you hear someone whom you don't specifically suspect to have a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24 that statement probably should be assigned a probability quite a bit higher than 0.5 even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random you probably should assign a probability very close to 0.
Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.
I wonder what the average probability of truth is for every spoken statement made by the human populace on your average day, for various message lengths. Anybody wanna try some Fermi calculations?
I'm guessing it's rather high, as most statements are trivial observations about sensory data, performative utterances, or first-glance approximations of one's preferences. I would also predict sentence accuracy drops off extremely quickly the more words the sentence has, and especially so the more syllables there are per word in that sentence.
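For what it's worth, here is a toy version of the requested Fermi calculation; every number below is my own rough assumption, purely to show the shape of the estimate:

```python
# Buckets of daily utterances: (assumed share of all statements,
# assumed probability that a statement in that bucket is true).
buckets = {
    "trivial sensory observations": (0.50, 0.95),
    "performatives and preferences": (0.30, 0.90),
    "substantive factual claims": (0.15, 0.70),
    "ad hoc lies and misrememberings": (0.05, 0.10),
}

# Weighted average truth probability across all spoken statements.
avg_truth = sum(share * p for share, p in buckets.values())
print(f"average probability of truth: {avg_truth:.2f}")
```

Under these guessed shares the average comes out around 0.85, i.e. "rather high", though the whole answer is obviously driven by the assumed bucket weights.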
Once you are beyond the most elementary of statements I really don't think so, rather the opposite, at least for unique rather than for repeated statements. Most untrue statements are probably either ad hoc lies ("You look great." "That's a great gift." "I don't have any money with me.") or misremembered information.
In the case of ad hoc lies there is not enough time to invent plausible details, and inventing details without time to think them through increases the risk of being caught; in the case of misremembered information you are less likely to know or remember additional information you could include in the statement than someone who really knows the subject and wouldn't make that error. Of course more information simply means including more things even the best experts on the subject are simply wrong about, as well as more room for misrememberings, but I think the first effect dominates, because there are many subjects the second effect doesn't really apply to, e.g. the content of a work of fiction or the constitution of a state (to an extent even legal matters in general).
Complex untrue statements would be things like rehearsed lies and anecdotes/myths/urban legends.
Consider the so-called conjunction fallacy: if it were maladaptive for evaluating the truth of statements encountered normally, it probably wouldn't exist. So in everyday conversation (or at least the sort of situations that are relevant for the propagation of the memes and/or genes involved), complex statements, at least of those kinds that can be observed to be evaluated "fallaciously", are probably more likely to be true.
(So this is just about the first real post I've made here and I kinda have stage fright posting here, so if it's horribly bad or uninteresting, please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")
1) My analysis of http://lesswrong.com/lw/kn/torture_vs_dust_specks/
Let's say 1 second of torture is -1 000 000 utilions. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilions.
Now, I'm tempted to say a dust speck has no negative utility at all, but I'm not COMPLETELY certain I'm right. Let's say there's a 1/1 000 000 chance I'm wrong*, in which case the dust speck is -1 utilion. That means the dust speck option is -1 * 10^-6 * 3^^^3, which is approximately -3^^^3.
-3^^^3 < -2*10^15, therefore I choose the torture.
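The torture-side arithmetic above can be checked in a few lines (the utilion figures are the comment's own illustrative numbers; 3^^^3 itself is far too large to represent, so the speck side is left as a comment):

```python
# Torture side: -1e6 utilions/second, ~1e5 seconds/day, ~2e4 days in 50 years.
utilons_per_second = -1_000_000
seconds_per_day = 100_000      # rounded from 86,400, as in the comment
days_in_50_years = 20_000      # rounded from ~18,250, as in the comment

torture_total = utilons_per_second * seconds_per_day * days_in_50_years
print(torture_total)  # -2000000000000000, i.e. -2 * 10^15

# Speck side: any nonzero expected disutility per speck (here -1e-6)
# times 3^^^3 recipients is still about -3^^^3, which dwarfs -2 * 10^15.
```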
2) The ant speck problem.
The ant speck problem is like the dust speck problem, except instead of 3^^^3 humans getting specks in their eyes, it's 3^^^3 ordinary ants, and it's a billion humans being tortured for a millennium.
Now, I'm bigoted against ants, and pretty sure I don't value them as much as humans. In fact, I'm 99.9999% certain I don't value ants' suffering at all. The remaining probability space is dominated by the hypothesis that moral value equals 1000^[the number of neurons in the entity's brain] for brains similar to Earth-type animals. Humans have about 10^11 neurons, ants about 10^4. That means an ant is worth about 10^(-10^14) as much as a human, if it's worth anything at all.
Now let's multiply this together... -1 utilion * 10^(-10^14) discount * 1/10^6 that ants are worth anything at all * 1/10^6 that dust specks are bad * 3^^^3... That's about -3^^^3!
And for the other side: -2*10^15 for 50 years. Multiply that by 20, and then by the billion... about -10^25.
-3^^^3 < -10^25, therefore I choose the torture!
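The torture side of this version can be checked the same way (using the -2*10^15 figure from the first analysis; the ant side underflows any float, so it stays symbolic):

```python
# A billion humans tortured for a millennium, built from the 50-year figure.
utilons_per_50_years = -2 * 10**15  # from the dust-speck analysis above
millennia_factor = 20               # 1000 years / 50 years
humans = 10**9                      # a billion humans

torture_total = utilons_per_50_years * millennia_factor * humans
print(torture_total)  # -4 * 10^25, i.e. about -10^25 in order of magnitude
```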
((*I do not actually think this, the numbers are for the sake of argument and have little to do with my actual beliefs at all.))
3) Obvious derived problems: There are variations of the ant problem, can you work out and post what if...
The ants will only be tortured if also all the protons in the earth decays within one second of the choice, the torture however is certain?
Instead of ants, you have bacteria, with behaviour complicated enough to be equivalent to 1/100 of a neuron?
The source you get the info from is unreliable, and there's only a 1/googol chance the specks could actually happen, while the torture, again, is certain?
All of the above?
That's given some heavy utilitarian assumptions. This isn't an argument; it would be more plausible to just postulate the disutility of torture without explanation.
Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.
Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling enough to get people to really comprehend it, care about it, and do what they can about it. (Geniuses viewing it might decide to go into existential risk reduction when they might otherwise have turned to string theory; it could raise awareness so that existential risk reduction is seen more widely as an important and respectable area of research; it could attract donors to organizations like FHI, SIAI, Foresight, and Lifeboat; etc.)
Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.
Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)
This reminds me of something I mentioned as an improvement for LW a while ago, though for other reasons-- the ability to track all changes in karma for one's posts.
I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting people's time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.
In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.
That, and you repeat yourself. A lot.