
Open Thread: July 2010, Part 2

Post author: Alicorn 09 July 2010 06:54AM | 6 points

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


July Part 1


Comment deleted 14 July 2010 11:58:31AM [-]
Comment author: Blueberry 14 July 2010 04:26:44PM 8 points [-]

That is really a beautiful comment.

It's a good point, and one I never would have thought of on my own: people find it painful to think they might have a chance to survive after they've struggled to give up hope.

One way to fight this is to reframe cryonics as similar to CPR: you'll still die eventually, but this is just a way of living a little longer. But people seem to find it emotionally different, perhaps because of the time delay, or the uncertainty.

Comment author: Eliezer_Yudkowsky 14 July 2010 08:46:27PM 5 points [-]

I always figured that was a rather large sector of people's negative reaction to cryonics; I'm amazed to find someone self-aware enough to notice and work through it.

Comment author: John-Henry 10 July 2010 08:53:16PM 13 points [-]

I thought Less Wrong might be interested to see a documentary I made about cognitive bias. It was made as part of a college project and a lot of the resources that the film uses are pulled directly from Overcoming Bias and Less Wrong. The subject of what role film can play in communicating the ideas of Less Wrong is one that I have heard brought up, but not discussed at length. Despite the film's student-quality shortcomings, hopefully this documentary can start a more thorough dialogue that I would love to be a part of.

The link to the video is here: http://www.youtube.com/watch?v=FOYEJF7nmpE

Comment author: [deleted] 14 July 2010 08:00:48AM *  2 points [-]

del

Comment author: John-Henry 16 July 2010 06:16:04PM *  2 points [-]

Pen-and-paper interviews would almost certainly be more accurate. The problem is that images of people writing on paper are especially un-cinematic. The participants were encouraged to take as much time as they needed; many of them took several minutes before responding to some questions. However, the majority of them were concerned with how much time the interview would take up, and their quick responses were self-imposed.

As to whether the evidence is too messy to draw firm conclusions from: I agree that it is. This is an inherent problem with documentaries. Omissions of fact are easily justified. Also, just as in fiction films, manipulating the audience is sought after more than accuracy.

Comment author: RobinZ 10 July 2010 11:57:20PM *  2 points [-]

I just posted a comment over there noting that the last interviewee rediscovered anchoring and adjustment.

Comment author: Bongo 10 July 2010 09:30:05PM *  12 points [-]

Heard on #lesswrong:

<a> BTW, I figured out why Eliezer looks like a cult leader to some people. It's because he has both social authority (he's a leader figure, solicits donations) and epistemological authority (he's the top expert, wrote the sequences which are considered canonical).

<a> If, for example, Wei Dai kicked Eliezer's ass at FAI theory, LW would not appear cultish

<b> This suggests that we should try to make someone else a social authority so that he doesn't have to be.

(I hope posting only a log is ok)

Comment author: Will_Newsome 16 July 2010 12:13:42AM 3 points [-]

Hopefully this provides incentive for people to kick Eliezer's ass at FAI theory. You don't want to look cultish, do you?

Comment author: Vladimir_Nesov 09 July 2010 01:48:30PM 11 points [-]

Geoff Greer published a post on how he got convinced to sign up for cryonics: Insert Frozen Food Joke Here.

Comment deleted 09 July 2010 08:42:43PM *  [-]
Comment author: JoshuaZ 09 July 2010 11:36:58PM 2 points [-]

I know those issues have been discussed already, but how could one react in a five-minute coffee break, when the co-worker responds with the standard stock phrases: "But death gives meaning to life. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die. You know science, don't ya?"

If they think that we'll all eventually die even with cryonics, and they think that death gives meaning to life, then they don't need to worry about cryonics removing meaning, since it just pushes back the time until death. (I wouldn't bother addressing the claim that death gives meaning to life, except to note that it seems to be a much more common meme among people who haven't actually lost loved ones.)

As to the problem of too many people: overpopulation is a massive problem whether or not a few people get cryonically preserved.

As to the problem of just the rich getting the benefits: patiently explain that there's no reason to think that the rich now will be treated substantially differently from the less rich who sign up for cryonics. And if society ever has the technology to easily revive people from cryonic suspension, then the likely standard of living will be so high compared to now that even if the rich have more, it won't matter.

Comment author: Blueberry 09 July 2010 08:50:47PM 2 points [-]

It does not help not being signed up for cryonics oneself.

I talk about it as something I'm thinking about, and ask what they think. That way, it's not you trying to persuade someone, it's just a conversation.

"But death gives meaning to live. And if nobody died, there would be too many people around here. Only the rich ones could get the benefits. And ultimately, whatever end the universe takes, we will all die, you know science, don't ya?"

"Yeah, we'll all die eventually, but this is just a way of curing aging, just like trying to find a cure for heart disease or cancer. All those things are true of any medical treatment, but that doesn't mean we shouldn't save lives."

Comment author: Matt_Simpson 18 July 2010 07:04:25AM *  10 points [-]

Are any LWers familiar with adversarial publishing? The basic idea is that two researchers who disagree on some empirically testable proposition come together with an arbiter to design an experiment to resolve their disagreement.

Here's a summary of the process from an article (pdf) I recently read (where Daniel Kahneman was one of the adversaries).

  1. When tempted to write a critique or to run an experimental refutation of a recent publication, consider the possibility of proposing joint research under an agreed protocol. We call the scholars engaged in such an effort participants. If theoretical differences are deep or if there are large differences in experimental routines between the laboratories, consider the possibility of asking a trusted colleague to coordinate the effort, referee disagreements, and collect the data. We call that person an arbiter.
  2. Agree on the details of an initial study, designed to subject the opposing claims to an informative empirical test. The participants should seek to identify results that would change their mind, at least to some extent, and should explicitly anticipate their interpretations of outcomes that would be inconsistent with their theoretical expectations. These predictions should be recorded by the arbiter to prevent future disagreements about remembered interpretations.
  3. If there are disagreements about unpublished data, a replication that is agreed to by both participants should be included in the initial study.
  4. Accept in advance that the initial study will be inconclusive. Allow each side to propose an additional experiment to exploit the fount of hindsight wisdom that commonly becomes available when disliked results are obtained. Additional studies should be planned jointly, with the arbiter resolving disagreements as they occur.
  5. Agree in advance to produce an article with all participants as authors. The arbiter can take responsibility for several parts of the article: an introduction to the debate, the report of experimental results, and a statement of agreed-upon conclusions. If significant disagreements remain, the participants should write individual discussions. The length of these discussions should be determined in advance and monitored by the arbiter. An author who has more to say than the arbiter allows should indicate this fact in a footnote and provide readers with a way to obtain the added material.
  6. The data should be under the control of the arbiter, who should be free to publish with only one of the original participants if the other refuses to cooperate. Naturally, the circumstances of such an event should be part of the report.
  7. All experimentation and writing should be done quickly, within deadlines agreed to in advance. Delay is likely to breed discord.
  8. The arbiter should have the casting vote in selecting a venue for publication, and editors should be informed that requests for major revisions are likely to create impossible problems for the participants in the exercise.

This seems like a great way to resolve academic disputes. Philip Tetlock appears to be an advocate. What do you think?

Comment author: Vladimir_Nesov 22 July 2010 01:14:45PM *  8 points [-]

Paul Graham on guarding your creative productivity:

I'd noticed startups got way less done when they started raising money, but it was not till we ourselves raised money that I understood why. The problem is not the actual time it takes to meet with investors. The problem is that once you start raising money, raising money becomes the top idea in your mind. That becomes what you think about when you take a shower in the morning. And that means other questions aren't. [...]

You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want to think about.

Comment author: MBlume 17 July 2010 12:50:50AM 8 points [-]

So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.

Now, being a professional exorcist does not give a high prior for rationality.

But still, even given that background, that's a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.

I wonder if this uncriticality has anything to do with, well, not expecting to be criticized. If most of the hacks that humans use in place of rationality are socially motivated, we can safely turn them off when speaking to a child who doesn't know any better.

I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?

Comment author: LucasSloan 17 July 2010 02:52:44AM *  4 points [-]

I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?

Probably not very, because we can't actually imagine what that hypothetical person would say to us. It'd probably end up used as a way to affirm your positions by only testing strong points.

Comment author: jimmy 17 July 2010 05:35:03AM 3 points [-]

I wonder how much benefit we'd get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?

I do it sometimes, and I think it helps.

Comment author: mindviews 11 July 2010 09:07:23AM 8 points [-]

An akrasia-fighting tool via Hacker News via Scientific American, based on this paper. Read the Scientific American article for the short version. My super-short summary: in self-talk, asking "will I?" rather than telling yourself "I will" can be more effective at reaching success in goal-directed behavior. Looks like a useful tool to me.

Comment author: Vladimir_Golovin 11 July 2010 10:30:14AM *  2 points [-]

This implies that the mantra "Will I become a syndicated cartoonist?" could be more effective than the original affirmative version, "I will become a syndicated cartoonist".

Comment author: Risto_Saarelma 16 July 2010 06:55:30PM *  7 points [-]

There's a course "Street Fighting Mathematics" on MIT OCW, with an associated free Creative Commons textbook (PDF). It's about estimation tricks and heuristics that can be used when working with math problems. Despite the pop-sounding title, it appears to be written for people who are actually expected to be doing nontrivial math.

Might be relevant to the Simple Math of Everything stuff.

Comment author: Eliezer_Yudkowsky 14 July 2010 08:43:05PM 7 points [-]

From a recent newspaper story:

The odds that Joan Ginther would hit four Texas Lottery jackpots for a combined $21 million are astronomical. Mathematicians say the chances are as slim as 1 in 18 septillion — that's 18 and 24 zeros.

I haven't checked this calculation at all, but I'm confident that it's wrong, for the simple reason that it is far more likely that some "mathematician" gave them the wrong numbers than that any compactly describable event with odds of 1 in 18 septillion against it has actually been reported on, in writing, in the history of intelligent life on my Everett branch of Earth. Discuss?

Comment author: whpearson 14 July 2010 08:53:46PM *  7 points [-]

From the article (there is a near-invisible "more text" button):

Calculating the actual odds of Ginther hitting four multimillion-dollar lottery jackpots is tricky. If Ginther's winning tickets were the only four she ever bought, the odds would be one in 18 septillion, according to Sandy Norman and Eduardo Duenez, math professors at the University of Texas at San Antonio.

And she was the only person ever to have bought 4 tickets (birthday paradoxes and all)...

I did see an analysis of this somewhere; I'll try to dig it up. Here it is. There is Hacker News commentary here.

I find this, from the original msnbc article, depressing

After all, the only way to win is to keep playing. Ginther is smart enough to know that's how you beat the odds: she earned her doctorate from Stanford University in 1976, then spent a decade on faculty at several colleges in California.

Teaching math.

Comment author: nhamann 15 July 2010 07:34:28PM 2 points [-]

I find this, from the original msnbc article, depressing

Is it depressing because someone with a Ph.D. in math is playing the lottery, or depressing because she must've figured out something we don't know, given that she's won four times?

Comment author: Blueberry 14 July 2010 09:40:10PM 6 points [-]

It seems right to me. If the chance of one ticket winning is one in 10^6, the chance of four specified tickets winning four drawings is one in 10^24.

Of course, the chances of "Person X winning the lottery week 1 AND Person Y winning the lottery week 2 AND Person Z winning the lottery week 3 AND Person W winning the lottery week 4" are also 1 in 10^24, and this happens every four weeks.
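
A quick arithmetic check of that multiplication (a minimal sketch; the 1-in-10^6 per-ticket odds are the illustrative figure above, not the actual lottery's):

    from fractions import Fraction

    # Illustrative odds that a single ticket wins: 1 in 10^6.
    p_single = Fraction(1, 10**6)

    # Four specified tickets winning four drawings:
    # independent probabilities multiply.
    p_four = p_single ** 4
    print(p_four)  # 1/10^24, i.e. 1 in a septillion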

Comment author: Tyrrell_McAllister 14 July 2010 08:50:39PM 3 points [-]

The most eyebrow-raising part of that article:

After all, the only way to win is to keep playing. Ginther is smart enough to know that's how you beat the odds: she earned her doctorate from Stanford University in 1976, then spent a decade on faculty at several colleges in California.

Teaching math.

Comment author: mchouza 15 July 2010 05:43:26PM 2 points [-]

It's also far more likely that she cheated. Or that there is a conspiracy in the Lottery to make her win four times.

Comment author: Yoreth 11 July 2010 04:25:06AM 7 points [-]

Is there any philosophy worth reading?

As far as I can tell, a great deal of "philosophy" (basically the intellectuals' wastebasket taxon) consists of wordplay, apologetics, or outright nonsense. Consequently, for any given philosophical work, my prior strongly favors not reading it because the expected benefit won't outweigh the cost. It takes a great deal of evidence to tip the balance.

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

However, at the same time I'm concerned that this leads me to read things that only reinforce the beliefs I already have. And there's little point in seeking information if it doesn't change your beliefs.

It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.

Comment author: Tyrrell_McAllister 11 July 2010 11:37:18PM *  8 points [-]

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

You might find it more helpful to come at the matter from a topic-centric direction, instead of an author-centric direction. Are there topics that interest you, but which seem to be discussed mostly by philosophers? If so, which community of philosophers looks like it is exploring (or has explored) the most productive avenues for understanding that topic?

Comment author: orthonormal 11 July 2010 11:19:26PM 7 points [-]

Remember that philosophers, like everyone else, lived before the idea of motivated cognition was fully developed; it was commonplace to have theories of epistemology which didn't lead you to be suspicious enough of your own conclusions. You may be holding them to too high a standard by pointing to some of their conclusions, when some of their intermediate ideas and methods are still of interest and value today.

However, you should be selective of who you read. Unless you're an academic philosopher, for instance, reading a modern synopsis of Kantian thought is vastly preferable to trying to read Kant yourself. For similar reasons, I've steered clear of Hegel's original texts.

Unfortunately for the present purpose, I myself went the long way (I went to a college with a strong Great Books core in several subjects), so I don't have a good digest to recommend. Anyone else have one?

Comment author: mindviews 11 July 2010 09:52:23AM 3 points [-]

Is there any philosophy worth reading?

Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you're asking the question in this forum I am assuming you're looking for something of use/interest to a rationalist.

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile.

I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in.

PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.

Comment author: Vladimir_M 11 July 2010 07:13:00AM *  9 points [-]

Yoreth:

For example: I've heard vague rumors that GWF Hegel concludes that the Prussian State (under which, coincidentally, he lived) was the best form of human existence. I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

That's an extremely bad way to draw conclusions. If you were living 300 years ago, you could have similarly heard that some English dude named Isaac Newton is spending enormous amounts of time scribbling obsessive speculations about Biblical apocalypse and other occult subjects -- and concluded that even if he had some valid insights about physics, it wouldn't be worth your time to go looking for them.

Comment author: Emile 11 July 2010 12:55:18PM *  3 points [-]

The value of Newton's theories themselves can quite easily be checked, independently of the quality of his epistemology.

For a philosopher like Hegel, it's much harder to dissociate the different bits of what he wrote, and if one part looks rotten, there's no obvious place to cut.

(What's more, Newton's obsession with alchemy would discourage me from reading whatever Newton had to say about science in general)

Comment author: JoshuaZ 11 July 2010 03:56:18PM *  2 points [-]

It's a complicated question what purpose philosophy serves, but I wouldn't be posting here if I thought it served none. So my question is: What philosophical works and authors have you found especially valuable, for whatever reason? Perhaps the recommendations of such esteemed individuals as yourselves will carry enough evidentiary weight that I'll actually read the darned things.

Lakatos, Quine and Kuhn are all worth reading. Recommended works from each follow:

Lakatos: " Proofs and Refutations" Quine: "Two Dogmas of Empiricism" Kuhn: "The Copernican Revolution" and "The Structure of Scientific Revolution"

All of these have things which are wrong, but they make arguments that need to be grappled with and understood (The Copernican Revolution is more of a history book than a philosophy book, but it presents Kuhn's approach to the history and philosophy of science in great detail). Kuhn is a particularly interesting case: I think that his general thesis about how science operates and what science is is wrong, but he makes a strong enough case that I find weaker versions of his claims highly plausible. Kuhn is also just an excellent writer, full of interesting factual tidbits.

I've also heard that Descartes "proves" that God exists. Now, whether or not Hegel or Descartes may have had any valid insights, this is enough to tell me that it's not worth my time to go looking for them.

This seems, in general, like not a great attitude. The Descartes case is especially relevant in that Descartes did a lot of stuff, not just philosophy. And some of his philosophy is worth understanding simply because later authors react to him and discuss things in his context. And although he's often wrong, he's often wrong in a very precise fashion: his dualism is much more well-defined than that of people before him. Hegel, however, is a complete muddle. I'd label a lot of Hegel as not even wrong. ETA: And if I'm going to be bashing Hegel a bit, what kind of arrogant individual does it take to write a book entitled "The Encyclopedia of the Philosophical Sciences" that is just one's magnum opus about one's own philosophical views and doesn't discuss any others?

Comment author: Emile 11 July 2010 12:48:20PM 2 points [-]

I've enjoyed Nietzsche, he's an entertaining and thought-provoking writer. He offers some interesting perspectives on morality, history, etc.

Comment author: bogus 26 July 2010 03:37:58PM 6 points [-]

Daniel Dennett and Linda LaScola on Preachers who are not believers:

There are systemic features of contemporary Christianity that create an almost invisible class of non-believing clergy, ensnared in their ministries by a web of obligations, constraints, comforts, and community. ... The authors anticipate that the discussion generated on the Web (at On Faith, the Newsweek/Washington Post website on religion, link) and on other websites will facilitate a larger study that will enable the insights of this pilot study to be clarified, modified, and expanded.

Comment author: Rain 14 July 2010 10:53:01PM *  6 points [-]

Day-to-day question:

I live in a ground floor apartment with a sunken entryway. Behind my fairly large apartment building is a small wooded area including a pond and a park. During the spring and summer, oftentimes (~1 per 2 weeks) a frog will hop down the entryway at night and hop around on the dusty concrete until dying of dehydration. I occasionally notice them in the morning as I'm leaving for work, and have taken various actions depending on my feelings at the time and the circumstances of the moment.

  1. Action: Capture the frog and put it in the woods out back. Cost: ~10 seconds to capture, ~2 minutes to put into the woods, getting slimy frog on my hands and dew on my shoes. Benefit: frog potentially survives.
  2. Action: Capture the frog and put it in the dew-covered grass out front. Cost: ~10 seconds to capture, ~20 seconds to put into the grass, getting slimy frog on my hands. Benefit: no frog corpses in the stairwell after I get home from work, and it has a possibility of surviving.
  3. Action: Either of the above, but also taking a glass of warm water and pouring it over the frog to clean off the dust and cobwebs from hopping around the stairwell. Cost: ~1 minute to get a glass of water, consumption of resources to ensure it's warm enough not to cause harm, ~10 seconds of cleaning the frog. Benefit: makes frog look less deathly, potentially increases chances at survival.
  4. Action: Leave the frog in the stairwell. Cost: slight emotional guilt at not helping the frog, slight advance of the current human-caused mass extinction event. Benefit: no action required.
  5. Action: As above, but once the frog is dead, position it in the stairwell in such a way as to be aesthetically pleasing, as small ceramic animals sometimes are. Cost: touching a dead frog, being seen as obviously creepy or weird. Benefit: cute little squatting frog in the shade under the stairwell every morning.

What would you do, why, and how long would you keep doing it?

Comment author: NancyLebovitz 21 July 2010 08:40:24AM *  3 points [-]

How often do you find frogs in the stairwell? Could it make sense to carry something (a plastic bag?) to pick up the frog with so that you don't get slime on your hands?

If it were me, I think I'd go with plastic bag or other hand cover, possibly have room temperature water with me (probably good enough for frogs, and I'm willing to drink the stuff), and put the frog on the lawn unless I'm in the mood for a bit of a walk and seeing the woods.

I have no doubt that I would habitually wonder whether there are weird events in people's lives which are the result of interventions by incomprehensibly powerful beings.

Comment author: Nisan 17 July 2010 05:53:00PM 2 points [-]

2: I would put the frog in the grass. Warm fuzzies are a great way to start the day, and it only costs 30 seconds.

If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help.

Also, wash your hands before eating.

Comment author: jimrandomh 17 July 2010 06:14:09PM 4 points [-]

If you're truly concerned about the well-being of frogs, you might want to do more. You'd also want to ask yourself what you're doing to help frogs everywhere. The fact that the frog ended up on your doorstep doesn't make you extra responsible for the frog; it merely provides you with an opportunity to help.

The goal of helping frogs is to gain fuzzies, not utilons. Thinking about all the frogs that you don't have the opportunity to help would mean losing those fuzzies.

Comment author: Rain 19 July 2010 02:48:47PM *  4 points [-]

There's no utility in saving (animal) life? Or is that only for this particular instance?

Edit 20-Jun-2014: Frogs saved since my original post: 21.5. Frogs I've failed to save: 23.5.

Comment author: Eliezer_Yudkowsky 21 July 2010 09:50:27AM 2 points [-]

I don't consider frogs to be objects of moral worth.

Comment author: multifoliaterose 21 July 2010 10:19:39AM 8 points [-]

Why not?

Comment author: VNKKET 21 July 2010 09:28:45PM *  12 points [-]

Are there any possible facts that would make you consider frogs objects of moral worth if you found out they were true?

(Edited for clarity.)

Comment author: XiXiDu 22 July 2010 10:35:46AM *  11 points [-]

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.

-- David Pearce via Facebook

Comment author: CarlShulman 21 July 2010 06:34:51PM 10 points [-]

I'm surprised. Do you mean you wouldn't trade off a dust speck in your eye (in some post-singularity future where x-risk is settled one way or another) to avert the torture of a billion frogs, or of some noticeable portion of all frogs? If we plotted your attitudes to progressively more intelligent entities, where's the discontinuity or discontinuities?

Comment author: Bongo 24 July 2010 11:50:35AM 2 points [-]

Hopefully he still thinks there's a small probability of frogs being able to experience pain, so that the expected suffering of frog torture would be hugely greater than a dust speck.

Comment author: RichardKennaway 21 July 2010 01:13:56PM 2 points [-]

Would you save a stranded frog, though?

Comment author: Tyrrell_McAllister 09 July 2010 04:35:10PM 6 points [-]

I have a question about prediction markets. I expect that it has a standard answer.

It seems like the existence of casinos presents a kind of problem for prediction markets. Casinos are a sort of prediction market where people go to try to cash out on their ability to predict which card will be drawn, or where the ball will land on a roulette wheel. They are enticed to bet when the casino sets the odds at certain levels. But casinos reliably make money, so people are reliably wrong when they try to make these predictions.

Casinos don't invalidate prediction markets, but casinos do seem to show that prediction markets will be predictably inefficient in some way. How is this fact dealt with in futarchy proposals?

Comment author: Vladimir_Nesov 09 July 2010 06:43:46PM 5 points [-]

The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.

Comment author: Tyrrell_McAllister 09 July 2010 08:59:40PM *  3 points [-]

The money brought in by stupid gamblers creates additional incentive for smart players to clear it out with correct predictions. The crazier the prediction market, the more reason for rational players to make it rational.

Right. Maybe I shouldn't have said that a prediction market would be "predictably inefficient". I can see that rational players can swoop in and profit from irrational players.

But that's not what I was trying to get at with "predictably inefficient". What I meant was this:

Suppose that you know next to nothing about the construction of roulette wheels. You have no "expert knowledge" about whether a particular roulette ball will land in a particular spot. However, for some reason, you want to make an accurate prediction. So you decide to treat the casino (or, better, all casinos taken together) as a prediction market, and to use the odds at which people buy roulette bets to determine your prediction about whether the ball will land in that spot.

Won't you be consistently wrong if you try that strategy? If so, how is this consistent wrongness accounted for in futarchy theory?

I understand that, in a casino, players are making bets with the house, not with each other. But no casino has a monopoly on roulette. Players can go to the casino that they think is offering the best odds. Wouldn't this make the gambling market enough like a prediction market for the issue I raise to be a problem?

I may just have a very basic misunderstanding of how futarchy would work. I figured that it worked like this: The market settles on a certain probability that something will happen by settling on an equilibrium for the odds at which people are willing to buy bets. Then policy makers look at the market's settled probability and craft their policy accordingly.

Comment author: orthonormal 09 July 2010 10:28:39PM 3 points [-]

In the stock market, as in a prediction market, the smart money is what actually sets the price, taking others' irrationalities as their profit margin. There's no such mechanism in casinos, since the "smart money" doesn't gamble in casinos for profit (excepting card-counting, cheating, and poker tournaments hosted by casinos, etc).

Comment author: Unnamed 09 July 2010 11:04:10PM 2 points [-]

Roulette odds are actually very close to representing probabilities, although you'd consistently overestimate the probability if you just translated directly. Each $1 bet on a specific number pays out a $35 profit, suggesting p=1/36, but in reality p=1/38. Relative odds get you even closer to accurate probabilities; for instance, 7 & 32 have the same payout, from which we could conclude (correctly, in this case) that they are equally likely. With a little reasoning - 38 possible outcomes with identical payouts - you can find the correct probability of 1/38.

This table shows that every possible roulette bet except for one has the same EV, which means that you'd only be wrong about relative probabilities if you were considering that one particular bet. Other casino games have more variability in EV, but you'd still usually get pretty close to correct probabilities. The biggest errors would probably be for low probability-high payout games like lotteries or raffles.
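
For concreteness, here's the expected-value arithmetic behind those figures (a minimal sketch of the standard American double-zero wheel; the linked table itself is not assumed):

    from fractions import Fraction

    p_win = Fraction(1, 38)   # 38 pockets on an American double-zero wheel
    profit_if_win = 35        # a $1 straight-up bet pays $35 profit
    loss_if_lose = -1         # otherwise the $1 stake is lost

    ev = p_win * profit_if_win + (1 - p_win) * loss_if_lose
    print(ev, float(ev))      # -1/19 ≈ -0.0526: about a 5.26% house edge

    # Reading probability naively off the payout overestimates it:
    print(Fraction(1, 36), "implied by payout vs.", Fraction(1, 38), "actual")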

Comment author: Unnamed 09 July 2010 05:45:12PM 4 points [-]

One way to think of it is that decisions to gamble are based on both information and an error term which reflects things like irrationality or just the fact that people enjoy gambling. Prediction markets are designed to get rid of the error and have prices reflect the information: errors cancel out as people who err in opposite directions bet on opposite sides, and errors in one direction create +EV opportunities which attract savvy, informed gamblers to bet on the other side. But casinos are designed to drive gambling based solely on the error term - people are betting on events that are inherently unpredictable (so they have little or no useful information) against the house at fixed prices, not against each other (so the errors don't cancel out), and the prices are set so that bets are -EV for everyone regardless of how many errors other people make (so there aren't incentives for savvy informed people to come wager).

Sports gambling is structured more similarly to prediction markets - people can bet on both sides, and it's possible for a smart gambler to have relevant information and to profit from it, if the lines aren't set properly - and sports betting lines tend to be pretty accurate.
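
A toy simulation of that error-cancellation story (my own illustrative model, not anything from futarchy proposals): each bettor believes the true probability plus noise, and a price that aggregates many two-sided bets lands near the truth.

    import random
    import statistics

    random.seed(0)
    true_p = 0.6  # true probability of the event

    # Each bettor's belief is the truth plus an idiosyncratic error term.
    estimates = [min(max(random.gauss(true_p, 0.15), 0.01), 0.99)
                 for _ in range(10000)]

    # In a two-sided market the clearing price separates "yes" buyers
    # (belief above the price) from "no" buyers (belief below it); the
    # median belief is a crude stand-in for that price. Errors cancel.
    print(round(statistics.median(estimates), 3))  # close to 0.6

    # A casino instead fixes its price unilaterally at a level that is
    # -EV for every bettor, so no error-cancellation pulls it toward truth.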

Comment author: Strange7 12 July 2010 12:42:22PM 6 points [-]

I have also heard of at least one professional gambler who makes his living by identifying and confronting other people's superstitious gambling strategies. For example, if someone claims that 30 hasn't come up in a while, and thus is 'due,' he would make a separate bet with them (to which the house is not a party), claiming simply that they're wrong.

Often, this is an even-money bet which he has upwards of a 97% chance of winning; when he loses, the relatively small payoff to the other party is supplemented by both the warm fuzzies associated with rampant confirmation bias, and the status kick from defeating a professional gambler in single combat.

Comment author: orthonormal 09 July 2010 10:18:41PM 2 points [-]

The most obvious thing: customers are only allowed to take one side of a bet, whose terms are dictated by the house.

If you had a general-topic prediction market with one agent who chose the odds for everything, and only allowed people to bet in one chosen direction on each topic, that agent (if they were at all clever) could make a lot of money, but the odds wouldn't be any "smarter" than that agent (and in fact would be dumber so as to make a profit margin).

Comment author: Dagon 09 July 2010 09:03:10PM 2 points [-]

Casinos have an assymetry: creation of new casinos is heavily regulated, so there's no way for people with good information to bet on their beliefs, and no mechanism for the true odds to be reached as the market price for a wager.

Comment author: orthonormal 09 July 2010 10:14:33PM 2 points [-]

Normally I wouldn't comment on a typo, but I can't read "assymetry" without chuckling.

Comment author: orthonormal 09 July 2010 02:32:55PM 15 points [-]

It seems to me that "emergence" has a useful meaning once we recognize the Mind Projection Fallacy:

We say that a system X has emergent behavior if we have heuristics for both a low-level description and a high-level description, but we don't know how to connect one to the other. (Like "confusing", it exists in the map but not the territory.)

This matches the usage: the ideal gas laws aren't "emergent" since we know how to derive them (at a physics level of rigor) from lower-level models; however, intelligence is still "emergent" for us since we're too dumb to find the lower-level patterns in the brain which give rise to patterns like thoughts and awareness, which we have high-level heuristics for.

Thoughts? (If someone's said this before, I apologize for not remembering it.)

Comment deleted 09 July 2010 02:54:35PM *  [-]
Comment deleted 09 July 2010 11:55:25PM *  [-]
Comment author: orthonormal 10 July 2010 12:17:18AM *  5 points [-]

I dunno, I kind of like the idea that as science advances, particular phenomena stop being emergent. I'd be very glad if "emergent" changed from a connotation of semantic stop-sign to a connotation of unsolved problem.

Comment author: Unnamed 10 July 2010 05:50:18PM 4 points [-]

It's worth checking on the Stanford Encyclopedia of Philosophy when this kind of issue comes up. It looks like this view - emergent=hard to predict from low-level model - is pretty mainstream.

The first paragraph of the article on emergence says that it's a controversial term with various related uses, generally meaning that some phenomenon arises from lower-level processes but is somehow not reducible to them. At the start of section 2 ("Epistemological Emergence"), the article says that the most popular approach is to "characterize the concept of emergence strictly in terms of limits on human knowledge of complex systems." It then gives a few different variations on this type of view, like that the higher-level behavior could not be predicted "practically speaking; or for any finite knower; or for even an ideal knower."

There's more there, some of which seems sensible and some of which I don't understand.

Comment author: JoshuaZ 09 July 2010 02:38:59PM 2 points [-]

The only problem with that is that when people talk about emergent behavior, they are more often than not talking about "emergence" as a property of the territory, not a property of the map. So for example, someone says that "AI will require emergent behavior" - that's a claim about the territory. Your definition of emergence seems like a reasonable and potentially useful one, but one would need to be careful that the common connotations don't cause confusion.

Comment author: WrongBot 13 July 2010 06:17:38PM *  14 points [-]
Comment author: Blueberry 13 July 2010 07:59:29PM 2 points [-]

That is brilliant.

Comment author: [deleted] 29 July 2010 06:03:09PM 5 points [-]

Reading Michael Vassar's comments on WrongBot's article (http://lesswrong.com/lw/2i6/forager_anthropology/2c7s?c=1&context=1#2c7s) made me feel that the current technique for learning how to write a LW post isn't very efficient (read lots of LW, write a post, wait for lots of comments, try to figure out how their issues could be resolved, write another post, etc. - it uses up lots of the writer's time and lots of the commenters' time).

I was wondering whether there might be a more focused way of doing this. I.e., a short-term workshop: a few writers who have been promoted offer to give feedback to a few writers who are struggling to develop the necessary rigour, by providing a faster feedback cycle, the ability to redraft an article rather than having to start totally afresh, and just general advice.

Some people may not feel that this is very beneficial - there's no need for writing for LW to be made easier (in fact, possibly the opposite) - but first off, I'm not talking about making writing for LW easier; I'm talking about making more of the writing higher quality. And secondly, I certainly learn a lot better given a chance to interact on that extra level. I think learning to write at an LW level is an excellent way of achieving LW's aim of helping people to think at that level.

I'm a long-time lurker, but I haven't really commented before because I find it hard to jump to that next level of understanding that enables me to communicate anything of value. I wonder if there are others who feel the same or a similar way.

Good idea? Bad idea?

Comment author: [deleted] 29 July 2010 07:52:42PM *  3 points [-]

We could use a more structured system, perhaps. At this point, there's nothing to stop you from writing a post before you're ready, except your own modesty. Raise the threshold, and nobody will have to yell at people for writing posts that don't quite work.

Possibilities:

  1. Significantly raise the minimum karma level.

  2. An editorial system: a more "advanced" member has to read your post before it becomes top-level.

  3. A wiki page about instructions for posting. It should include: a description of appropriate subject matter, formatting instructions, common errors in reasoning or etiquette.

  4. A social norm that encourages editing (including totally reworking an essay.) The convention for blog posts on the internet in general mandates against editing -- a post is supposed to be an honest record of one's thoughts at the time. But LessWrong is different, and we're supposed to be updating as we learn from each other. We could make "Please edit this" more explicit.

A related thought on karma -- I have the suspicion that we upvote more than we downvote. It would be possible to adjust the site to keep track of each person's upvote/downvote stats. That is, some people are generous with karma, and some people give more negative feedback. We could calibrate ourselves better if we had a running tally.
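
A minimal sketch of that running tally (the vote-log format here is hypothetical, not LW's actual data model):

    from collections import defaultdict

    # Hypothetical vote log: (voter, direction) pairs, direction +1 or -1.
    votes = [("alice", +1), ("alice", +1), ("bob", -1),
             ("alice", -1), ("bob", -1)]

    tally = defaultdict(lambda: [0, 0])  # voter -> [upvotes, downvotes]
    for voter, direction in votes:
        tally[voter][0 if direction > 0 else 1] += 1

    for voter, (up, down) in tally.items():
        print(voter, up, "up /", down, "down,",
              round(100 * up / (up + down)), "% positive")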

Comment author: xamdam 30 July 2010 06:21:24PM *  3 points [-]

Significantly raise the minimum karma level.

Another technical solution. Not trivial to implement, but also contains significant side benefits.

  • Find some subset of sequences and other highly ranked posts that are "super-core" and has large consensus not just in karma, but also in agreement by high-karma members (say top ten).
  • Create a multiple-choice test and implement it online; external technologies for this already exist, I am sure.

Some karma + passing test gets top posting privileges.

I have to confess I abused my newly acquired posting privileges and probably diluted the site's value with a couple of posts. Thank goodness they were rather short :). I took the hint though and took to participating in the comment discussion and reading sequences until I am ready to contribute at a higher level.

Comment author: jimrandomh 29 July 2010 09:03:23PM *  4 points [-]

Kuro5hin had an editorial system, where all posts started out in a special section where they were separate and only visible to logged in users. Commenters would label their comments as either "topical" or "editorial", and all editorial comments would be deleted when the post left editing; and votes cast during editing would determine where the post went (front page, less prominent section, or deleted).

Unfortunately, most of the busy smart people only looked at the posts after editing, while the trolls and people with too much free time managed the edit queue, eventually destroying the quality of the site and driving the good users away. It might be possible to salvage that model somehow, though.

We upvote much more than we downvote - just look at the mean comment and post scores. Also, the number of downvotes a user can make is capped at their karma.

Comment author: WrongBot 29 July 2010 07:07:25PM 3 points [-]

Is there any consensus about the "right" way to write a LW post? I see a lot of diversity in style, topic, and level of rigor in highly-voted posts. I certainly have no good way to tell if I'm doing it right; Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was. (Voting is not solely determined by post quality; this is a big part of the problem.)

I would certainly love to have a better way to get feedback than the current mechanisms; it's indisputable that my writing could be better. Being able to workshop posts would be great, but I think it would be hard to find the right people to do the workshopping; off the top of my head I can really only think of a handful of posters I'd want to have doing that, and I get the impression that they're all too busy. Maybe not, though.

(I think this is a great idea.)

Comment author: Larks 30 July 2010 10:35:34PM 2 points [-]

Michael Vassar doesn't think so, but he's never had a post voted as highly as my first one was.

I didn't think there was anything particularly wrong with your post, but newer posts get a much higher level of karma than old ones, which must be taken into account. Some of the core sequence posts have only 2 karma, for example.

Comment author: NancyLebovitz 25 July 2010 04:16:48PM *  5 points [-]

Rationality applied to swimming

The author was a lousy swimmer for a long time, but got respect because he put in so much effort. Eventually he became a swim coach, and he quickly noticed that the bad swimmers looked the way he did, while the good swimmers looked very different. So he started teaching the bad swimmers to look like the good swimmers, and began becoming a better swimmer himself.

Later, he got into the physics of good swimming. For example, it's more important to minimize drag than to put out more effort.

I'm posting this partly because it's always a pleasure to see rationality, partly because the most recent chapter of Methods of Rationality reminded me of it, and mostly because it's a fine example of clue acquisition.

Comment author: JoshuaZ 23 July 2010 03:33:27AM 5 points [-]

Two things of interest to Less Wrong:

First, there's an article about intelligence and religiosity. I don't have access to the papers in question right now, but the upshot is apparently that the correlation between intelligence (as measured by IQ and other tests) and irreligiosity can be explained with minimal emphasis on intelligence itself, and more on the ability to process information and to estimate one's own knowledge base. They found, for example, that people who were overconfident about their knowledge level were much more likely to be religious. There may still be correlation vs. causation issues, but tentatively it looks like having fewer cognitive biases and better default rationality actually makes one less religious.

The second matter of interest to LW: Today's featured article on the English Wikipedia is the article on confirmation bias.

Comment author: Will_Newsome 20 July 2010 11:15:09PM *  5 points [-]

Is there a bias, maybe called the 'compensation bias', that causes one to think that any person with many obvious positive traits or circumstances (really attractive, rich, intelligent, seemingly happy, et cetera) must have at least one huge compensating flaw or a tragic history or something? I looked through Wiki's list of cognitive biases and didn't see it, but I thought I'd heard of something like this. Maybe it's not a real bias?

If not, I'd be surprised. Whenever I talk to my non-rationalist friends about how amazing persons X Y or Z are, they invariably (out of 5 or so occasions when I brought it up) replied with something along the lines of 'Well I bet he/she is secretly horribly depressed / a horrible person / full of ennui / not well-liked by friends and family". This is kind of the opposite of the halo effect. It could be that this bias only occurs when someone is asked to evaluate the overall goodness of someone who they themselves have not gotten the chance to respect or see as high status.

Anyway, I know Eliezer had a post called 'competent elites' or summat along these lines, but I'm not sure if this effect is a previously researched bias I'm half-remembering or if it's just a natural consequence of some other biases (e.g. just world bias).

Added: Alternative hypothesis that is more consistent with the halo effect and physical attractiveness stereotype data: my friends are themselves exceptionally physically attractive and competent but have compensatory personal flaws or depression or whatever, and are thus generalizing from one or two examples when assuming that others that share similar traits as themselves would also have such problems. I think this is the more likely of my two current hypotheses, as my friends are exceptionally awesome as well as exceptionally angsty. Aspiring rationalists! Empiricists and theorists needed! Do you have data or alternative hypotheses?

Comment author: hegemonicon 26 July 2010 02:33:07PM 4 points [-]

It may have to do with the manner in which you bring it up - it's not hard to see how saying something like "X is amazing" could be interpreted as "X is amazing... and you're not" (after all, how often do you tell your friends how amazing they are?), in which case the bias is some combination of status jockeying, cognitive dissonance and ego protection.

Comment author: Will_Newsome 27 July 2010 07:41:07AM 3 points [-]

Wow, that seems like a very likely hypothesis that I completely missed. Is there some piece of knowledge you came in with, or heuristic you used, that I could have used to think up your hypothesis?

Comment author: hegemonicon 27 July 2010 04:51:20PM 2 points [-]

I've spent some time thinking about this, and the best answer I can give is that I spend enough time thinking about the origins and motivations of my own behavior that, if it's something I might conceivably do right now, or (more importantly) at some point in the past, I can offer up a possible motivation behind it.

Apparently this is becoming more and more subconscious, as it took quite a bit of thinking before I realized that that's what I had done.

Comment author: NancyLebovitz 21 July 2010 09:24:46AM 2 points [-]

Could it be a matter of being excessively influenced by fiction? It's more convenient for stories if a character has some flaws and suffering.

Comment author: jimmy 15 July 2010 10:21:30PM 5 points [-]

http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/06/28/risky-business-james-bagian-nasa-astronaut-turned-patient-safety-expert-on-being-wrong.aspx

This article is pretty cool, since it describes someone running quality control on a hospital from an engineering perspective. He seems to have a good understanding of how stuff works, and it reads like something one might see on lesswrong.

Comment author: RichardKennaway 12 July 2010 11:02:04AM *  5 points [-]

The selective attention test (YouTube video link) is quite well-known. If you haven't heard of it, watch it now.

Now try the sequel (another YouTube video).

Even when you're expecting the tbevyyn, you still miss other things. Attention doesn't help in noticing what you aren't looking for.

More here.

Comment author: NancyLebovitz 23 July 2010 09:15:55PM *  4 points [-]

Thought without Language: a discussion of adults who've grown up profoundly deaf without having been exposed to sign language or lip-reading.

Edited because I labeled the link as "Language without Thought" - this counts as an example of itself.

Comment author: JoshuaZ 09 July 2010 02:00:20PM 4 points [-]

Machine learning is now being used to predict manhole explosions in New York. This is another example of how machine learning and specialized AI are becoming increasingly commonplace, to the point where they are being used for very mundane tasks.

Comment author: billswift 09 July 2010 05:21:25PM 6 points [-]

Somebody said that the reason there is no progress in AI is that once a problem domain is understood well enough that there are working applications in it, nobody calls it AI any longer.

Comment author: wnoise 09 July 2010 08:55:55PM 4 points [-]

I think philosophy is a similar case. Physics used to be squarely in philosophy, until it was no longer a confused mess, but actually useful. Linguistics too used to be considered a branch of philosophy.

Comment author: Blueberry 09 July 2010 08:58:37PM 3 points [-]

As did economics.

Comment author: whpearson 10 July 2010 01:14:01PM 10 points [-]

Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)? I can't guarantee an unbiased account, as I was a player. But I think it might be interesting, purely as an example where social situations and what should be done are not as simple as sometimes portrayed.

Comment author: Emile 10 July 2010 03:23:49PM 9 points [-]

Would people be interested in a description of someone with high-social skills failing in a social situation (getting kicked out of a house)?

I'm not sure it's that relevant to rationality, but I think most humans (myself included!) are interested in hearing juicy gossip, especially if it features a popular trope such as "high status (but mildly disliked by the audience) person meets downfall".

How about this division of labor: you tell us the story and we come up with some explanation for how it relates to rationality, probably involving evolutionary psychology.

Comment author: Barry_Cotter 09 July 2010 11:19:52AM 9 points [-]

What's the deal with programming as a career? It seems like the lower levels, at least, should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

E.g., FizzBuzz. Apparently most people who come into an interview won't be able to do it. Now, I can't code or anything, but computers do only and exactly what you tell them (assuming you're not dealing with a thicket of code so dense it has emergent properties), so here's what I'd tell the computer to do:

    Proceed from 0 to x, in increments of 1 (where x = whatever)
    If divisible by 3, remainder 0, associate fizz with number
    If divisible by 5, remainder 0, associate buzz with number
    Make ordered list from 0 to x, of numbers associated with fizz OR buzz
    For numbers associated with fizz NOT buzz, append fizz
    For numbers associated with buzz NOT fizz, append fizz
    For numbers associated with fizz AND buzz, append fizzbuzz

I ask out of interest in acquiring money on eLance, RentACoder, oDesk, etc. I'm starting from a position of total ignorance, but y'know, it doesn't seem like learning C and understanding Concrete Mathematics and TAOCP in a useful or even deep way would be the work of more than a year, while it would place one well above average in some domains of this activity.

Or have I missed something really obvious and important?

Comment author: twanvl 09 July 2010 12:20:20PM 11 points [-]

I have no numbers for this, but the idea is that after interviewing for a job, competent people get hired, while incompetent people do not. These incompetents then have to interview for other jobs, so they will be seen more often, and complained about a lot. So perhaps the perceived prevalence of incompetent programmers is a result of availability bias (?).

This theory does not explain why this problem occurs in programming but not in other fields. I don't even know whether that is true. Maybe the situation is the same elsewhere, and I am biased here because I am a programmer.

Comment author: Emile 09 July 2010 08:13:31PM *  6 points [-]

Joel Spolsky gave a similar explanation.

That means, in this horribly simplified universe, that the entire world could consist of 1,000,000 programmers, of whom the worst 199 keep applying for every job and never getting them, but the best 999,801 always get jobs as soon as they apply for one. So every time a job is listed the 199 losers apply, as usual, and one guy from the pool of 999,801 applies, and he gets the job, of course, because he's the best, and now, in this contrived example, every employer thinks they're getting the top 0.5% when they're actually getting the top 99.9801%.

Makes sense.

I'm a programmer, and haven't noticed that many horribly incompetent programmers (which could count as evidence that I'm one myself!).
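
That toy model is easy to check numerically. Here's a minimal sketch in Python under Spolsky's own assumptions (his numbers, my mechanics): the 199 weakest candidates apply to every opening and are never hired, while one strong candidate applies per opening and is hired on the spot; the number of openings is arbitrary.

    # A minimal sketch of Spolsky's toy model: rejected candidates
    # re-enter the applicant pool, hired ones leave it for good.
    POOL_BAD = 199      # the perpetual applicants, never hired
    OPENINGS = 10000    # arbitrary number of job openings to simulate

    applications_bad = POOL_BAD * OPENINGS   # the same 199 resumes, every time
    applications_good = 1 * OPENINGS         # one fresh strong candidate each

    frac_bad = float(applications_bad) / (applications_bad + applications_good)
    print "fraction of applications from the bottom 199: %.3f" % frac_bad
    # -> 0.995: nearly every resume an employer sees comes from the weakest
    #    0.02% of the million programmers in the model.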

Comment author: sketerpot 10 July 2010 08:36:52PM *  2 points [-]

Do you consider fizzbuzz trivial? Could you write an interpreter for a simple Forth-like language, if you wanted to? If the answers to these questions are "yes", then that's strong evidence that you're not a horribly incompetent programmer.

Is this reassuring?

Comment author: Emile 10 July 2010 09:05:33PM 2 points [-]

Do you consider fizzbuzz trivial?

Yes

Could you write an interpreter for a simple Forth-like language, if you wanted to?

Probably; I made a simple lambda-calculus interpreter once and started working on a Lisp parser (I don't think I got much further than the 'parsing' bit). Forth looks relatively simple, though correctly parsing quotes and comments is always a bit tricky.

Of course, I don't think I'm a horribly incompetent programmer -- like most humans, I have a high opinion of myself :D
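
For the curious: an interpreter of the kind under discussion really does fit in a screenful. Here is a minimal sketch in Python; the word set (+, *, dup, drop, .) is a made-up toy subset rather than any real Forth, and it skips the quote-and-comment parsing mentioned above.

    import sys

    # A toy Forth-like interpreter: a data stack, a dictionary of words,
    # and whitespace tokenization; anything that isn't a word is a number.
    def run(source):
        stack = []
        words = {
            '+':    lambda: stack.append(stack.pop() + stack.pop()),
            '*':    lambda: stack.append(stack.pop() * stack.pop()),
            'dup':  lambda: stack.append(stack[-1]),
            'drop': lambda: stack.pop(),
            '.':    lambda: sys.stdout.write(str(stack.pop()) + '\n'),
        }
        for token in source.split():
            if token in words:
                words[token]()            # execute a known word
            else:
                stack.append(int(token))  # push a literal number
        return stack

    run("2 3 + dup * .")   # prints 25, i.e. (2 + 3) squared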

Comment author: wedrifid 09 July 2010 03:27:34PM 8 points [-]

What's the deal with programming as a career? It seems like the lower levels, at least, should be readily accessible even to people of thoroughly average intelligence, but I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

From what I can tell the average person is borderline incompetent when it comes to the 'actually getting work done' part of a job. It is perhaps slightly more obvious with a role such as programming where output is somewhat closer to the level of objective physical reality.

Comment author: JRMayne 10 July 2010 03:06:37AM 5 points [-]

Proceed from 0 to x, in increments of 1, (where x =whatever)
If divisible by 3, remainder 0, associate fizz with number
If divisible by 5, remainder 0, associate buzz with number,
Make ordered list from o to x, of numbers associated with fizz OR buzz
For numbers associated with fizz NOT buzz, append fizz
For numbers associated with buzz NOT fizz, append fizz
For numbers associated with fizz AND buzz, append fizzbuzz

I don't know anything about FizzBuzz, but your program generates no buzzes and lots of fizzes (appending fizz to numbers associated only with fizz or buzz.) This is not a particularly compelling demonstration of your point that it should be easy.

(I'm not a programmer, at least not professionally. The last serious program I wrote was 23 years ago in Fortran.)

Comment author: sketerpot 10 July 2010 08:34:38PM *  5 points [-]

The bug would have been obvious if the pseudocode had been indented. I'm convinced that a large fraction of beginner programming bugs arise from poor code formatting. (I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.)

Actually, maybe this is a sign of a bigger conceptual problem: a lot of people see programs as sequences of instructions, rather than a tree structure. Indentation seems natural if you hold the latter view, and pointless if you can only perceive programs as serial streams of tokens.

Comment author: Douglas_Knight 10 July 2010 10:20:30PM 5 points [-]

I got this idea from watching beginners make mistakes, over and over again, which would have been obvious if they had heeded my dire warnings and just frickin' indented their code.

This seems to predict that python solves this problem. Do you have any experience watching beginners with python? (Your second paragraph suggests that indentation is just the symptom and python won't help.)

Comment author: cousin_it 09 July 2010 12:07:48PM *  5 points [-]

Your general point is right. Ever since I started programming, it always felt like money for free. As long as you have the right mindset and never let yourself get intimidated.

Your solution to FizzBuzz is too complex and uses data structures ("associate whatever with whatever", then ordered lists) that it could've done without. Instead, do this:

    for x in range(1, 101):
        fizz = (x%3 == 0)
        buzz = (x%5 == 0)
        if fizz and buzz:
            print "FizzBuzz"
        elif fizz:
            print "Fizz"
        elif buzz:
            print "Buzz"
        else:
            print x

This is runnable Python code. (NB: to write code in comments, indent each line by four spaces.) Python is a simple language, maybe the best for beginners among all mainstream languages. Download the interpreter and use it to solve some Project Euler problems for finger exercises, because most actual programming tasks are a wee bit harder than FizzBuzz.
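
For a taste of that "finger exercise" level: the first Project Euler problem (sum every multiple of 3 or 5 below 1000) is FizzBuzz's close cousin, and a one-liner in the same Python:

    # Project Euler problem 1: sum the multiples of 3 or 5 below 1000.
    print sum(x for x in range(1000) if x % 3 == 0 or x % 5 == 0)  # 233168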

Comment author: SilasBarta 09 July 2010 12:41:43PM 2 points [-]

How did you first find work? How do you usually find work, and what would you recommend competent programmers do to get started in a career?

Comment author: jimrandomh 09 July 2010 03:02:25PM 6 points [-]

The least-effort strategy, and the one I used for my current job, is to talk to recruiting firms. They have access to job openings that are not announced publicly, and they have strong financial incentives to get you hired. The usual structure, at least for those I've worked with, is that the prospective employee pays nothing, while the employer pays some fraction of a year's salary for a successful hire, where success is defined by lasting longer than some duration.

(I've been involved in hiring at the company I work for, and most of the candidates fail the first interview on a question of comparable difficulty to fizzbuzz. I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.)

Comment author: Vladimir_Nesov 09 July 2010 03:12:45PM 4 points [-]

I think the problem is that there are some unteachable intrinsic talents necessary for programming, and many people irrevocably commit to getting comp sci degrees before discovering that they can't be taught to program.

I think there are failure modes from the curiosity-stopping anti-epistemology cluster that allow you to fail to learn indefinitely, because you don't recognize what you need to learn, and so never manage to actually learn it. With the right approach, anyone who is not seriously stupid could be taught (but it might take lots of time and effort, so it's often not worth it).

Comment author: cousin_it 09 July 2010 01:05:12PM *  5 points [-]

My first paying job was webmaster for a Quake clan that was administered by some friends of my parents. I was something like 14 or 15 then, and never stopped working since (I'm 27 now). Many people around me are aware of my skills, so work usually comes to me; I had about 20 employers (taking different positions on the spectrum from client to full-time employer) but I don't think I ever got hired the "traditional" way with a resume and an interview.

Right now my primary job is a fun project we started some years ago with my classmates from school, and it's grown quite a bit since then. My immediate boss is a former classmate of mine, and our CEO is the father of another of my classmates; moreover, I've known him since I was 12 or so when he went on hiking trips with us. In the past I've worked for friends of my parents, friends of my friends, friends of my own, people who rented a room at one of my schools, people who found me on the Internet, people I knew from previous jobs... Basically, if you need something done yesterday and your previous contractor was stupid, contact me and I'll try to help :-)

ETA: I just noticed that I didn't answer your last question. I'm not sure what to recommend to competent programmers, because I've never needed to ask others for recommendations of this sort (hah, that pattern again). Maybe it's about networking: back when I had a steady girlfriend, I spent about three years supporting our "family" alone by random freelance work, so naturally I learned to present a good face to people. Maybe it's about location: Moscow has a chronic shortage of programmers, and I never stop searching for talented junior people myself.

Comment author: Blueberry 09 July 2010 04:03:42PM 3 points [-]

I was very surprised by this until I read the word "Moscow."

Comment author: gwern 10 July 2010 10:04:28AM *  4 points [-]

Most people find the concept of programming obvious, but the doing impossible.

--"Epigrams in Programming", by Alan J. Perlis; <small>ACM's SIGPLAN publication, September, 1982

Comment author: Daniel_Burfoot 09 July 2010 01:52:39PM 4 points [-]

I've read a lot that leads me to believe the average professional programmer is borderline incompetent.

Programming as a field exhibits a weird bimodal distribution of talent. Some people are just in it for the paycheck, but others think of it as a true intellectual and creative challenge. Not only does the latter group spend extra hours perfecting their art, they also tend to be higher-IQ. Most of them could make better money in the law/medicine/MBA path. So obviously the "programming is an art" group is going to have a low opinion of the "programming is a paycheck" group.

Comment author: gwern 10 July 2010 10:06:18AM *  3 points [-]

Programming as a field exhibits a weird bimodal distribution of talent.

Do we have any refs for this? I know there's "The Camel Has Two Humps" (Alan Kay on it, the PDF), but anything else?

Comment author: Morendil 09 July 2010 12:46:22PM 4 points [-]

I'll second the suggestion that you try your hand at some actual programming tasks, relatively easy ones to start with, and see where that gets you.

The deal with programming is that some people grok it readily and some don't. There seems to be some measure of talent involved that conscientious hard work can't replace.

Still, it seems to me (I have had a post about this in the works for ages) that anyone keen on improving their thinking can benefit from giving programming a try. It's like math in that respect.

Comment author: MartinB 09 July 2010 11:39:54AM *  3 points [-]

I think you overestimate human curiosity, for one. Not everyone implements prime searching or Conway's Game of Life for fun. For two, even those that implement their own fun projects are not necessarily great programmers. It seems there are those that get pointers, and the others. For three, where does a company advertise? There is a lot of mass mailing going on by not-so-competent folks. I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.

http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG

Comment author: sketerpot 10 July 2010 08:46:31PM 2 points [-]

It seems there are those that get pointers, and the others.

Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used.

Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?

Comment author: wedrifid 11 July 2010 01:35:47AM 5 points [-]

Are there really people who don't get pointers? I'm having a hard time even imagining this. Pointers really aren't that hard, if you take a few hours to learn what they do and how they're used.

There really are people who would not take that few hours.

Comment author: cata 10 July 2010 09:16:58PM *  4 points [-]

I don't know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming) I understood input/output, loops, functions, variables, but I really didn't get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams and I just absolutely could not figure out what was going on. I don't know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial.

So there might be some kind of level of abstract-thinking-ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.

Comment author: apophenia 10 July 2010 10:18:52PM 2 points [-]

There are really people who don't get pointers.

Comment author: Morendil 10 July 2010 10:30:20PM 2 points [-]

One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.)

Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
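
What Morendil describes is essentially a hand-rolled virtual method table. Python has first-class functions rather than raw pointers, but a rough sketch of the same design (all names invented for illustration) looks like this:

    # A 'struct' (here a dict) carrying a function slot that can be
    # repointed at runtime; the function takes the struct itself as its
    # first parameter, exactly as described above.
    def move_walk(animal):
        print animal['name'] + " walks"

    def move_swim(animal):
        print animal['name'] + " swims"

    duck = {'name': 'duck', 'move': move_walk}
    duck['move'](duck)         # -> duck walks

    duck['move'] = move_swim   # repoint the slot as conditions change
    duck['move'](duck)         # -> duck swims

Swap the dict for a C struct and the functions for function pointers, and you have the idiom described above -- which is also, one step further, how early object systems were built.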

Comment author: SilasBarta 28 July 2010 07:08:02PM *  12 points [-]

Why are Roko's posts deleted? Every comment or post he made since April last year is gone! WTF?

Edit: It looks like this discussion sheds some light on it. As best I can tell, Roko said something that someone didn't want to get out, so someone (maybe Roko?) deleted a huge chunk of his posts just to be safe.

Comment author: Eliezer_Yudkowsky 28 July 2010 07:40:45PM *  7 points [-]

I see. A side effect of banning one post, I think; only one post should've been banned, for certain. I'll try to undo it. There was a point when a prototype of LW had just gone up, someone somehow found it and posted using an obscene user name ("masterbater"), and code changes were quickly made to get that out of the system when their post was banned.

Holy Cthulhu, are you people paranoid about your evil administrator. Notice: I am not Professor Quirrell in real life.

EDIT: No, it wasn't a side effect, Roko did it on purpose.

Comment author: Unnamed 28 July 2010 07:54:15PM 15 points [-]

Notice: I am not Professor Quirrell in real life.

Indeed. You are open about your ambition to take over the world, rather than hiding behind the identity of an academic.

Comment author: whpearson 28 July 2010 07:43:56PM 12 points [-]

Notice: I am not Professor Quirrell in real life.

And that is exactly what Professor Quirrell would say!

Comment author: Eliezer_Yudkowsky 28 July 2010 08:31:10PM 15 points [-]

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Comment author: RobinZ 28 July 2010 08:40:20PM 8 points [-]
Comment author: wedrifid 25 September 2010 07:20:56AM 3 points [-]

Professor Quirrell wouldn't give himself away by writing about Professor Quirrell, even after taking into account that this is exactly what he wants you to think.

Of course <level of reasoning plus one> as you know very well. :)

Comment author: DanielVarga 30 July 2010 04:48:55AM 10 points [-]

A side effect of banning one post, I think;

In a certain sense, it is.

Comment author: JoshuaZ 29 July 2010 12:22:54AM 6 points [-]

Notice: I am not Professor Quirrell in real life.

Of course, we already established that you're Light Yagami.

Comment author: thomblake 02 August 2010 02:36:19PM 3 points [-]

I am not Professor Quirrell in real life.

I'm not sure we should believe you.

Comment author: Roko 28 July 2010 08:01:29PM 9 points [-]

I've deleted them myself. I think that my time is better spent looking for a quant job to fund x-risk research than on LW, where it seems I am actually doing active harm by staying rather than merely wasting time. I must say, it has been fun, but I think I am in the region of negative returns, not just diminishing ones.

Comment author: JoshuaZ 28 July 2010 11:59:54PM *  12 points [-]

I'm deeply confused by this logic. There was one post where due to a potentially weird quirk of some small fraction of the population, reading that post could create harm. I fail to see how the vast majority of other posts are therefore harmful. This is all the more the case because this breaks the flow of a lot of posts and a lot of very interesting arguments and points you've made.

ETA: To be more clear, leaving LW doesn't mean you need to delete the posts.

Comment author: daedalus2u 29 July 2010 12:07:22AM 6 points [-]

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

Comment author: Vladimir_Nesov 28 July 2010 10:38:29PM *  45 points [-]

So you've deleted the posts you've made in the past. This is harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable.

For example, consider these posts, and comments on them, that you deleted:

I believe it's against community blog ethics to delete posts in this manner. I'd like them restored.

Edit: Roko accepted this argument and said he's OK with restoring the posts under an anonymous username (if it's technically possible).

Comment author: cousin_it 29 July 2010 09:11:13AM *  6 points [-]

It's ironic that, from a timeless point of view, Roko has done well. Future copies of Roko on LessWrong will not receive the same treatment as this copy did, because this copy's actions constitute proof of what happens as a result.

(This comment is part of my ongoing experiment to explain anything at all with timeless/acausal reasoning.)

Comment author: rhollerith_dot_com 29 July 2010 12:11:27AM *  3 points [-]

Parent is inaccurate: although Roko's comments are not, Roko's posts (i.e., top-level submissions) are still available, as are their comment sections minus Roko's comments (but Roko's name is no longer on them and they are no longer accessible via /user/Roko/ URLs).

Comment author: RobinZ 29 July 2010 03:30:50AM 13 points [-]

Not via user/Roko or via /tag/ or via /new/ or via /top/ or via / - they are only accessible through direct links saved by previous users, and that makes them much harder to stumble upon. This remains a cost.

Comment author: nickernst 18 August 2010 02:36:24AM 5 points [-]

Could the people who have such links post them here?

Comment author: Blueberry 29 July 2010 10:36:06AM 22 points [-]

And I'd like the post of Roko's that got banned restored. If I were Roko I would be very angry about having my post deleted because of an infinitesimal far-fetched chance of an AI going wrong. I'm angry about it now and I didn't even write it. That's what was "harmful for the blog, disrupts the record and makes the comments by other people on those posts unavailable." That's what should be against the blog ethics.

I don't blame him for removing all of his contributions after his post was treated like that.

Comment author: [deleted] 29 July 2010 05:37:07AM *  8 points [-]

It's also generally impolite (though completely within the TOS) to delete a person's contributions according to some arbitrary rules. Given that Roko is the seventh highest contributor to the site, I think he deserves some more respect. Since Roko was insulted, there doesn't seem to be a reason for him to act nicely to everyone else. If you really want the posts restored, it would probably be more effective to request an admin to do so.

Comment author: Clippy 28 July 2010 11:32:11PM 25 points [-]

I understand. I've been thinking about quitting LessWrong so that I can devote more time to earning money for paperclips.

Comment deleted 29 July 2010 12:38:10AM *  [-]
Comment author: cousin_it 29 July 2010 09:06:32AM *  11 points [-]

I'm not them, but I'd very much like your comment to stay here and never be deleted.

Comment author: Aleksei_Riikonen 30 July 2010 01:18:15PM 4 points [-]

Does not seem very nice to take such an out-of-context partial quote from Eliezer's comment. You could have included the first paragraph, where he commented on the unusual nature of the language he's going to use now (the comment indeed didn't start off as you here implied), and also the later parts where he again commented on why he thought such unusual language was appropriate.

Comment author: SilasBarta 27 July 2010 04:03:05AM 3 points [-]

Slashdot having an epic case of tribalism blinding their judgment? This poster tries to argue that, despite Intelligent Design proponents being horribly wrong, it is still appropriate for them to use the term "evolutionist" to refer to those they disagree with.

The reaction seems to be basically, "but they're wrong, why should they get to use that term?"

Huh?

Comment author: ata 27 July 2010 04:21:51AM *  3 points [-]

Slashdot having an epic case of tribalism blinding their judgment?

I haven't regularly read Slashdot in several years, but I seem to recall that it was like that pretty much all the time.

Comment author: JoshuaZ 27 July 2010 04:05:34AM 2 points [-]

There's a legitimate reason to not want ID proponents and creationists to use the term "evolutionist" although it isn't getting stated well in that thread. In particular, the term is used to portray evolution as an ideology with ideological adherents. Thus, the use of the term "evolutionism" as well. It seems like the commentators in question have heard some garbled bit about that concern and aren't quite reproducing it accurately.

Comment author: Unknowns 26 July 2010 10:40:51AM 3 points [-]

A second post has been banned. Strange: it was on a totally different topic from Roko's.

Comment author: cousin_it 26 July 2010 11:25:11AM *  3 points [-]

(comment edited)

I wonder why PlaidX's post isn't getting deleted - the discussion there is way closer to the forbidden topic.

Comment author: Eliezer_Yudkowsky 26 July 2010 12:02:50PM 2 points [-]

Still the sort of thing that will send people close to the OCD side of the personality spectrum into a spiral of nightmares, which, please note, has apparently already happened in at least two cases. I'm surprised by this, but accept reality. It's possible we may have more than the usual number of OCD-side-of-the-spectrum people among us.

Comment author: xamdam 26 July 2010 01:55:27PM 3 points [-]

Was the discussion in question epistemologically interesting (vs. intellectual masturbation)? If so, how many OCD personalities joining the site would call for closing the thread? I am curious about decision criteria. Thanks.

As an aside, I've had some SL-related psychological effects, particularly related to the material notion of self: a bit of trouble going to sleep after realizing that logically there is little distinction from a death-state. This lasted a short while, but then you just learn to "stop worrying and love the bomb". Besides "time heals all wounds", certain ideas helped, too. (I actually think this is an important SL, though it does not sit well within the SciFi hierarchy.)

This worked for me, but I am generally very low on the OCD scale, and I am still mentally not quite ready for some of the discussions going on here.

Comment author: Apprentice 26 July 2010 02:35:32PM 4 points [-]

If so, how many OCD personalities joining the site would call for closing the thread? I am curious about decision criteria. Thanks.

It is impossible to have rules without Mr. Potter exploiting them.

Comment author: Alexandros 18 July 2010 12:13:19PM *  3 points [-]

John Hari - My Experiment With Smart Drugs (2008)

How does everyone here feel about these 'smart drugs'? They seem quite tempting to me, but are there candidates that have been in use long enough to be considered safe?

Comment author: NancyLebovitz 20 July 2010 07:55:16AM 3 points [-]

It surprised me that he didn't consider taking provigil one or two days a week.

It also should have surprised me (but didn't-- it just occurred to me) that he didn't consider testing the drugs' effects on his creativity.

Comment author: arundelo 18 July 2010 02:37:00PM 3 points [-]

There's some discussion here and here.

Comment author: Will_Newsome 16 July 2010 12:10:49AM 3 points [-]

Have any LWers traveled the US without a car/house/lot-of-money for a year or more? Is there anything an aspiring rationalist in particular should know on top of the more traditional advice? Did you learn much? Was there something else you wish you'd done instead? Any unexpected setbacks (e.g. ended up costing more than expected; no access to books; hard to meet worthwhile people; etc.)? Any unexpected benefits? Was it harder or easier than you had expected? Is it possible to be happy without a lot of social initiative? Did it help you develop social initiative? What questions do you wish you would have asked beforehand, and what are the answers to those questions?

Actually, any possibly relevant advice or wisdom would be appreciated. :D

Comment author: [deleted] 15 July 2010 05:36:53PM 3 points [-]

I figure the open thread is as good as any for a personal advice request. It might be a rationality issue as well.

I have incredible difficulty believing that anybody likes me. Ever since I was old enough to be aware of my own awkwardness, I have the constant suspicion that all my "friends" secretly think poorly of me, and only tolerate me to be nice.

It occurred to me that this is a problem when a close friend actually said, outright, that he liked me -- and I happen to know that he never tells even white lies, as a personal scruple -- and I simply couldn't believe him. I know I've said some weird or embarrassing things in front of him, and so I just can't conceive of him not looking down on me.

So. Is there a way of improving my emotional response to fit the evidence better? Sometimes there is evidence that people like me (they invite me to events; they go out of their way to spend time with me; or, in the generalized professional sense, I get some forms of recognition for my work). But I find myself ignoring the good and only seeing the bad.

Comment author: [deleted] 17 July 2010 01:44:27AM 4 points [-]

Update for the curious: did talk to a friend (the same one mentioned above, who, I think, is a better "shrink" than some real shrinks) and am now resolved to kick this thing, because sooner or later, excessive approval-seeking will get me in trouble.

I'm starting with what I think of as homebrew CBT: I will not gratuitously apologize or verbally belittle myself. I will try to replace "I suck, everyone hates me" thoughts with saner alternatives. I will keep doing this even when it seems stupid and self-deluding. Hopefully the concrete behavioral stuff will affect the higher-level stuff.

After all. A mathematician I really admire gave me career advice -- and it was "Believe in yourself." Yeah, in those words, and he's a logical guy, not very soft and fuzzy.

Comment author: WrongBot 15 July 2010 06:41:55PM 2 points [-]

For what it's worth, this is often known as Imposter Syndrome, though it's not any sort of real psychiatric diagnosis. Unfortunately, I'm not aware of any reliable strategies for defeating it; I have a friend who has had similar issues in a more academic context and she seems to have largely overcome the problem, but I'm not sure as to how.

Comment author: khafra 15 July 2010 05:56:36PM *  2 points [-]

Alicorn's Living Luminously series covers some methods of systematic mental introspection and tweaking like this. The comments on alief are especially applicable.

Comment author: ciphergoth 15 July 2010 07:52:23AM *  3 points [-]

An object lesson in how not to think about the future:

http://www.futuretimeline.net/

(from Pharyngula)

Comment author: xamdam 09 July 2010 02:58:10PM 3 points [-]

Very interesting story about a project that involved massive elicitation of expert probabilities. Especially of interest to those with Bayes Nets/Decision analysis background. http://web.archive.org/web/20000709213303/www.lis.pitt.edu/~dsl/hailfinder/probms2.html

Comment author: whpearson 11 July 2010 06:27:58PM *  5 points [-]

How Facts Backfire

Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information.

There are a number of ways you can run with this article. It is interesting seeing it in the major press. It is also a little ironic that it is presenting facts to try and overturn an opinion (that information cannot be good for trying to overturn an opinion).

In terms of existential risk and thinking better in general: obviously facts can sometimes overturn opinions, but it makes me wonder, where is the organisation that uses non-fact-based methods to sway opinion about existential risk? It would make sense for them to be separate: the fact-based organisations (SIAI, FHI) need to be honest so that people who are fact-philic to their message will trust them. I tend to ignore the fact-phobic (with respect to existential risk) people. But if it became sufficiently clear that foom-style AI was possible, engineering society would become necessary.

Comment author: Kaj_Sotala 12 July 2010 12:05:31AM 5 points [-]

Interesting tidbit from the article:

One avenue may involve self-esteem. Nyhan worked on one study in which he showed that people who were given a self-affirmation exercise were more likely to consider new information than people who had not. In other words, if you feel good about yourself, you’ll listen — and if you feel insecure or threatened, you won’t.

I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen. These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.

Comment author: MBlume 12 July 2010 11:31:21PM *  5 points [-]

Data point: After years of having the correct arguments in my hand, having indeed generated many of them myself, and simply refusing to update, Eliezer, Cectic, and Dan Meissler ganged up on me and got the job done.

I think Jesus and Mo helped too, now I think of it. That period's already getting murky in my head =/

Anyhow, point is, none of the above are what you'd call gentle.

ETA: I really do think humor is incredibly corrosive to religion. Years before this, the closest I ever came to deconversion was right after I read "Kissing Hank's Ass"

Comment author: whpearson 12 July 2010 11:41:51PM *  4 points [-]

I'd guess aggression would have a polarising effect, depending upon ingroup or outgroup affiliation.

Aggression from a member of your own group is directed at something important that you ought to take note of. Aggression from an outsider is possibly directed at you, so something to be ignored (if not credible) or countered.

We really need some students to do some tests upon, or a better way of searching psych research than google.

Comment author: cupholder 12 July 2010 11:52:11PM 3 points [-]

These results seem to support that, though there have also been contradictory reports from people saying that the very aggressiveness was what made them actually think.

Presumably there's heterogeneity in people's reactions to aggressiveness and to soft approaches. Most likely a minority of people react better to aggressive approaches and most people react better to being fed opposing arguments in a sandwich with self-affirmation bread.

Comment author: twanvl 13 July 2010 05:11:21PM 2 points [-]

I have long been thinking that the openly aggressive approach some display in promoting atheism / political ideas / whatever seems counterproductive, and more likely to make the other people not listen than it is to make them listen.

I believe aggressive debates are not about convincing the people you are debating with, that is likely to be impossible. Instead it is about convincing third parties who have not yet made up their mind. For that purpose it might be better to take an overly extreme position and to attack your opponents as much as possible.

Comment author: Christian_Szegedy 13 July 2010 12:30:37AM *  2 points [-]

I think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important.

On the other hand, if you, e.g., independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)

Comment author: JoshuaZ 14 July 2010 01:58:04PM 2 points [-]

The primary study in question is here. I haven't been able to locate online a copy of the study about self-esteem and corrections.

Comment author: cerebus 17 July 2010 02:11:24PM *  4 points [-]

Nobel Laureate Jean-Marie Lehn is a transhumanist.

We are still apes and are fighting all around the world. We are in the prisons of dogmatism, fundamentalism and religion. Let me say that clearly. We must learn to be rational ... The pace at which science has progressed has been too fast for human behaviour to adapt to it. As I said we are still apes. A part of our brain is still a paleo-brain and many of [our] reactions come from our fight or flight instinct. As long as this part of the brain can take over control [of] the rational part of the brain (we will face these problems). Some people will jump up at what I am going to say now but I think at some point of time we will have to change our brains.

Comment author: gwern 29 July 2010 10:19:37AM 2 points [-]

Sparked by my recent interested in PredictionBook.com, I went back to take a look at Wrong Tomorrow, a prediction registry for pundits - but it's down. And doesn't seem to have been active recently.

I've emailed the address listed on the original OB ANN for WT, but while I'm waiting on that, does anyone know what happened to it?

Comment author: Matt_Simpson 29 July 2010 06:06:04AM 2 points [-]

UDT/TDT understanding check: Of the 3 open problems Eliezer lists for TDT, the one UDT solves is counterfactual mugging. Is this correct? (A yes or no is all I'm looking for, but if the answer is no, an explanation of any length would be appreciated)

Comment author: Eliezer_Yudkowsky 29 July 2010 06:01:48PM 3 points [-]

Yes.

Comment author: NancyLebovitz 17 July 2010 09:38:35AM 2 points [-]

Do the various versions of the Efficient Market Hypothesis only apply to investment in existing businesses?

The discussions of possible market blind spots in clothing makes me wonder how close the markets are to efficient for new businesses.

Comment author: ellx 14 July 2010 10:30:16AM 2 points [-]

I'm curious what people's opinions are of Jeff Hawkins' book 'On Intelligence', and specifically the idea that 'intelligence is about prediction'. I'm about halfway through and I'm not convinced, so I was wondering if anybody could point me to further evidence for this or something, cheers

Comment author: nhamann 15 July 2010 07:41:39PM *  2 points [-]

With regards to further reading, you can look at Hawkins' most recent (that I'm aware of) paper, "Towards a Mathematical Theory of Cortical Micro-Circuits". It's fairly technical, however, so I hope your math/neuroscience background is strong (I'm not knowledgeable enough to get much out of it).

You can also take a look at Hawkins' company Numenta, particularly the Technology Overview. Hierarchical Temporal Memory is the name of Hawkins' model of the neocortex, which IIRC he believes is responsible for some of the core prediction mechanisms in the human brain.

Edit: I almost forgot, this video of a talk he presented earlier this year may be the best introduction to HTM.

Comment author: Alexandros 14 July 2010 10:06:42AM *  2 points [-]

I was examining some of the arguments for the existence of god that separate beings into contingent (exist in some worlds but not all) and necessary (exist in all worlds). And it occurred to me that if the multiverse is indeed true, and its branches are all possible worlds, then we are all necessary beings, along with the multiverse, a part of whose structure we are.

Am I retreating into madness? :D

Comment author: Matt_Simpson 13 July 2010 06:06:55AM 2 points [-]

I just finished polishing off a top level post, but 5 new posts went up tonight - 3 of them substantial. So I ask, what should my strategy be? Should I just submit my post now because it doesn't really matter anyway? Or wait until the conversation dies down a bit so my post has a decent shot of being talked about? If I should wait, how long?

Comment author: [deleted] 13 July 2010 08:43:21AM 3 points [-]

Definitely wait. My personal favorite timing is one day for each new (substantial) post.

Comment author: SilasBarta 11 July 2010 01:32:09AM *  5 points [-]

More on the coming economic crisis for young people, and let me say, wow, just wow: the essay is a much more rigorous exposition of the things I talked about in my rant.

In particular, the author had similar problems to me in getting a mortgage, such as how I get told on one side, "you have a great credit score and qualify for a good rate!" and on another, "but you're not good enough for a loan". And he didn't even make the mistake of not getting a credit card early on!

Plus, he gives a lot of information from his personal experience.

Be warned, though: it's mixed with a lot of blame-the-government themes and certainty about future hyperinflation, and the preservation of real estate's value therein, if that kind of thing turns you off.

Edit: Okay, I've edited this comment about eight times now, but I left this out: from a rationality perspective, this essay shows the worst parts of Goodhart's Law: apparently, the old, functional criteria that would correctly identify some mortgage applicants are going to be mandated as the standard on all future mortgages. Yikes!

Comment author: NancyLebovitz 11 July 2010 11:33:20AM 5 points [-]

I've seen discussion of Goodhart's Law + Conservation of Thought playing out nastily in investment. For example, junk bonds started out as finding some undervalued bonds among junk bonds. Fine, that's how the market is supposed to work. Then people jumped to the conclusion that everything which was called a junk bond was undervalued. Oops.

Comment deleted 11 July 2010 12:38:39PM *  [-]
Comment author: nhamann 12 July 2010 08:17:45PM *  6 points [-]

If anyone is interested in seeing comments that are more representative of a mainstream response than what can be found from an Accelerating Future thread, Metafilter recently had a post on the NY Times article.

The comments aren't hilarious and insane, they're more of a casually dismissive nature. In this thread, cryonics is called an "afterlife scam", a pseudoscience, science fiction (technically true at this stage, but there's definitely an implied negative connotation on the "fiction" part, as if you shouldn't invest in cryonics because it's just nerd fantasy), and Pascal's Wager for atheists (The comparison is fallacious, and I thought the original Pascal's Wager was for atheists anyways...). There are a few criticisms that it's selfish, more than a few jokes sprinkled throughout the thread (as if the whole idea is silly), and even your classic death apologist.

All in all, a delightful cornucopia of irrationality.

ETA: I should probably point out that there were a few defenses. The most highly received defense of cryonics appears to be this post. There was also a comment from someone registered with Alcor that was very good, I thought. I attempted a couple of rebuttals, but I don't think they were well-received.

Also, check out this hilarious description of Robin Hanson from a commenter there:

The husband in that article sounded like an annoying nerd. Would I want to be frozen and wake up in a world run by these annoying douchebags? His 'futurecracy' idea seems idiotic (and also unworkable)

I guess that the fatal problem with cryonics is all the freaking nerds interested in it.

Comment author: RobinZ 12 July 2010 08:43:38PM *  6 points [-]

The responses are interesting. I think this is the most helpful to my understanding:

I'm getting sort of tired arguing about the futility of current cryogenics, so I won't.

I will state that, if my spouse fell for some sort of afterlife scam that cost tens of thousands of dollars, I WOULD be angry.

"This is not a hobby or conversation piece,” he wrote in 1968, adding, “it is the struggle for survival. Drive a used car if the cost of a new one interferes. Divorce your wife if she will not cooperate.

Scientology urges the exact same thing.
posted by muddgirl at 5:52 PM on July 11

I think this is the biggest PR hurdle for cryonics: it resembles (superficially) a transparent scam selling the hope of immortality for thousands of dollars.

Comment author: [deleted] 13 July 2010 12:22:13AM 6 points [-]

um... why isn't it? There's a logically possible chance of revival someday, yeah. But with no way to estimate how likely it is, you're blowing money on mere possibility.

We don't normally make bets that depend on the future development of currently unknown technologies. We aren't all investing in cold fusion just because it would be really awesome if it panned out.

Sorry, I know this is a cryonics-friendly site, but somebody's got to say it.

Comment author: Christian_Szegedy 13 July 2010 12:40:58AM *  4 points [-]

There are a lot of alternatives to fusion energy and since energy production is a widely recognized societal issue, making individual bets on that is not an immediate matter of life and death on a personal level.

I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it.

However, if you give a 1% chance to both fusion and cryonics working, it could still make sense to bet on the latter but not on the former.

Comment author: RobinZ 13 July 2010 12:52:43AM 3 points [-]

Well, right off the bat, there's a difference between "cryonics is a scam" and "cryonics is a dud investment". I think there's sufficient evidence to establish the presence of good intentions - the more difficult question is whether there's good evidence that resuscitation will become feasible.

Comment author: lsparrish 13 July 2010 12:33:32AM 3 points [-]

That's ok, it's a skepticism friendly site as well.

I don't see a mechanism whereby I get a benefit within my lifetime by investing in cold fusion, on the off chance that it is eventually invented and implemented.

Comment author: JoshuaZ 13 July 2010 12:31:49AM 3 points [-]

But with no way to estimate how likely it is, you're blowing money on mere possibility.

There isn't no way to estimate it. We can make reasonable estimations of probability based on the data we have (what we know about nanotech, what we know about brain function, what we know about chemical activity at very low temperatures, etc.).

Moreover, it is always possible to estimate something's likelihood, and one cannot simply say "oh, this is difficult to estimate accurately, so I'll assign it a low probability." For any statement A that is difficult to estimate, I could just as easily make the same argument for ~A. Obviously, A and ~A can't both have low probabilities, since P(A) + P(~A) = 1.

Comment author: [deleted] 13 July 2010 12:47:27AM 2 points [-]

That's true; uncertainty about A doesn't make A less likely. It does, however, make me less likely to spend money on A, because I'm risk-averse.

Comment author: lsparrish 13 July 2010 01:31:02AM 3 points [-]

Have you decided on a specific sum that you would spend based on your subjective impression of the chances of cryonics working?

Comment author: [deleted] 13 July 2010 01:34:44AM 2 points [-]

Maybe $50. That's around the most I'd be willing to accept losing completely.

Comment author: lsparrish 13 July 2010 01:54:57AM 2 points [-]

Nice. I believe that would buy you indefinite cooling as a neuro patient, if about a billion other individuals (perhaps as few as 100 million) are also willing to spend the same amount.

Would you pay that much for a straight-freeze, or would that need to be an ideal perfusion with maximum currently-available chances of success?

Comment author: EStokes 17 July 2010 12:08:08AM *  2 points [-]

There's always a way to estimate how likely something is, even if it's not a very accurate prediction. And "mere" used like that seems kinda like a dark side word, if you'll excuse me.

Cryonics is theoretically possible, in that it isn't inconsistent with science/physics as we know it so far. I can't really delve into this part much, as I don't know anything about cold fusion and thus can't understand the comparison properly, but it sounds as if it might be inconsistent with physics?

Possibly relevant: Is Molecular Nanotechnology Scientific?

Also, the benefits of cryonics working if you invested in it would be greater than those of investing in cold fusion.

And this is just the impression I get, but it sounds like you're being a contrarian contrarian. I think it's your last sentence: it made me think of Lonely Dissent.

Comment author: [deleted] 17 July 2010 12:54:56AM 3 points [-]

The unfair thing is, the more a community (like LW) values critical thinking, the more we feel free to criticize it. You get a much nicer reception criticizing a cryonicist's reasoning than criticizing a religious person's. It's easy to criticize people who tell you they don't mind. The result is that it's those who need constructive criticism the most who get the least. I'll admit I fall into this trap sometimes.

Comment author: jimrandomh 13 July 2010 12:57:06PM *  3 points [-]

But with no way to estimate how likely it is, you're blowing money on mere possibility.

You seem to be under the assumption that there is some minimum amount of evidence needed to give a probability. This is very common, but it is not the case. It's just as valid to say that the probability that an unknown statement X about which nothing is known is true is 0.5, as it is to say that the probability that a particular well-tested fair coin will come up heads is 0.5.

Probabilities based on lots of evidence are better than probabilities based on little evidence, of course; and in particular, probabilities based on little evidence can't be too close to 0 or 1. But not having enough evidence doesn't excuse you from having to estimate the probability of something before accepting or rejecting it.
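
One way to make "probabilities based on little evidence can't be too close to 0 or 1" concrete -- my illustration, not anything from the comment above -- is Laplace's rule of succession, which estimates (successes + 1) / (trials + 2):

    # Laplace's rule of succession: a probability estimate that exists
    # even with zero data, and moves toward the extremes only as
    # evidence accumulates.
    def estimate(successes, trials):
        return (successes + 1.0) / (trials + 2)

    print estimate(0, 0)      # 0.5    -- no evidence at all
    print estimate(2, 2)      # 0.75   -- two successes, still hedged
    print estimate(200, 200)  # ~0.995 -- lots of evidence allows extremes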

Comment author: FAWS 13 July 2010 02:19:31PM *  9 points [-]

I'm not disputing your point vs cryonics, but 0.5 will only rarely be the best possible estimate for the probability of X. It's not possible to think about a statement about which literally nothing is known (in the sense of information potentially available to you). At the very least you either know how you became aware of X or that X suddenly came to your mind without any apparent reason. If you can understand X you will know how complex X is. If you don't you will at least know that and can guess at the complexity based on the information density you expect for such a statement and its length.

Example: If you hear someone whom you don't specifically suspect to have a reason to make it up say that Joachim Korchinsky will marry Abigail Medeiros on August 24 that statement probably should be assigned a probability quite a bit higher than 0.5 even if you don't know anything about the people involved. If you generate the same statement yourself by picking names and a date at random you probably should assign a probability very close to 0.

Basically it comes down to this: Most possible positive statements that carry more than one bit of information are false, but most methods of encountering statements are biased towards true statements.

Comment author: Will_Newsome 16 July 2010 12:32:26AM *  2 points [-]

I wonder what the average probability of truth is for every spoken statement made by the human populace on your average day, for various message lengths. Anybody wanna try some Fermi calculations?

I'm guessing it's rather high, as most statements are trivial observations about sensory data, performative utterances, or first-glance approximations of one's preferences. I would also predict sentence accuracy drops off extremely quickly the more words the sentence has, and especially so the more syllables there are per word in that sentence.

Comment author: FAWS 16 July 2010 11:21:12AM *  2 points [-]

Once you are beyond the most elementary of statements I really don't think so, rather the opposite, at least for unique rather than for repeated statements. Most untrue statements are probably either ad hoc lies ("You look great." "That's a great gift." "I don't have any money with me.") or misremembered information.

In the case of of ad hoc lies there is not enough time to invent plausible details and inventing details without time to think it through increases the risk of being caught, in the case of misremembered information you are less likely to know or remember additional information you could include in the statement than someone who really knows the subject and wouldn't make that error. Of course more information simply means including more things even the best experts on the subject are simply wrong about as well as more room for misrememberings, but I think the first effect dominates because there are many subjects the second effect doesn't really apply to, e. g. the content of a work of fiction or the constitution of a state (to an extent even legal matters in general).

Complex untrue statements would be things like rehearsed lies and anecdotes/myths/urban legends.

Consider the so-called conjunction fallacy: if it were maladaptive for evaluating the truth of statements encountered normally, it probably wouldn't exist. So in everyday conversation (or at least the sort of situations that are relevant for the propagation of the memes and/or genes involved) complex statements, at least of those kinds that can be observed to be evaluated "fallaciously", are probably more likely to be true.

Comment author: Armok_GoB 30 July 2010 10:00:40PM *  3 points [-]

(So this is just about the first real post I made here and I kinda have stage fright posting here, so if it's horribly bad and uninteresting, please tell me what I did wrong, ok? Also, I've been trying to figure out the spelling and grammar and failed, sorry about that.) (Disclaimer: This post is humorous, and not everything should be taken all too seriously! As someone (Boxo) reviewing it put it: "it's like a contest between 3^^^3 and common sense!")

1) My analysis of http://lesswrong.com/lw/kn/torture_vs_dust_specks/

Let's say 1 second of torture is -1 000 000 utilions. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilions.

Now, I'm tempted to say a dust speck has no negative utility at all, but I'm not COMPLETELY certain I'm right. Let's say there's a 1/1 000 000 chance I'm wrong*, in which case the dust speck is -1 utilion. That means the dust speck option is -1 * 10^-6 * 3^^^3, which is approximately -3^^^3.

-3^^^3 < -10^15, therefore I choose the torture.

2) The ant speck problem.

The ant speck problem is like the dust speck problem, except instead of 3^^^3 humans getting specks in their eyes, it's 3^^^3 ordinary ants, and it's a billion humans being tortured for a millennium.

Now, I'm bigoted against ants, and pretty sure I don't value them as much as humans. In fact, I'm 99.9999% certain I don't value ants' suffering at all. The remaining probability space is dominated by the possibility that moral value is equal to 1000^[the number of neurons in the entity's brain] for brains similar to earth-type animals. Humans have about 10^11 neurons, ants have about 10^4. That means an ant is worth about 10^(-3*10^11) times as much as a human, if it's worth anything at all.

Now let's multiply this together... -1 utilion * 10^(-3*10^11) discount * 1/10^6 that ants are worth anything at all * 1/10^6 that dust specks are bad * 3^^^3... That's about -3^^^3!

And for the other side: -2*10^15 for 50 years. Multiply that by 20, and then by the billion... about -10^25.

-3^^^3 < -10^25, therefore I choose the torture!

((*I do not actually think this, the numbers are for the sake of argument and have little to do with my actual beliefs at all.))
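
The finite side of both calculations checks out in a few lines (the speck side involves 3^^^3, which no computer representation can hold, so only the torture side is run here):

    # Checking the torture-side arithmetic with the post's round numbers.
    utilions_per_second = -10**6
    seconds_per_day = 10**5
    days_in_50_years = 2 * 10**4
    torture_50y = utilions_per_second * seconds_per_day * days_in_50_years
    print torture_50y          # -2 * 10**15, as in part 1

    millennium = 20 * torture_50y   # 1000 years = 20 * 50 years
    print millennium * 10**9        # one billion humans: -4 * 10**25,
                                    # i.e. about -10^25, as in part 2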

3) Obvious derived problems: There are variations of the ant problem, can you work out and post what if...

  • The ants will only be tortured if all the protons in the earth also decay within one second of the choice; the torture, however, is certain?

  • Instead of ants, you have bacteria, with behaviour only as complicated as the equivalent of 1/100 of a neuron?

  • The source you get the info from is unreliable; there's only a 1/googol chance the specks could actually happen, while the torture, again, is certain?

  • All of the above?

Comment author: Vladimir_Nesov 30 July 2010 11:18:30PM *  3 points [-]

Lets say 1 second of torture is -1 000 000 utilions. Because there are about 100 000 seconds in a day, and about 20 000 days in 50 years, that makes -2*10^15 utilions.

Given some heavy utilitarian assumptions. This isn't an argument, it's more plausible to just postulate disutility of torture without explanation.

Comment author: WrongBot 26 July 2010 09:12:53PM 3 points [-]

Given all the recent discussion of contrived infinite torture scenarios, I'm curious to hear if anyone has reconsidered their opinion of my post on Dangerous Thoughts. I am specifically not interested in discussing the details or plausibility of said scenarios.

Comment author: ata 24 July 2010 04:53:43AM 3 points [-]

Has anyone been doing, or thinking of doing, a documentary (preferably feature-length and targeted at popular audiences) about existential risk? People seem to love things that tell them the world is about to end, whether it's worth believing or not (2012 prophecies, apocalyptic religion, etc., and on the more respectable side: climate change, and... anything else?), so it may be worthwhile to have a well-researched, rational, honest look at the things that are actually most likely to destroy us in the next century, while still being emotionally compelling enough to get people to really comprehend it, care about it, and do what they can about it. (Geniuses viewing it might decide to go into existential risk reduction when they might otherwise have turned to string theory; it could raise awareness so that existential risk reduction is seen more widely as an important and respectable area of research; it could attract donors to organizations like FHI, SIAI, Foresight, and Lifeboat; etc.)

Comment author: SilasBarta 27 July 2010 08:53:02PM 2 points [-]

Something weird is going on. Every time I check, virtually all my recent comments are being steadily modded up, but I'm slowly losing karma. So even if someone is on an anti-Silas karma rampage, they're doing it even faster than my comments are being upvoted.

Since this isn't happening on any recent thread that I can find, I'd like to know if there's something to this -- if I made a huge cluster of errors on a thread a while ago. (I also know someone who might have motive, but I don't want to throw around accusations at this point.)

Comment author: NancyLebovitz 28 July 2010 01:23:40AM 4 points [-]

This reminds me of something I mentioned as an improvement for LW a while ago, though for other reasons-- the ability to track all changes in karma for one's posts.

Comment author: Rain 30 July 2010 05:36:25PM *  10 points [-]

I tend to vote down a wide swath of your comments when I come across them in a thread such as this one or this one, attempting to punish you for being mean and wasting people's time. I'm a late reader, so you may not notice those comments being further downvoted; I guess I should post saying what I've done and why.

In the spirit of your desire for explanations, it is for the negative tone of your posts. You create this tone by the small additions you make that cause the text to sound more like verbal speech, specifically: emphasis, filler words, rhetorical questions, and the like. These techniques work significantly better when someone is able to gauge your body language and verbal tone of voice. In text, they turn your comments hostile.

That, and you repeat yourself. A lot.