Open Thread: February 2010, part 2

10 Post author: CronoDAS 16 February 2010 08:29AM

The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (857)

Comment author: Sniffnoy 28 February 2010 11:56:40PM 4 points [-]

Just saw this over at Not Exactly Rocket Science: http://scienceblogs.com/notrocketscience/2010/02/quicker_feedback_for_better_performance.php

Quick summary: They asked a bunch of people to give a 4-minute presentation, had judges assess it, and told each presenter how long it would be before they heard their assessment. Anticipating quicker feedback improved actual performance but made presenters predict worse performance; anticipating slower feedback had the reverse effect.

Comment author: Jack 27 February 2010 08:41:13PM 2 points [-]

This is pretty self-important of me, but I'd just like to warn people here that someone is posting at OB under "Jack" who isn't me, so if anyone is forming a negative opinion of me on the basis of those comments - don't! My future OB comments will be under the name Jack (LW). The recent string of comments about METI is mine, though.

This is what I get for choosing such a common name for my handle.

Apologies to those who have read this whole comment and don't care.

Comment author: AndyWood 27 February 2010 05:19:28AM 4 points [-]

Here's a question that I sure hope someone here knows the answer to:

What do you call it when someone, in an argument, tries to cast two different things as having equal standing, even though they are hardly even comparable? Very common example: in an atheism debate, the believer says "atheism takes just as much faith as religion does!"

It seems like there must be a word for this, but I can't think what it is. ??

Comment author: BenAlbahari 27 February 2010 07:13:47AM 1 point [-]

This is a great example of a "pitch". I've added it just now to the database of pitches:
http://www.takeonit.com/pitch/the_equivalence_pitch.aspx

Comment author: PhilGoetz 27 February 2010 06:33:05AM 2 points [-]
Comment author: Document 27 February 2010 06:25:05AM 2 points [-]

False equivalence?

Comment author: AndyWood 27 February 2010 07:24:57AM 3 points [-]

Aha! I think this one is closest to what I have in mind. Thanks.

It's interesting to me that "false equivalence" doesn't seem to have nearly as much discussion around it (at least, based on a cursory google survey) as most of the other fallacies. I seem to see this used for rhetorical mischief all the time!

Comment author: Eliezer_Yudkowsky 27 February 2010 06:20:29AM 0 points [-]

Closest I know is "tu quoque".

Comment author: AndyWood 27 February 2010 07:55:41AM *  4 points [-]

That is pretty close. If I understand them right, I think the difference is:

Tu Quoque: X is also guilty of Y, (therefore Z).

False Equivalence: (X is also guilty of Y), therefore Z.

where the parentheses indicate the major location of error.

Comment author: AdeleneDawner 26 February 2010 08:03:15PM 0 points [-]

This is thoroughly hypothetical, but if there was going to be an unofficial, more social sister site for LW, what name would you suggest for it?

Comment author: Morendil 26 February 2010 08:08:02PM 3 points [-]

Untrusted Hardware - to serve as a constant reminder.

Comment author: AdeleneDawner 26 February 2010 08:17:53PM 0 points [-]

I like that, though I think there are other variations on the theme that I'd like better. "Faulty Hardware", perhaps.

Comment author: Alicorn 24 February 2010 09:13:42PM 2 points [-]

An inquiry regarding my posting frequency:

While I'm at the SIAI house, I'm trying to orient towards the local priorities so as to be useful. Among the priorities is building community via Less Wrong, specifically by writing posts. Historically, the limiting factor on how much I post has been a desire not to flood the place - if I started posting as fast as I can write up my ideas, I'd get three or four posts out a week with (I think) no discernible decrease in quality. I have the following questions about this course of action:

  1. Will it annoy people? Building community by being annoying seems very unlikely to work.

  2. Will it affect voting behavior noticeably? I rely on my post's karma scores to determine what to do and not do in the future, and SIAI people who decide whether I'm useful enough to keep use it as a rough metric too. I'd rather post one post that gets 40 karma in a week than two that get 20, and so on.

Comment author: PeerInfinity 25 February 2010 05:36:22PM 2 points [-]

one obvious idea that I didn't notice anyone else mention:

Another option is to go ahead and write the posts as fast as you think is optimal, but if you think this is too fast to actually post the stuff you've written, then you can wait a few days after you wrote it before posting.

LW has a handy "drafts" feature that you can use for that.

This also has the advantage that you have more time to improve the article before you post it, but the disadvantage that you may be tempted to spend too much time making minor, unimportant improvements. Another disadvantage is that feedback gets delayed.

Comment author: Alicorn 25 February 2010 05:37:54PM 1 point [-]

If I sit on posts for too long, I start second-guessing myself and often wind up deleting them.

Comment author: Alicorn 24 February 2010 11:09:32PM *  1 point [-]

A related question: If I have a large topic to cover, should I cover it in one post, or split it up along convenient cleavage planes and make it a sequence? (If I make sequences, I think I'll learn my lesson from the last one I tried and write it all before posting anything, so I don't post 2/3 of it and then stop.)

Comment author: Eliezer_Yudkowsky 25 February 2010 01:12:10PM 1 point [-]

Posting 2/3 of a sequence and stopping is fine if people turn out not to be interested. I recommend fast posting and fast feedback.

Comment author: ciphergoth 25 February 2010 12:00:28PM 3 points [-]

I really like the "sequences" approach - it's easier to read and digest a chunk at a time, and it focusses discussion well, too.

Comment author: RobinZ 24 February 2010 11:38:02PM 3 points [-]

Long posts are more offputting than short ones, and individual steps are more likely to be correct than entire theorems - both of these points would suggest posting sequences preferentially.

As for a specific reference on length: thirty-three hundred words sharply focused on a single, vivid subject is pushing the upper limit of what I find comfortable to attack in a single sitting.

Comment author: byrnema 24 February 2010 10:06:20PM *  7 points [-]

As your goal is to build community, I would time new posts based on posting and commenting activity. For example, whenever there is a lull, this would be an excellent time to make a new post. (I noticed over the weekend there were some times when 45 minutes would pass between subsequent comments and wished for a new post to jazz things up.)

On the other hand, if there are several new posts already, then it would be nice to wait until their activity has waned a bit.

I think that it is optimal to have 1 or 2 posts 'going on' at a time. I prefer the second post when one of them is technical and/or of focused interest to a smaller subset of Less Wrongers.

(But otherwise no limit on the rate of posts.)

Comment author: Eliezer_Yudkowsky 24 February 2010 09:43:05PM *  5 points [-]

I'd say damn the torpedoes, full speed ahead. If people are annoyed, let them downvote. If posts start getting downvoted, slow down.

Your posts have generally been voted up. If now is the golden moment of time where you can get everything said, then for the love of Cthulhu, say it now!

Comment author: Alicorn 24 February 2010 10:44:36PM *  1 point [-]

I don't anticipate being so obnoxiously prolific that people collectively start voting my posts negative such that they stay that way. But people already sometimes register individual downvotes on posts that I make, and I don't want that to happen on a larger fraction of posts due to increased frequency, because I can't reliably distinguish between "you must have had an off day, this post is not up to scratch" and "please, please, please shut up".

Comment author: wedrifid 25 February 2010 02:28:47AM 2 points [-]

Post away.

The best signal to anticipate from the audience in this case is "how many votes in total do I expect if I post at full speed vs. how many I expect if I post less frequently and so end up writing fewer posts overall". Increased frequency may earn you fewer votes per post: frequent posts from the same author may be less desired, and if you post less you may be posting only your best material. But if the net expectation is higher for more prolific posting, that can be interpreted as "the lesswrong.com community would prefer you to post faster than a spambot".

Even if you expected less total karma from more posts, I wouldn't say that means you ought not post more. So long as your posts are still breaking the 10 mark, we clearly don't mind your contribution. There are probably benefits to you from posting beyond maximising the benefit to Less Wrong - I find writing helps clarify my thinking, for example. So as long as you are still being received somewhat positively, feel free to type away.

Comment author: ciphergoth 24 February 2010 10:45:56PM 1 point [-]

Post as much as you like, if you think it's good quality; I promise to say if I start to think slowing down would be a good idea.

Comment author: Eliezer_Yudkowsky 24 February 2010 10:45:30PM 1 point [-]

I don't mean "downvoted negative" just "downvoted relative to other posters".

Comment author: thomblake 24 February 2010 09:28:01PM 0 points [-]

It seems to me that any strategy that does not end up with three posts by you on "Recent Posts" seems fine, as a rule of thumb.

Comment author: Alicorn 24 February 2010 09:39:15PM *  1 point [-]

Respondents, please upvote thomblake's comment if this seems like an acceptable rule of thumb.

Edit: And likewise for other things people say if those seem like good ideas.

Comment author: Eliezer_Yudkowsky 24 February 2010 07:52:13PM 2 points [-]

http://www.guardian.co.uk/global/2010/feb/23/flat-earth-society

Yeah, so... I'm betting if we could hook this guy up to a perfect lie detector, it would turn out to be a conscious scam. Or am I still underestimating human insanity by that much?

Comment author: MichaelHoward 28 February 2010 11:32:25AM 1 point [-]

That you see this as a particularly extreme case of insanity (even in an apparently intelligent, lucid, fully-functioning person) is far more shocking to me than this guy.

Maybe I've just seen too many Louis Theroux documentaries.

Comment author: Unknowns 24 February 2010 08:18:34PM 0 points [-]

At any rate, even if it isn't quite a conscious scam, it sounds a lot like belief in belief, i.e. he may be telling himself that he believes the earth is flat because there is no conclusive proof that it isn't, but he secretly knows quite well that the evidence that it isn't is as close to conclusive as evidence for anything gets.

Comment author: thomblake 24 February 2010 08:04:48PM 3 points [-]

Or am I still underestimating human insanity by that much?

Yes.

People dismiss the scientific evidence weighing similarly against them on many issues in the news every day. There's nothing spectacular about finding someone who does it regarding the Earth being flat, especially given that an entire society has existed for hundreds of years to promote the idea.

Comment author: Cyan 24 February 2010 03:40:04PM 3 points [-]

The prosecutor's fallacy is aptly named:

Barlow and her fellow counsel, Kwixuan Maloof, were barred from mentioning that Puckett had been identified through a cold hit and from introducing the statistic on the one-in-three likelihood of a coincidental database match in his case—a figure the judge dismissed as "essentially irrelevant."

Comment author: thomblake 24 February 2010 02:12:11PM 0 points [-]

I've realized that having my and others' karma listed feels very similar to when Gemstone III started listing everyone's experience level.

The question remains: how much karma to level up?

Comment author: Kevin 24 February 2010 10:42:23AM *  0 points [-]

Montana's No Speed Limit Safety Paradox

http://news.ycombinator.com/item?id=1146684

Comment author: Kevin 24 February 2010 10:41:26AM 0 points [-]
Comment author: Morendil 24 February 2010 10:44:38AM 1 point [-]

What makes this relevant to LW participants?

Comment author: Kevin 24 February 2010 11:00:39AM *  -2 points [-]

Maybe an Italian court could find that CEV is a violation of local privacy laws.

Also, it could serve as general notice to keep your internet related businesses out of Italy and the Italian court system.

Comment author: Kevin 24 February 2010 09:45:20AM 0 points [-]
Comment author: bgrah449 24 February 2010 03:03:04AM 1 point [-]

I just failed the Wason selection task. Does anyone know any other similarly devilish problems?

Comment author: wedrifid 24 February 2010 04:25:32AM 0 points [-]

Fun task. I'll second the request.

Comment author: Cyan 24 February 2010 04:12:01AM 3 points [-]

Here's a classic:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

Answer here.
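The trap here is the conjunction fallacy: for any events A and B, P(A and B) can never exceed P(A), however representative the description makes B sound. A minimal sketch with made-up numbers (the probabilities below are pure illustration, not data about the experiment):

```python
# Hypothetical marginal probabilities, chosen only to illustrate the rule:
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.8    # P(feminist | bank teller), deliberately high

# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A), since P(B|A) <= 1.
p_teller_and_feminist = p_teller * p_feminist_given_teller

assert p_teller_and_feminist <= p_teller
```

No matter how close to 1 the conditional probability is pushed, the conjunction stays at or below the single conjunct, so option 1 is never less probable than option 2.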

Comment author: ciphergoth 23 February 2010 07:59:55PM 3 points [-]

One thing that I got from the Sequences is that you can't just not assign a probability to an event - I think of this as a core insight of Bayesian rationality. I seem to remember an article in the Sequences about this where Eliezer describes a conversation in which he is challenged to assign a probability to the number of leaves on a particular tree, or the surname of the person walking past the window. But I can't find this article now - can anyone point me to it? Thanks!

Comment author: Wei_Dai 23 February 2010 08:59:10PM 0 points [-]

This may be related to the recent post Study: Making decisions makes you tired. It seems plausible that we don't assign probabilities to events until we have to, in order to make a decision, and that's why making decisions is tiring.

Comment author: Vladimir_Nesov 23 February 2010 08:14:39PM *  4 points [-]
Comment author: ciphergoth 23 February 2010 09:43:02PM 1 point [-]

That's exactly it - thanks!

Comment author: DanArmak 23 February 2010 06:20:48PM *  3 points [-]

How do people decide what comments to upvote? I see two kinds of possible strategies:

  1. Use my approval level of the comment to decide how to vote (up, down or neutral). Ignore other people's votes on this comment.
  2. Use my approval level to decide what total voting score to give the comment. Vote up or down as needed to move towards that target.

My own initial approach belonged to the first class. However, looking at votes on my own comments, I get the impression most people use the second approach. I haven't checked this with enough data to be really certain, so would value more opinions & data.

Here's what I found: I summed the votes from the last 4 pages of my own comments (skipping the most recent page because recent comments may yet be voted on):

  • Score <0: 2
  • Score =0: 36
  • Score =1: 39
  • Score =2: 14
  • Score =3: 5
  • Score >3: 6

35% of my comments are voted 0, and 52% are voted 1 or 2. There are significantly more than 1 or 2 people participating in the same threads as me. It is not likely that for each of these comments, just one or two people happened to like it, and the rest didn't. It is even less likely that for each of these comments, up- and down-votes balanced so as to leave +1 or +2.

So it's probable that many people use the second approach: they see a comment, think "that's nice, deserves +1 but no more", and then if it's already at +1, they don't vote.

How do you vote? And what do you see as the goal of the voting process?
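This back-of-the-envelope argument can be checked against a simple binomial model. All numbers below are assumptions for illustration only: suppose n readers see a comment and, under strategy 1, each independently upvotes it with probability p. The chance of a raw score of exactly +1 or +2 is then:

```python
from math import comb

def score_dist(n, p):
    """Binomial model: P(exactly k of n independent readers upvote)."""
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

# Hypothetical parameters: 10 readers, each upvoting a decent comment
# with probability 0.3 under independent (strategy 1) voting.
dist = score_dist(10, 0.3)
p_low = dist[1] + dist[2]   # chance the score lands on exactly +1 or +2
```

Under these assumed numbers, a +1 or +2 score happens roughly a third of the time, so a 52% observed rate is suggestive but not decisive evidence for strategy 2; the conclusion is sensitive to the guessed n and p.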

Comment author: RobinZ 23 February 2010 10:18:28PM *  0 points [-]

It is worth noting that people have explicitly claimed to be following strategy 2 here. Edit: This is far from the only example; just the one I found by searching for "upvoted to".

It would be also interesting to check the difference between comments on quotes threads and comments on substantive posts - at least one person has proposed that quotations are disproportionately subject to strategy 1 voting over strategy 2.

Comment author: GuySrinivasan 23 February 2010 06:54:01PM 2 points [-]

I self-identify as using the first one, with a caveat.

The second is obviously awful for communicating any sort of information, given that only the sum of votes is displayed rather than separate up and down totals. It is also order-dependent, and often means you'll want to change your vote later based purely on what others think of the post.

My "strategy" is to vote up and down based on whether I'd have wanted others with more insight than me to vote to bring my attention to or away from a comment, unless I feel I have special insight, in which case it's based on whether I want to bring others' attention to or away from a comment.

This is because I see the goal of the voting process that readers' independent opinions on how much a comment is worth readers' attention be aggregated and used to bring readers' attention to or away from a comment. As a side effect, the author of a comment can use the aggregated score to determine whether her readers felt the comment was worth their collective attention.

Furthermore since each reader's input comes in distinct chunks of exactly -1, 0, or +1, it's wildly unlikely that voting very often results in the best aggregation: instead I leave a comment alone unless I feel it was(is) significantly worth or not worth my(your) attention.

The caveat: there is a selection effect in which comments I vote on, since my attention will be drawn away from comments with very negative karma. There is also undoubtedly an unconscious bias away from voting up a comment with very high karma: since I perceive the goal to be to shift attention, once a comment has very high karma I know it's going to attract attention so my upvote is in fact worth fewer attention-shift units. But I haven't yet consciously noticed that kick in until about +10 or so.

Comment author: Morendil 23 February 2010 06:35:55PM 1 point [-]

At home I use the Anti-Kibitzer, which enforces 1. I've been on vacation for a couple days and noticed the temptation to use 2. Gave in on one occasion, I'm afraid. On balance I'll stick to 1, as 2 seems too vulnerable to information cascades.

Comment author: Psy-Kosh 22 February 2010 05:52:26AM *  3 points [-]

Am I/are we assholes? I posted a link to the frequentist stats case study to reddit:

The only commenter seems to have concluded from us that Bayesians are assholes.

Is it just that commenter, or are we really that obnoxious? (now that I think about it, I think I've actually seen someone else note something similar about Bayesians.) So... have we gone into happy death spiral "we get bonus points for acting extra obnoxious about those that are not us"?

Comment author: [deleted] 22 February 2010 03:23:57AM 8 points [-]

The Believable Bible

This post arose when I was pondering the Bible and how easy it is to justify. In the process of writing it, I think I've answered the question for myself. Here it is anyway, for the sake of discussion.

Suppose that there's a world very much like this one, except that it doesn't have the religions we know. Instead, there's a book, titled The Omega-Delta Project, that has been around in its current form for hundreds of years. This is known because a hundreds-of-years-old copy of it happens to exist; it has been carefully and precisely compared to other copies of the book, and they're all identical. It would be unreasonable, given the evidence, to suspect that it had been changed recently. This book is notable because it happens to be very well-written and interesting, and scholars agree it's much better than anything Shakespeare ever wrote.

This book also happens to contain 2,000 prophecies. 500 of them are very precise predictions of things that will happen in the year 2011; none of these prophecies could possibly be self-fulfilling, because they're all things that the human race could not bring about voluntarily (e.g. the discovery of a particular artifact, or the birth of a child under very specific circumstances). All of these 500 prophecies are relatively mundane, everyday sorts of things. The remaining 1,500 prophecies are predictions of things that will happen in the year 2021; unlike the first 500, these prophecies predict Book-of-Revelations-esque, magical things that could never happen in the world as we know it, essentially consisting of some sort of supreme being revealing that the world is actually entirely different from how we thought it was.

The year 2011 comes, and every single one of the 500 prophecies comes true. What is the probability that every single one of the remaining 1,500 prophecies will also come true?

Comment author: Jack 22 February 2010 03:55:05AM 1 point [-]

For the two examples of the mundane prophecies that you gave it seems possible some on-going conspiracy could have made them true... but it sounds like you're trying to rule that out.

Comment author: FAWS 22 February 2010 04:04:14AM 1 point [-]

I understood those to be negative examples, in that the actual prophecies don't share that characteristic with those examples.

Comment author: [deleted] 22 February 2010 04:33:52AM 0 points [-]

I did mean those to be positive examples. There's no way we can guarantee that we'll discover an ancient Greek goblet that says "I love this goblet!" on March 22, 2011. There's also no way we can guarantee that a woman born on October 15, 1985 at 5 in the morning in room 203 of a certain hospital will have a baby weighing 8 pounds and 6 ounces on January 8, 2011 at 6 in the afternoon in room 117 of a certain other hospital.

Comment author: Document 26 February 2010 01:25:47PM 1 point [-]

That's not clear to me, but I acknowledge that it doesn't affect the original question.

Comment author: Eliezer_Yudkowsky 22 February 2010 03:43:33AM 6 points [-]

Pretty darned high, because at this point we already know that the world doesn't work the way we think it did.

Comment author: Document 26 February 2010 01:02:35PM 0 points [-]

But not necessarily over .99, since the prophecies could have been altered by another author sometime before the beginning of modern records.

Comment author: [deleted] 23 February 2010 06:59:48PM 1 point [-]

So it sounds like even though there are 2,000 separate prophecies, the probability of every prophecy coming true is much greater than 2^(-2000).

Comment author: Jack 23 February 2010 07:24:50PM 0 points [-]

Maybe you just need to explain more but I don't see that.

Comment author: [deleted] 23 February 2010 08:14:57PM *  4 points [-]

Let P(2,000) be the probability that all 2,000 prophecies come true, and P(500) be the probability that the initial 500 all come true. Suppose P(2,000) = 2^(-2000) and P(500) = 2^(-500). We know that P(500|2,000) = 1, so P(2,000|500) = P(2,000)*P(500|2,000)/P(500) = 2^(-2000)*1/2^(-500) = 2^(-1500). A probability of 2^(-1500) is not pretty darned high, so either P(2,000) is much greater than we supposed, or P(500) is much lower than we supposed. The latter is counterintuitive; one wouldn't expect the Believable Bible's existence to be strong evidence against the first 500 prophecies.
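The arithmetic in this comment can be restated mechanically; the sketch below just re-runs the comment's own hypothetical numbers with exact rational arithmetic:

```python
from fractions import Fraction

# Hypothetical priors from the comment above:
# P(all 2000 true) = 2^-2000, P(first 500 true) = 2^-500.
p_2000 = Fraction(1, 2**2000)
p_500 = Fraction(1, 2**500)
p_500_given_2000 = Fraction(1)   # the 500 are a subset of the 2000

# Bayes' theorem: P(2000 | 500) = P(2000) * P(500 | 2000) / P(500)
p_2000_given_500 = p_2000 * p_500_given_2000 / p_500

assert p_2000_given_500 == Fraction(1, 2**1500)
```

Since 2^-1500 is nowhere near "pretty darned high", at least one of the two assumed priors must be badly wrong, which is the comment's point.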

Comment author: Jack 23 February 2010 09:09:52PM *  -1 points [-]

Edit: Yeah, I was being dumb.

Comment author: Nick_Tarleton 23 February 2010 09:26:57PM 0 points [-]

Where A = "events occur" and B = "events are predicted", you're saying P(A and B) < P(A). Warrigal is saying it would be counterintuitive if P(A|B) < P(B).

Comment author: Unknowns 23 February 2010 08:31:15PM *  1 point [-]

And this doesn't depend on prophecies in particular. Any claims made by the religion will do. For example, the same sort of argument would show that according to our subjective probabilities, all the various claims of a religion should be tightly intertwined. Suppose (admittedly an extremely difficult supposition) we discovered it to be a fact that 75 million years ago, an alien named Xeno brought billions of his fellow aliens to earth and killed them with hydrogen bombs. Our subjective probability that Scientology is a true religion would immediately jump (relatively) high. So one's prior for the truth of Scientology can't be anywhere near as low as one would think if one simply assigned an exponentially low probability based on the complexity of the religion. Likewise, for very similar reasons, komponisto's claim elsewhere that Christianity is less likely to be true than that a statue would move its hand by quantum mechanical chance events, is simply ridiculous.

Comment author: Nick_Tarleton 23 February 2010 08:55:42PM *  1 point [-]

So one's prior for the truth of Scientology can't be anywhere near as low as one would think if one simply assigned an exponentially low probability based on the complexity of the religion.

If nobody had ever proposed Scientology, though, learning Xenu existed wouldn't increase our probabilities for most other claims that happen to be Scientological. So it seems to me that our prior can be that low (to the extent that Scientological claims are naturally independent of each other), but our posterior conditioning on Scientology having been proposed can't.

Comment author: Unknowns 23 February 2010 08:59:20PM *  0 points [-]

Right, because that "Scientology is proposed" has itself an extremely low prior, namely in proportion to the complexity of the claim.

Comment author: Nick_Tarleton 23 February 2010 09:23:11PM 1 point [-]

In proportion to the complexity of the claim given that humans exist, which is much lower (=> higher prior) than its complexity in a simple encoding, since Scientology is the sort of thing that a human would be likely to propose.

Comment author: Vladimir_Nesov 23 February 2010 08:18:59PM 1 point [-]

Use \* to get stars * instead of italics.

Comment author: [deleted] 24 February 2010 12:36:51AM 0 points [-]

Oops! It seems I assumed everything would come out right instead of checking after I posted.

Comment author: FAWS 22 February 2010 04:09:27AM 0 points [-]

Could be simple time travel, though. AFAICT time travel isn't per se incompatible with the way we think the world works. Not to the degree sufficiently fantastic prophecies might be at least.

Comment author: Document 26 February 2010 01:23:19PM *  1 point [-]

If someone just observed events in 2011 and planted a book describing them in 1200, the 2011 resulting from the history where the book existed would be different from the 2011 he observed.

Comment author: ciphergoth 26 February 2010 02:10:56PM *  2 points [-]

Depends if it's type one time travel. Fictional examples: Twelve Monkeys, The Hundred Light-Year Diary.

Comment author: thomblake 26 February 2010 03:19:55PM 3 points [-]

I think the important bit here is that even if you could just "play time backwards" and watch again, there's no reason to think you'd end up in the same Everett branch the next time around.

Comment author: Document 26 February 2010 02:25:40PM 1 point [-]

Insofar as I understand that page, that would mean that the world worked even less the way we thought it did.

Comment author: FAWS 26 February 2010 11:16:30PM 1 point [-]

Makes perfect sense to me if you assume a single time-line. (This might be a big assumption, but probably less big than the truth of sufficiently strange prophecies.) You can think of this time line as having stabilized after a very long sequence of attempts at backward time travel under slightly different conditions. Any attempt at backward time travel that changes its initial conditions means a different or no attempt at time travel happens instead. Eventually you end with a time-line where all attempts at backward time travel exactly reproduce their initial conditions. We know that we live in that stabilized time-line because we exist (though the details of this timeline depend on how people who don't exist, but would have thought they exist for the same reasons we think we exist, would have acted, had they existed).

Comment author: FAWS 26 February 2010 11:48:16PM *  2 points [-]

By the way, that sort of time-travel gives rise to Newcomb-like problems:

Suppose you have access to a time machine and want to cheat on a really important exam (or make a fortune on the stock market, or save the world, or whatever - the cheating example is the simplest). You decide to send yourself, at a particular time, a list of the questions after taking the exam. If you don't find the list at the time you decided on, you know that somehow your attempt at sending the list failed (you changed your mind, the machine exploded in spectacular fashion, you were caught attempting to send the list ...). But if you now change your mind and don't try to send the list, there never was any possibility of receiving the list in the first place! The only way to get the list is for you to try to send it even if you already know you will fail, so that's what you have to do if you really want to cheat. And if you really would do that, and only then, you will probably get the list at the specified time and never have to do it without knowing you'll succeed - but only if your pre-commitment is strong enough to follow through even in the face of known failure.

And if you would send yourself back useful information at other times even without having either received the information yourself or pre-commited to sending that particular information you will probably receive that sort of information.

Comment author: FAWS 27 February 2010 01:11:46PM 1 point [-]

Why was this post voted back down to 0 after having been at 2? Newcomb-like problems are on-topic for this site, and I would think having examples of such problems in a scenario not specifically constructed for them is a good thing. If it was because time travel is off topic, wouldn't the more prudent thing have been voting down the parent? The same goes if the time travel mechanics are considered incoherent (though I'd be really interested in learning why). If you think this post doesn't actually describe anything Newcomb-like, I would like to know why. Maybe I misunderstood the point of earlier examples here, or maybe I didn't explain things sufficiently? Or is it just that the post was written badly? I'm not really happy with it, but I don't see how I could have made it much clearer.

Comment author: wedrifid 27 February 2010 01:14:51PM 1 point [-]

It's an interesting point. It actually came up in the most recent Artemis Fowl novel, when he managed to 'precommit' himself out of a locked trunk in a car. :)

Comment author: Sticky 22 February 2010 11:30:38PM *  2 points [-]

Anyone who can travel through time can mount a pretty impressive apocalypse and announce whatever it is about the nature of reality he cares to. He might even be telling the truth.

Comment author: byrnema 21 February 2010 05:18:30AM *  0 points [-]

This comment is a response to the claim that Gould's separate magisteria idea is not conceptually coherent. While I don't view reality parsed this way, I thought I would make an effort to establish its coherence and self-consistency (and relevance under certain conditions).

In this comment, by dualism, I'll mean the world view of two separate magisteria; one for science and one for faith. There are other, related meanings of dualism but I do not intend them here.

Physical materialism assumes monism -- there is a single, external reality that we have a limited knowledge and awareness of. Awareness and knowledge of this reality come through our senses, by interaction with reality. Dualism is rejected with a straight-forward argument: you cannot have awareness of something without interaction with it. If you interact with it, then it is part of the one reality we were already talking about.

Dualists persist: The empirical reality X that physical materialists recognize is only part of everything that matters. There is also a dual reality -- X', which is in some way independent of (or outside of) X. The rules in X' are different than the rules in X. For example, epistemology (and sometimes even logic) appears to work differently, or less directly.

Some immediate questions in response to dualism are:

(1) If we are located in X, how does interaction with X' work?

(2) Is it actually coherent to think of some component X' being outside of X? Why don't we just have X expand to absorb it?

Relation to the Simulation Hypothesis

An immediate, possibly too-quick answer to the second question is 'yes, dualism is coherent because it is structurally isomorphic to the simulation hypothesis'. If we were in a simulation, X and X' would be a natural way to parse reality. X would be the simulation and X' would be the reality outside the simulation. Clearly, the rules could be different within X compared to within X'. People simulated in X could deduce the existence of X' in a variety ways:

(a) by observing the incompleteness of X (for example, the inexplicable deus ex machina appearance of random numbers)

(b) by observing temporal, spatial or logical inconsistencies in X

(c) Privileged information given to them directly about X', built into the simulation in ways that don't need to be consistent with other rules in X

While dualists aren't claiming that empirical reality is a simulation, by analogy we could consider that (a), (b) or (c) would be cause for deducing X' and having a dualistic world view. I will visit each of these in reverse order.

Re: (c) Privileged information given to them directly about X', built into the simulation in ways that don't need to be consistent with other rules in X

Many (most?) religions are based on elements of divine revelation; special ways that God has of communicating directly to us in some way separate and independent of ordinary empirical experience. Being saved, speaking in tongues, visions, etc. I've heard it argued here on LW that this sort of experience would be the most rational reason for theism; they might be delusional but at least they are basing their beliefs on empirical sense experience. They would be justified in having a dualistic world view if they perceived their visions as distinct from (for example, having different rules than or existing in a different plane than) empirical reality. However, many theists (including myself) do not claim experience of divine revelation.

Re: (b) by observing temporal, spatial or logical inconsistencies in X

I think that in the past, this was a big reason for belief in the spiritual realm. However, the success of the scientific world view has shot this completely out of the water. No one believes that X is inconsistent; while there are 'gaps' in our knowledge, we have limitless faith in science to resolve everything in X that can be resolved, one way or another. Outside X is another matter of course, which brings us to (a). I proceed to (a) with the counter-argument to (b) firmly in hand: reality is explainable, and whether we know the rules or not, there are rules for the phenomena in X, and rules for the rules in X, and, if not rules, then a necessary logical deduction that can be made.

Re: (a) by observing the incompleteness of X

Can everything in X, in theory, be explained within X? If you believe this, then you have no reason to be dis-satisfied with monism. (It happens that I am a monist.) But what if we could point to just one thing that could not be explained in X? Just one thing that could not even be explained in theory because to do so would result in some contradiction in X? Would that give us cause to deduce X'?

Example 1: True Randomness

There are many processes that are approximated as random. The diffusion of a dye in a liquid, the search path of an amoeba looking for food, the collapse of a symmetric structure to one direction or another. However, all of these processes are considered deterministic -- if we knew all the relevant states of the system and had sufficient computing power we could accurately predict the outcome via simulation; no random numbers needed.
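As a toy illustration of that determinism, a seeded pseudorandom walk (a crude stand-in for dye diffusion, not a physical model) looks random but is fully fixed by its initial state:

```python
import random

def diffuse(seed, steps=1000):
    """Toy 1-D 'diffusion': a particle takes random-looking steps,
    but the outcome is entirely determined by the initial state (the seed)."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))
    return position

# Knowing the relevant state of the system (here, just the seed),
# we can predict the 'random' outcome exactly, every time.
assert diffuse(seed=42) == diffuse(seed=42)
```

The hypothetical `diffuse` function is only an analogy for the claim above: apparent randomness in such processes reflects our ignorance of the state, not the absence of a mechanism.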

Nevertheless, there are some processes that appear as though they could be truly "random". That is, occurring spontaneously, independent of any mechanism determining the outcome. For example, the 'spontaneous' creation of particles in a vacuum, or any other phenomenon described in an advanced physical journal with 'spontaneous' in the title. I think that if you are a self-consistent physical materialist, you should deny the possibility of random or spontaneous events. I do: I think there must be a mechanism for everything, whether we have access to knowledge of it or not.

To the best of my knowledge, our understanding of these 'spontaneous' phenomena leaves room for mechanical explanations. Maybe this and that are involved, we just don't know.

Yet quantum mechanics is beginning to reveal ways in which a scientific theory could predict the inconsistency of non-randomness. Bell's theorem is close, proving that information in some cases is exchanged without a local mechanism. Fortunately, there is still room for other interpretations, including non-local mechanisms and many-worlds.

Example 2: Objective Value

Of any kind, including objective morality. This remains an unsolved problem in physical materialism, if you insist upon it, because its existence seems dependent upon some authority (e.g., a book) that we have no evidence of in X. If a person believes in objective morality a priori, they may be a dualist, since they deduce the existence of such an authority, embedded within X but distinct from X in that it cannot be directly observed or interacted with. (Its existence is only inferred.)

Example 3: Consciousness

Another unsolved problem in physical materialism. I'm not familiar with them, but I understand that some dualists have arguments for why consciousness could not be explained within X.


My Position

It is often logistically difficult to defend a position you don't represent. The reason for this is that criticisms against the position will be directed at you personally, even though you do not hold the position, and then you might further be tempted to continue defending the position with counter-arguments, which further confuses your identity. I am sympathetic to the dualist worldview as coherent and rational, but not globally scientific. I greatly prefer the physical materialist, scientific worldview. I have a very strong faith that everything in X can be explained within X; this faith is so strong that I consider it theistic, and call myself a theist.

Comment author: SilasBarta 22 February 2010 10:40:12PM 0 points [-]

This is thorough enough and long enough to merit posting as a top-level, IMO.

Comment author: Jack 21 February 2010 11:08:38AM 1 point [-]

Re: Your definitions.

You appear to be conflating ontological views (physicalism and dualism usually refer to these sorts of views, views about what kinds of things exist) with epistemological views. There is nothing in the definition of physicalism that requires us to have knowledge of the external world and nothing in dualism that requires us to give up rationality or science. You can be a physicalist and still think someone is deceiving your senses, for example. Also, this might just be me, but "materialism" should be jettisoned as outdated. "Materialism" means that you believe everything that exists is matter. But there is no reason to think that word is even meaningful in our fundamental physics. Thus I prefer "physicalism," the belief that what exists is what physics tells us exists.

Re: the relation to the simulation hypothesis

If you haven't, you ought to read "Brains in a Vat" by Hilary Putnam. It's just twenty pages or so. He argues that we cannot claim to be brains in vats (or in any kind of extreme skeptical scenario) because our language does not have the ability to refer to vats and computers outside our level of reality. When a brain in a vat says "vat" he is referring to some feature of the computer program that is being run for his brain. Thus he cannot refer to what we call the vat (the thing that holds his brain). I can explain further if that isn't clear. But one thing I got from the article is that we can understand the bizarre, muddled writings of substance dualists as trying to describe the vat! If you don't have any language that lets you refer to the vats, you're going to sound pretty confusing. I find this pretty funny because the way Descartes supposedly gets out of extreme skepticism is partly by trying to prove substance dualism! Irony!

Anyway, I'm a little confused by the invocation of the simulation hypothesis, because while I'm willing to look at it as a kind of metaphysical dualist hypothesis, I can't see how our tools for learning the answer to this question would be in any way different from our general scientific tools. Metaphysics, such as we can say anything at all about it, is just an extension of science.

(a) by observing the incompleteness of X (for example, the inexplicable deus ex machina appearance of random numbers)

(b) by observing temporal, spatial or logical inconsistencies in X

(c) Privileged information given to them directly about X', built into the simulation in ways that don't need to be consistent with other rules in X

Why not just assume these were features of X to begin with? If I see a temporal, spatial or logical inconsistency I'm going to revise my understanding of space, time and logic in X. Not posit X'.

But what if we could point to just one thing that could not be explained in X? Just one thing that could not even be explained in theory because to do so would result in some contradiction in X?

We would revise our theory of X to remove the contradiction. I know you know this happens all the time in science.

I'm having a hard time dealing with the rest given the conflation between epistemology and ontology. Yes, if there are properties (like value and consciousness) that cannot be reduced to the fundamental entities of physics, then physicalism is wrong. However, it does not follow that Bayesianism is wrong, that empiricism is wrong or that the scientific method is invalid in certain magisteria.

Comment author: byrnema 22 February 2010 10:37:21PM 0 points [-]

I can't see how our tools for learning the answer to this question would be in anyway different from our general scientific tools. Metaphysics, such as we can say anything at all about it, is just an extension of science.

Depending upon what you mean by 'science', this statement could range from trivially true to ... not true.

If by science you mean 'ways of knowing', then it is true; metaphysics is just an extension of science. However, scientific principles we've learned in X don't necessarily apply to X'. The rules in X' could be very strange, and not logical in physically logical ways. (My opinion is that they still need to be mathematically logical.)

Why not just assume these were features of X to begin with? If I see a temporal, spatial or logical inconsistency I'm going to revise my understanding of space, time and logic in X. Not posit X'.

It has to be an inconsistency that is not resolvable in X.

Comment author: Jack 22 February 2010 11:43:32PM 0 points [-]

Depending upon what you mean by 'science', this statement could range from trivially true to ... not true.

I mean scientific epistemology, of which I take Bayesian epistemology to be an idealized form. We update our probability distributions for all logically consistent hypotheses based on predictive accuracy, capacity, parsimony and some other pragmatic tie-breaker criteria. This is the same formula we should apply to metaphysics. However, the nature of the beast is that most of the work that should be done in metaphysics involves clearing the way for physics, biology, chemistry, psychology etc., not advancing a view with particular predictions that should be tested. Basically we're asking: what is a good way to think about the world?
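A minimal sketch of that kind of updating, with made-up numbers for two rival hypotheses (the hypothesis names and probabilities are purely illustrative):

```python
def bayes_update(priors, likelihoods):
    """Update a discrete probability distribution over hypotheses,
    given the likelihood each hypothesis assigns to the observed evidence:
    posterior(h) proportional to prior(h) * likelihood(h)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypothetical rival hypotheses; the evidence fits H1 better.
priors = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.8, "H2": 0.2}
posterior = bayes_update(priors, likelihoods)
print(posterior)  # H1's probability rises from 0.5 to 0.8
```

The tie-breaker criteria mentioned above (parsimony, capacity, etc.) live outside this formula, in how the hypothesis space and priors are chosen.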

Now it could be that there is some place, domain or mode where physicalist metaphysics is bad, counterproductive, unexplanatory etc. Then it makes sense to try to give an account of metaphysics that makes sense of this place, domain or mode while not losing the advantages physicalism provides elsewhere. A kludgey way of doing this is just to claim that there are different 'magisteria': one where physics defines our most basic ontology and another which is better described with some other theory. This other theory could be surprising and strange. But we still determine what that other theory looks like based on our scientific epistemology, and the fact that we are using two different theories needs to be justified by our scientific epistemology.

It has to be an inconsistency that is not resolvable in X.

I have a lot of trouble imagining how this could happen. Our physical concepts are incredibly flexible. Would asking for an example be insane of me?

Comment author: byrnema 23 February 2010 12:46:33AM *  0 points [-]

Nevermind. I got part (b) and part (c) confused. Example of temporal, spatial or logical inconsistencies (possible but not actual) forthcoming.


I have a lot of trouble imagining how this could happen. Our physical concepts are incredibly flexible. Would asking for an example be insane of me?

I gave 3. (The existence of truly random phenomena, objective value, or a dual component to consciousness would all be inconsistent with X.) Did you even read my comment? I realize it was really long...

Comment author: byrnema 23 February 2010 04:24:15AM 0 points [-]

First example:

I leave the keys to my office on the counter, and realize this when I get to work. Damn, I need my keys! Maybe I left them in my car. Phew, there they are. I get home after work and there are my keys on the counter, just where I left them. So how did I get in my office? Well, shrug, I did.

More whimsical example:

Your name is Mario and it's your job to save the princess. You've got 323 coins and then you see: a black pixel.

Comment author: byrnema 22 February 2010 10:24:40PM *  0 points [-]

You appear to be conflating ontological views (physicalism and dualism usually refer to these sorts of views, views about what kinds of things exist) with epistemological views.

I'm not surprised I am doing this, since my intention is to compare world views, which include ontological and epistemological views together. Is this a big deal?

My writing style must have been confusing, because you seem to be systematically misinterpreting my use of clauses. Anyway, three things that I didn't intend to write or even imply:

  • There is something in the definition of physicalism that requires us to have knowledge of the external world.

  • dualism requires us to give up rationality or science

  • it follows that Bayesianism is wrong, that empiricism is wrong or that the scientific method is invalid in certain magisteria. (The coherence of one world view doesn't negate the others, and, anyway, I haven't mentioned anything about a dichotomy between Bayesianism and dualism.)

I don't know if this helps any. If it's a mess you can just drop this comment, I'll be leaving other ones.

Comment author: Jack 22 February 2010 11:04:38PM *  1 point [-]

If you didn't imply

dualism requires us to give up rationality or science

and that

it follows that Bayesianism is wrong, that empiricism is wrong or that the scientific method is invalid in certain magisteria. (the coherence of one world view doesn't negate the others, and, anyway I haven't mentioned anything about a dichotomy between Bayesianism and dualism).

Then I am really confused by what you define as the dualism thesis in the original comment:

In this comment, by dualism, I'll mean the world view of two separate magisteria; one for science and one for faith.

...

I'm not surprised I am doing this, since my intention is to compare world views, which include ontological and epistemological views together. Is this a big deal?

Well I'm a physicalist but I'm a physicalist because I think that is the right view to hold given the evidence and my epistemology. So I'd have no problem at all adjusting my metaphysical view based on new evidence. But when a metaphysical view says that my epistemology ceases to apply in certain domains I get really cranky and confused. Maybe that isn't what you're suggesting or maybe I don't hold one of the worldviews you are comparing. If I were going to describe my world view I would probably stop at my epistemology and only if prompted would I continue with an ontology.

Comment author: byrnema 23 February 2010 12:35:10AM *  0 points [-]

Well, my whole comment is just about whether dualism (as the two-separate-magisteria hypothesis) is coherent. Does that help?

Coherent doesn't mean correct, and certainly doesn't mean actual.

If you didn't imply dualism [implies negative things about monism] then I am really confused by what you define as the dualism thesis.

Again, I'm trying to determine if dualism is logically possible, not make any of the claims that dualism would make. Yet, what would be relevant is this question: does dualism make any implications that are logically impossible?

Comment author: Jack 23 February 2010 02:54:56AM *  0 points [-]

If you didn't imply dualism [implies negative things about monism] then I am really confused by what you define as the dualism thesis.

Again, I'm trying to determine if dualism is logically possible, not make any of the claims that dualism would make. Yet, what would be relevant is this question: does dualism make any implications that are logically impossible?

No, my problem wasn't with the fact that you didn't mean to imply negative things about monism. My confusion arises from the fact that your definition of dualism says that there is some domain/space/mode i.e. magisterium which we do not learn about through science. Specifically you say "two separate magisteria; one for science and one for faith." The obvious interpretation of this is that dualism implies a limit on science. It seems to imply that Bayesianism or empiricism or the scientific method or some other aspect of "SCIENCE" is not valid in the "faith" magisterium. But you say you are not implying this. Thus my confusion.

Now I'm actually okay with magisteria where science isn't involved but these aren't domains where the term "propositional knowledge" meaningfully applies. Like art or a game. Gould appeared to suggest that there are religious facts (in a non-anthropological sense) which I do think is nonsense. But I'm actually pretty sympathetic to so-called non-realist theology (though a lot of it seems to have a pretty obnoxious post-modern undertone that suggests non-realism about everything).

Comment author: byrnema 23 February 2010 03:45:45AM *  0 points [-]

Oh, I see! You were confused by my statement that one magisterium is for science and one is for faith when I simultaneously seemed not to object in any way if you wanted to assert that science applies everywhere.

In the statement, 'one magesterium is for science', 'science' must be meant in some limited sense. Specifically, I guess, the set of scientific facts and principles we've learned that apply to X.

Maybe this could happen in Flatland. X is a two-dimensional world and the people there learn rules that apply to 2D. But Flatland is embedded in a 3D world X'. I'm not saying the people in flatland can't comprehend X' with a different set of rules, but they would be justified in parsing their world as X and X' -- especially if they experience 2D things usually but encounter understanding of 3D things only exactly when they happen to collect in a square with a plus sign affixed to one side.

Comment author: Jack 23 February 2010 04:09:06AM 0 points [-]

So here is something that looks like it would qualify as reason for the flatlanders to reject their two-dimensional science. In Flatland an object that is trapped in a square cannot escape. To a flatlander seeing an object escape a box is going to look like magic. They will be forced to question their most basic beliefs about the nature of the world. Would this count as an inconsistency that cannot be resolved with their scientific facts and principles... the kind of thing that would make it reasonable to believe in an additional magisterium?

Comment author: byrnema 23 February 2010 04:35:48AM *  0 points [-]

Yeah.

So if they wanted to be monists, they would reject their 2D-science and say that while 2D-science apparently seems to be a good approximation of most things, it's only an approximation as apparently reality enables square-escape. They try to look for extensions of 2D science that make sense and are consistent with what they observe about square-escape, but just haven't solved the problem yet.

If they wanted to be dualists, they would say that in one magisterium, 2D science applies. Any non-2D stuff that goes on belongs to that separate, independent magisterium they'll call Xhi, a word which is really just a placeholder for 'the third dimension' until they discover it.

Comment author: Jack 23 February 2010 04:56:42AM 0 points [-]

Will the Flatlanders theorize about Xhi? Will they have knowledge of it? Are there facts about Xhi?

Comment author: Sniffnoy 21 February 2010 07:46:34AM 2 points [-]

I don't understand why true randomness is a problem. Is there something so wrong with probabilistic determinism?

Comment author: byrnema 22 February 2010 10:15:56PM 0 points [-]

I think so. If a process is truly random, does this mean there was no mechanism for it? How was it determined? It seems to me that picking a random number is something a closed system cannot possibly do.

Comment author: RobinZ 22 February 2010 10:24:10PM 0 points [-]

"Cannot possibly" is a very strong claim - I would hesitate to say anything much stronger than "should not be expected to".

Comment author: byrnema 22 February 2010 10:29:48PM 1 point [-]

You're correct of course.

But I'm 'sticking my neck out' on this one -- my intention was to signal this.

Comment author: RobinZ 22 February 2010 10:31:36PM *  0 points [-]

Admirable! I will read it as a rhetorical flourish, then.

Comment author: Cyan 20 February 2010 11:52:25PM *  13 points [-]

Are people interested in reading a small article about a case of abuse of frequentist statistics? (In the end, the article was rejected, so the peer review process worked.) Vote this comment up if so, down if not. Karma balance below.

ETA: Here's the article.

Comment author: RobinZ 21 February 2010 03:22:33AM 0 points [-]

Will the case be feasibly anonymous? I would vote that the article be left unwritten if it would unambiguously identify the author(s), either explicitly or through unique features of the case (e.g. details of the case which are idiosyncratic to only one or a very few research groups).

Comment author: Cyan 21 February 2010 03:48:07AM 1 point [-]

I don't know who the authors were or the specific scientific subject matter of the paper. (I didn't need to know that to spot their misuse of statistics.)

Comment author: RobinZ 21 February 2010 03:55:32AM 0 points [-]

Understood!

Comment author: byrnema 21 February 2010 03:44:15AM 1 point [-]

Good point. Also, they might wish to rewrite and resubmit... in any case, you can't reveal anything they would want to lay original claim to or feel afraid of being scooped on.

Comment author: Douglas_Knight 21 February 2010 02:18:00AM 1 point [-]

If it's really frequentism that caused the problem, please spell this out. I find that "frequentist" is used a lot around here to mean "not correct." (but I'm interested whether or not it's about frequentism)

Comment author: Technologos 21 February 2010 03:05:37AM *  2 points [-]

My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that's the subject of the article. Frequentism may not have "caused the problem," per se, but perhaps it enabled it?

Comment author: lunchbox 20 February 2010 11:09:25PM 0 points [-]

Exercising "rational" self-control can be very unpleasant, therefore resulting in disutility.

Example 1: When I buy an interesting-looking book on Amazon, I can either have it shipped to me in 8 days for free, or in 2 days for a few bucks. The naive rational thing to do is to select the free shipping, but you know what? That 8-day wait is more unpleasant than spending a few bucks.

Example 2: When I come home from the grocery store I'm tempted to eat all the tastiest food first. It would be more "emotionally intelligent" to spread it out over the course of the week. But that requires a lot of unpleasant resistance to temptation. Also, the plain food seems more appealing when I'm hungry and it's the only thing in my fridge.

Of course, exercising restraint probably builds willpower, a good thing in the long run. But in some cases we should admit that our willpower is only so elastic, and that the most rational thing to do is to give in to our impulses.

What are some other seemingly "irrational" things we do that are in fact rational when we factor in the pleasantness of doing them?
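The shipping trade-off above can be framed as a simple utility comparison once the unpleasantness of waiting is priced in. The numbers here (a $4 fee, a $1/day disutility of waiting) are invented purely for illustration:

```python
def net_utility(price, wait_days, daily_wait_cost):
    """Total cost of a shipping option: money paid plus the (hypothetical)
    dollar-equivalent unpleasantness of waiting. Higher is better."""
    return -(price + wait_days * daily_wait_cost)

# Hypothetical figures: waiting feels like $1/day of disutility.
free_slow = net_utility(price=0, wait_days=8, daily_wait_cost=1.0)
paid_fast = net_utility(price=4, wait_days=2, daily_wait_cost=1.0)
print(paid_fast > free_slow)  # True: paying for fast shipping wins here
```

On these assumptions the "naive" free option is dominated; whether it is in reality depends entirely on one's actual disutility of waiting.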

Comment author: Nick_Tarleton 21 February 2010 11:49:48PM *  1 point [-]

What are some other seemingly "irrational" things we do that are in fact rational when we factor in the pleasantness of doing them?

Relevant paper: Lay Rationalism and Inconsistency between Predicted Experience and Decision

Comment author: lunchbox 22 February 2010 01:40:01AM 0 points [-]

Thanks Nick. That paper looks very interesting.

Comment author: ata 20 February 2010 10:35:03AM *  5 points [-]

Could anyone recommend an introductory or intermediate text on probability and statistics that takes a Bayesian approach from the ground up? All of the big ones I've looked at seem to take an orthodox frequentist approach, aside from being intolerably boring.

Comment author: Cyan 20 February 2010 09:06:23PM *  4 points [-]

(All of the below is IIRC.)

For a really basic introduction, there's Elementary Bayesian Statistics. It's not worth the listed price (it has little value as a reference text), but if you can find it in a university library, it may be what you need. It describes only the de Finetti coherence justification; on the practical side, the problems all have algebraic solutions (it's all conjugate priors, for those familiar with that jargon) so there's nothing on numerical or Monte Carlo computations.

Data Analysis: A Bayesian Approach is a slender and straightforward introduction to the Jaynesian approach. It describes only the Cox-Jaynes justification; on the practical side, it goes as far as computation of the log-posterior-density through a multivariate second-order Taylor approximation. It does not discuss Monte Carlo methods.

Bayesian Data Analysis, 2nd ed. is my go-to reference text. It starts at intermediate and works its way up to early post-graduate. It describes justifications only briefly, in the first chapter; its focus is much more on "how" than "why" (at least, for philosophical "why", not methodological or statistical "why"). It covers practical numerical and Monte Carlo computations up to at least journeyman level.

Comment author: Kevin 20 February 2010 04:19:02PM *  1 point [-]

I'm not intending to put this out as a satisfactory answer, but I found it with a quick search and would like to see what others think of it.

Introduction to Bayesian Statistics by William M. Bolstad

http://books.google.com/books?id=qod3Tm7d7rQC&dq=bayesian+statistics&source=gbs_navlinks_s

Good reviews on Amazon, and available from $46 + shipping... http://www.amazon.com/Introduction-Bayesian-Statistics-William-Bolstad/dp/0471270202

Comment author: Cyan 20 February 2010 07:36:28PM *  1 point [-]

It's hard to say from the limited preview, which only goes up to chapter 3 -- the Bayesian stuff doesn't start until chapter 4. The first three chapters cover basic statistics material -- it looks okay to my cursory overview, but will be of limited interest to people looking for specifically Bayesian material. As to the rest of the book, the section headings look about right.

Comment author: Eliezer_Yudkowsky 20 February 2010 02:37:04PM 1 point [-]

I second the question. "Elements of Statistical Learning" is Bayes-aware though not Bayesian, and quite good, but that's statistical learning which isn't the same thing at all.

Comment author: AngryParsley 19 February 2010 09:33:57PM 7 points [-]

The FBI released a bunch of docs about the anthrax letter investigation today. I started reading the summary since I was curious about codes used in the letters. All of a sudden on page 61 I see:

c. Godel, Escher, Bach: the book that Dr. Ivins did not want investigators to find

The next couple of pages talk about GEB and relate some parts of it to the code. It's really weird to see literary analysis of GEB in the middle of an investigation on anthrax attacks.

Comment author: spriteless 19 February 2010 08:45:00PM 0 points [-]

Is there a facebook group I can spam my friends to join to save the world via Craiglist ads yet?

Comment author: Kevin 19 February 2010 10:04:05PM 1 point [-]

We are meticulously planning our approach in private now. It'll be a while before we start a Facebook group but I will definitely let all of LW know when we get there.

Comment author: Eliezer_Yudkowsky 19 February 2010 10:31:50PM 3 points [-]

Er, who's planning this? Is Michael Vassar in on it?

Comment author: Kevin 20 February 2010 02:37:04AM 1 point [-]

Yes

Comment author: CronoDAS 19 February 2010 06:40:11PM *  0 points [-]

Suppose I wanted to convince someone that signing up for cryonics was a good idea, but I had little confidence in my ability to persuade them in a face-to-face conversation (or didn't want to drag another discussion too far off-topic) - what is the one link you would give someone that is most likely to save their life? I find the pro-cryonics arguments given by Eliezer and others on this site + Overcoming Bias to be persuasive (I'm convinced that if you don't want to die, it's a good idea to sign up) but all the arguments are in pieces and in different places. There's no one, single, "This is why you should sign up for cryonics" persuasive essay here that I've found that I can simply link someone to and hope for the best. Can you direct me to one?

Comment author: thomblake 19 February 2010 06:43:57PM 2 points [-]

It's tricky. As Socrates noted in the Apology, it's much easier to convince someone in a one-on-one conversation than to have a general argument to convince anyone in general.

Comment author: CronoDAS 19 February 2010 06:56:00PM 0 points [-]

Yeah, it is. I would like a good link to use as a conversation starter, at least.

I wanted to evangelize on other message boards/blogs/etc., and having a single, ready-made "No, you really can cheat death!" link I can post would be a big help.

Comment author: Leafy 19 February 2010 02:20:09PM 3 points [-]

It is common practice, when debating an issue with someone, to cite examples.

Has anyone else ever noticed how your entire argument can be undermined by stating a single example or fact which does not stand up to scrutiny, even though your argument may be valid and all other examples robust?

Is this a common phenomenon? Does it have a name? What is the thought process that underlies it and what can you do to rescue your position once this has occurred?

Comment author: wnoise 19 February 2010 11:10:16PM *  3 points [-]

It takes effort to evaluate examples. Revealing that one example is bad raises the possibility that others are bad as well, because the methods for choosing examples are correlated with the examples chosen. The two obvious reasons for a bad example are:

  1. You missed that this was a bad example, so why should I trust your interpretation or understanding of your other examples?
  2. You know this is a bad example, and included it anyway, so why should I trust any of your other examples?
Comment author: ciphergoth 19 February 2010 09:12:08AM 4 points [-]

More cryonics: my friend David Gerard has kicked off an expansion of the RationalWiki article on cryonics (which is strongly anti). The quality of argument is breathtakingly bad. It's not strong Bayesian evidence because it's pretty clear at this stage that if there were good arguments I hadn't found, an expert would be needed to give them, but it's not no evidence either.

Comment author: RichardKennaway 19 February 2010 11:51:28AM 10 points [-]

I have not seen RationalWiki before. Why is it called Rational Wiki?

Comment author: CronoDAS 19 February 2010 08:18:33PM *  8 points [-]

From http://rationalwiki.com/wiki/RationalWiki :

RationalWiki is a community working together to explore and provide information about a range of topics centered around science, skepticism, and critical thinking. While RationalWiki uses software originally developed for Wikipedia it is important to realize that it is not trying to be an encyclopedia. Wikipedia has dominated the public understanding of the wiki concept for years, but wikis were originally developed as a much broader tool for any kind of collaborative content creation. In fact, RationalWiki is closer in design to original wikis than Wikipedia.

Our specific mission statement is to:

  1. Analyze and refute pseudoscience and the anti-science movement, ideas and people.
  2. Analyze and refute the full range of crank ideas - why do people believe stupid things?
  3. Develop essays, articles and discussions on authoritarianism, religious fundamentalism, and other social and political constructs

So it's inspired by Traditional Rationality.

Comment author: RichardKennaway 19 February 2010 10:33:29PM *  18 points [-]

A fine mission statement, but my impression from the pages I've looked at is of a bunch of nerds getting together to mock the woo. "Rationality" is their flag, not their method: "the scientific point of view means that our articles take the side of the scientific consensus on an issue."

Comment author: Eliezer_Yudkowsky 19 February 2010 11:24:38PM *  29 points [-]

Voted up, but calling them "nerds" in reply is equally ad-hominem, ya know. Let's just say that they don't seem to have the very high skill level required to distinguish good unusual beliefs from bad unusual beliefs, yet. (Nor even the realization that this is a hard problem, yet.)

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

Also, one person on RationalWiki saying silly things is not a good reason to launch an aggressive counterattack on a whole wiki containing many potential recruits.

Comment author: Will_Newsome 18 May 2011 11:15:51AM 7 points [-]

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors.

(As an extreme example, a few weeks idly checking out RationalWiki led me to the quote at the top of this page and only a few months after that I was at SIAI.)

Comment author: David_Gerard 12 May 2012 09:11:16PM 1 point [-]

I only just noticed this. Good Lord. (I put that quote there, so you're my fault.)

Comment author: komponisto 20 February 2010 03:25:52AM 16 points [-]

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

I guess I should try harder to remember this, in the context of my rather discouraging recent foray into the Richard Dawkins Forums -- which, I admit, had me thinking twice about whether affiliation with "rational" causes was at all a useful indicator of actual receptivity to argument, and wondering whether there was much more point in visiting a place like that than a generic Internet forum. (My actual interlocutors were in fact probably hopeless, but maybe I could have done a favor to a few lurkers by not giving up so quickly.)

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

Comment author: Eliezer_Yudkowsky 20 February 2010 02:41:36PM 8 points [-]

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

Comment author: TimFreeman 18 May 2011 08:28:38PM *  4 points [-]

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

Is there any information on how the design was driven by the problem?

For example, I see a karma system, a hierarchical discussion that lets me fold and unfold articles, and lots of articles by Eliezer. I've seen similar technical features elsewhere, such as Digg and SlashDot, so I'm confused about whether the claim is that this specific technology is solving the problem of having a ton of clueless followers, or the large number of articles from Eliezer, or something else.

Comment author: h-H 20 February 2010 04:25:45PM 4 points [-]

not to detract, but does Richard Dawkins really possess such 'high quality'? IMO his arguments are good as a gateway for aspiring rationalists, not that far above the sanity waterline

that, or it might be a problem of forums in general ...

Comment author: komponisto 20 February 2010 04:41:55PM *  13 points [-]

Dawkins is a very high-quality thinker, as his scientific writings reveal. The fact that he has also published "elementary" rationalist material in no way takes away from this.

He's way, way far above the level represented by the participants in his namesake forum.

(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)

Comment author: MichaelBishop 02 November 2011 04:29:36PM 0 points [-]

Convincing Dawkins would be a great strategy for promoting cryonics... who else should the community focus on convincing?

Comment author: CarlShulman 20 August 2010 05:39:00AM *  6 points [-]

Here's Dawkins on some non socially-reinforced views: AI, psychometrics, and quantum mechanics (in the last 2 minutes, saying MWI is slightly less weird than Copenhagen, but that the proliferation of branches is uneconomical).

Comment author: ciphergoth 21 February 2010 11:02:33AM 5 points [-]

Obviously the most you could persuade him of would be that he should look into it.

Comment author: Jack 20 February 2010 09:13:47PM 13 points [-]

(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)

Bloggingheads are exactly 60 minutes.

Comment author: h-H 20 February 2010 05:31:14PM 0 points [-]

You're absolutely right, I didn't consider his scientific writings. Though my argument still weakly stands, since I wasn't talking about that: he's a good scientist, but a rationalist of, say, Eliezer's level? I somehow doubt that.

(my bias is that he hasn't gone beyond the 'debunking the gods' phase in his not specifically scientific writings, and here I'll admit I haven't read much of him.)

Comment author: CronoDAS 20 February 2010 11:54:15AM 16 points [-]

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

This is actually one of Niven's Laws: "There is no cause so right that one cannot find a fool following it."

Comment author: Morendil 20 February 2010 09:31:15AM 5 points [-]

it really is frustrating how little of the quality of a person [...] actually manages to rub off

Wait, you have a model which says it should?

You don't learn from a person merely by associating with them. And:

onto the legions of Internet followers of said person or cause.

I would bet a fair bit that this is the source of your frustration, right there: scale. You can learn from a person by directly interacting with them, and sometimes by interacting with people who learned from them. Beyond that, it seems to me that you get "dilution effects", kicking in as soon as you grow faster than some critical pace at which newcomers have enough time to acculturate and turn into teachers.

Communities of inquiry tend to be victims of their own success. The smarter communities recognize this, anticipate the consequences, and adjust their design around them.

Comment author: Jack 20 February 2010 05:22:38AM 0 points [-]

Interesting. How many places have you brought this issue up? Is there any forum which has responded rationally? What seem to be the controlling biases?

Comment author: komponisto 20 February 2010 07:13:44AM *  6 points [-]

How many places have you brought this issue up?

LW is thus far the only forum on which I have personally initiated discussion of this topic; but obviously I've followed discussions about it in numerous other places.

Is there any forum which has responded rationally?

You're on it.

I mean, there are plenty of instances elsewhere of people getting the correct answer. But basically what you get is either selection bias (the forum itself takes a position, and people are there because they already agree) or the type of noisy mess we see at RDF. To date, LW is the only place I know of where an a priori neutral community has considered the question and then decisively inclined in the right direction.

What seem to be the controlling biases?

In the case of RDF, I suspect compartmentalization is at work: this topic isn't mentally filed under "rationality", and there's no obvious cached answer or team to cheer for. So people there revert to the same ordinary, not-especially-careful default modes of thinking used by the rest of humanity, which is why the discussion there looks just like the discussions everywhere else.

It's noteworthy that my references and analogies to concepts and arguments discussed by Dawkins himself had no effect; apparently, we were just in a sort of separate magisterium. Particularly telling was this quote:

You are claiming that the issue of gods existence has been the subject of a major international trial, where a jury found that god existed? When did that happen?

Now on the face of it this seems utterly dishonest: I hardly think this fellow would actually be tempted to convert to theism upon hearing the news that eight Perugians had been convinced of God's existence. But I suspect he's actually just trying to express the separation that apparently exists in his mind between the kind of reasoning that applies to questions about God and the kind of reasoning that applies to questions about a criminal case.

Comment author: wedrifid 21 February 2010 01:23:29PM 1 point [-]

I know of where an a priori neutral community

Technical nitpick on the use of 'a priori' in this context. (Subject to possible contradiction if I have missed a nuance in its meaning in the statistics context.)

I would have just gone with 'previously'.

Comment author: RichardKennaway 19 February 2010 11:31:53PM 2 points [-]

Point taken.

Comment author: Bindbreaker 19 February 2010 03:29:34AM 1 point [-]

What's an easy way to explain the paperclip thing?

Comment author: Alicorn 19 February 2010 04:48:37AM 3 points [-]

We happen to like things like ice cream and happiness. But we could have liked paperclips. We could have liked them a lot, and not liked anything else enough to have it instead of paperclips. If that had happened, we'd want to turn everything into paperclips - even ourselves and each other!

Comment author: GreenRoot 19 February 2010 12:17:57AM 3 points [-]

What do you have to protect?

Eliezer has stated that rationality should not be an end in itself, and that to get good at it, one should be motivated by something more important. For those of you who agree with Eliezer on this, I would like to know: What is your reason? What do you have to protect?

This is a rather personal question, I know, but I'm very curious. What problem are you trying to solve or goal are you trying to reach that makes reading this blog and participating in its discourse worthwhile to you?

Comment author: knb 20 February 2010 08:00:41AM *  3 points [-]

I'm trying to apply LW-style hyper-rationality to excelling in what I have left of grad school and to shepherding my business to success.

My mission (I have already chosen to accept it) is to make a pile of money and spend it fighting existential risk as effectively as possible. (I'm not yet certain if SIAI is the best target). The other great task I have is to persuade the people I care about to sign up for cryonics.

Strangely enough, the second task actually seems even less plausible to me, and I have no idea how to even get started since most of those people are theists.

Comment author: ata 20 February 2010 09:39:12AM 5 points [-]

Strangely enough, the second task actually seems even less plausible to me, and I have no idea how to even get started since most of those people are theists.

Alcor addresses some of the 'spiritual' objections in their FAQ. ("Whenever the soul departs, it must be at a point beyond which resuscitation is impossible, either now or in the future. If resuscitation is still possible (even with technology not immediately available) then the correct theological status is coma, not death, and the soul remains.") Some of that might be helpful.

However, that depends on you being comfortable persuading people to believe what are probably lies (which might happen to follow from other lies they already believe) in the service of leading them to a probably correct conclusion, which I would normally not endorse under any circumstances, but I would personally make an exception in the interest of saving a life, assuming they can't be talked out of theism.

It also depends on their being willing to listen to any such reasoning if they know you're not a theist. (In discussions with theists, I find they often refuse to acknowledge any reasoning on my part that demonstrates that their beliefs should compel them to accept certain conclusions, on the basis that if I do not hold those beliefs, I am not qualified to reason about them, even hypothetically. Not sure if others have had that experience.)

Comment author: RobinZ 19 February 2010 01:23:21AM 5 points [-]

I'm not quite sure I can answer the question. I certainly have no major, world(view)-shaking Cause which is driving me to improve my strength.

For what it's worth, I've had this general idea that being wrong is a bad idea for as long as I can remember. Suggestions like "you should hold these beliefs, they will make your life happier" always sounded just insane - as crazy as "you should drink this liquor, it will make your commute less boring". From that standpoint, it feels like what I have to protect is just the things I care about in the world - my own life, the lives of the people around me, the lives of humans in general.

That's it.

Comment author: UnholySmoke 19 February 2010 11:06:13AM 0 points [-]

This is a pretty good summary of my standpoint. While I agree with the overarching view that rationality isn't a value in its own right, it seems like a pretty good thing to practise for general use.

Comment author: h-H 19 February 2010 01:10:48AM 2 points [-]

OB then LW were the 'step beyond' to take after philosophy, not that I was seriously studying it.

to be honest I don't think there's much going on these days new-topic-wise, so I'm here less often. but I do come back whenever I'm bored, so at first "pure desire to learn" then "entertainment" would be my reasons ...

oh and a major part of my goals in life is formed by religion, i.e. saving humanity from itself and whatever follows. This is more ideological than actual at this point in time, but anyway, that goal is furthered by learning more about AI/futurism. The rationality part less so, as I already had an intuitive grasp of it, you could say, and really all it takes is reading the sequences with their occasional flaws/too-strong assertions. The futurism part is more speculative (and interesting) so it's my main focus, along with the moral questions it brings, though there is no dichotomy to speak of if you consider this a personal blog rather than a book or something similar.

hope this helped :)

Comment author: GreenRoot 19 February 2010 01:20:15AM 1 point [-]

Yes, this is what I was curious about, thanks. I've seen others cite humanity's existential risks as their motivations too (mostly uFAI, not as much nuclear war or super-flu or meteors). I'm like you in that for me it's definitely a mix of learning and entertainment.

Comment author: SilasBarta 18 February 2010 11:05:18PM 2 points [-]

Oh, look honey: more proof wine tasting is a crock:

A French court has convicted 12 local winemakers of passing off cheap merlot and shiraz as more expensive pinot noir and selling it to undiscerning Americans, including E&J Gallo, one of the United States' top wineries.

Cue the folks claiming they can really tell the difference...

Comment author: jpet 21 February 2010 06:25:12AM *  2 points [-]

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

They were passing off as expensive, something that's actually cheap. Where else would that work so easily, for so long?

I think it's closer to say they were passing off as cheap, something that's actually even cheaper.

Switch the food item and see if your criticism holds:

Wonderbread, America's top bread maker, was conned into selling inferior bread. So-called "gourmets" never noticed the difference! Bread tasting is a crock.

Comment author: Douglas_Knight 21 February 2010 07:34:21AM 0 points [-]

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

If people who can tell the difference are a big enough demographic to sell to, then they are employed by all wineries, regardless of quality. But an alternate explanation is that Gallo was tacitly in on the scam - they got as much PN as Sideways demanded, without moving the market.

Comment author: jpet 22 February 2010 01:33:43AM 1 point [-]

Ah, I misunderstood the comment. I just assumed that Gallo was in on it, and the claim was that customers of Gallo failing to complain constituted evidence of wine tasting's crockitude.

If Gallo's wine experts really did get taken in, then yes, that's pretty strong evidence. And being the largest winery, I'm sure they have many experts checking their wines regularly--too many to realistically be "in" on such a scam.

So you've convinced me. Wine tasting is a crock.

Comment author: SilasBarta 21 February 2010 06:44:24AM 1 point [-]

If people made such a huge deal about the nuances in the taste of bread, while it also "happened" to have psychoactive effects that, gosh, always have to be present for the bread to be "good enough" for them, and cheap breads were still normally several times the cost of comparable-nutrition food, then yes, the cases would be parallel.

(Before anyone says it: Yes, I know bread has trace quantities of alcohol, we're all proud of what you learned in chemistry.)

Comment author: Morendil 20 February 2010 10:51:30AM 6 points [-]

There's plenty of hard evidence that people are vulnerable to priming effects and other biases when tasting wine.

There's also plenty of hard evidence that people can tell the difference between wine A and wine B, under controlled (blinded) conditions. Note that "tell the difference" isn't the same as "identify which would be preferred by experts".

So, while the link is factually interesting, and evidence that some large-scale deception is going on, aided by such priming effects as label, marketing campaigns and popular movies can have, it seems a stretch to call it "proof" that people in general can't tell wine A from wine B.

Rather, this strikes me as a combination of trolling and boo lights: cheaply testing who appears to be "on your side" in a pet controversy. How well do you expect that to work out for you, in the sense of "reliably entangling your beliefs with reality"?

Comment author: SilasBarta 20 February 2010 04:29:10PM 1 point [-]

I think I'm entangling my beliefs with reality very well, by virtue of extracting all available information from phenomena rather than retreating to evidence that agrees with me. (Let's not forget, I didn't start out thinking that it was all BS.)

For example, did you stop to notice the implications of this:

There's plenty of hard evidence that people are vulnerable to priming effects and other biases when tasting wine.

How does that compare to the priming effects for other drinks? Does it matter?

So, while the link is factually interesting, and evidence that some large-scale deception is going on, aided by such priming effects as label, marketing campaigns and popular movies can have, it seems a stretch to call it "proof" that people in general can't tell wine A from wine B.

But what would be the appropriate comparison? They were passing off as expensive, something that's actually cheap. Where else would that work so easily, for so long? Normally, if you tried that, it would be noticed quickly, if not immediately, by virtually everyone.

What if you tried to pass off 16 oz of milk as 128? Or spoiled milk as milk expiring in a week?

Then, factor in how much difference is claimed to exist in wine vs. milks.

Who's optimally using evidence here?

Comment author: CronoDAS 21 February 2010 03:03:01AM *  5 points [-]

They were passing off as expensive, something that's actually cheap. Where else would that work so easily, for so long?

Art forgeries. (Which shows that the value of the painting is determined by the status of the artist and not the quality of the art.)

If I can paint a painting that convinces experts that it was painted by [insert expert painter here], does that mean I'm as good an artist as said painter? (Assuming that my painting isn't a literal copy of someone else's.)

Comment author: Morendil 21 February 2010 07:56:14AM 0 points [-]

Often the worth of an artist stems from inventing new possibilities. Copycats are lesser.

Comment author: SilasBarta 21 February 2010 06:06:45AM 3 points [-]

Art forgeries. (Which shows that the value of the painting is determined by the status of the artist and not the quality of the art.)

Which, like wine, is another example of a path-dependent collective delusion that's not Truly Part of our values. (That is, our valuation of the work wouldn't survive deletion of the history that led to such a valuation.)

If I can paint a painting that convinces experts that it was painted by [insert expert painter here], does that mean I'm as good an artist as said painter? (Assuming that my painting isn't a literal copy of someone else's.)

Very nearly yes, it does, modulo a few factors. If you produced it after the artist, then you are benefiting from the artist's already having identified a region of conceptspace that you did not find yourself. (If the art is revered because of the artist's social status, then it wasn't even much of an accomplishment to begin with.) To put it another way, you produced the work after "supervised learning", while the artist didn't need that particular training.

If you can pass off a previous work of yours as being one of the artist's, that definitely makes you better.

Comment author: komponisto 21 February 2010 07:15:18AM 1 point [-]

Which, like wine, is another example of a path-dependent collective delusion that's not Truly Part of our values. (That is, our valuation of the work wouldn't survive deletion of the history that led to such a valuation.)

Who is "we", here?

The problem I have is not that you're wrong, for the people you're talking about; it's that you (probably) overestimate the size and/or importance of that population. You're not telling the whole truth, in effect. There are plenty of people who like paintings for the way they look, and would happily buy the work of a lesser-known artist at a cheap price if they liked it. Yes, some people use art to status-signal, but some people also actually like art. (There may even be a nonempty intersection!)

Comment author: SilasBarta 21 February 2010 11:14:19PM 0 points [-]

There are plenty of people who like paintings for the way they look, and would happily buy the work of a lesser-known artist at a cheap price if they liked it. Yes, some people use art to status-signal, but some people also actually like art. (There may even be a nonempty intersection!)

Sorry if I sound dodgy here, but I don't think I've said anything that contradicts this. My criticism is of these two things:

1) the idea that the elite-designated "high art" is non-arbitrary. (I claim it's a status-reinforced information cascade that wouldn't regain the designation of high-art if you deleted knowledge of which ones had been so classified.)

2) the excessive premiums paid for artworks based on both 1) and the fact that they are the originals (a "piece of history").

Never have I criticized or denied the existence of people who buy artworks because they simply like it and it appeals to them. I just criticize the way that we're expected to agree with the laurels attached to elite-designated high art. As I said before, I would have no problem if art were just a matter of "hey, I like this, now get on with your lives" (as it works in e.g. video games).

Comment author: Morendil 20 February 2010 07:13:20PM *  4 points [-]

Who's optimally using evidence here?

You seem to want a contest. The other option, where we are both "on the side of truth", appeals to me more.

We're fortunate in having different experiences in the domain of taste. I'm one of those people who like wine, and I'm confident I can identify some of its taste characteristics in blind tests. So, predictably I resent language which implies I'm an idiot, but I'm open to inquiry.

Our investigation should probably begin "at the crime scene", that is, close to what evidence we can gather about the sense of taste. So, yes, we could examine similar priming effects on other drinks.

I have a candidate in mind, but what I'd like to ask you first is, suppose I name the drink I have in mind and we then go look for evidence of fraud in its commerce. What would it count as evidence of if we found no fraud? If we did find it? Which one would you say counts as evidence that "people can't tell the difference" between wines?

Comment author: SilasBarta 21 February 2010 06:22:23AM *  0 points [-]

We're fortunate in having different experiences in the domain of taste. I'm one of those people who like wine, and I'm confident I can identify some of its taste characteristics in blind tests. So, predictably I resent language which implies I'm an idiot, but I'm open to inquiry.

If someone was long ago made aware of powerful evidence that an expensive pleasure he currently enjoys can be replicated with fidelity at a sliver of the cost, and he hasn't already done the experimentation necessary to properly rule this out, then you're right. There are explanations other than that he is an idiot. But they're not much more flattering either.

I can tell you that if I were in this position for another beverage, I would have already done the tests.

I have a candidate in mind, but what I'd like to ask you first is, suppose I name the drink I have in mind and we then go look for evidence of fraud in its commerce. What would it count as evidence of if we found no fraud? If we did find it? Which one would you say counts as evidence that "people can't tell the difference" between wines?

The no-fraud case is positive but weak evidence for people telling the difference, because it can be accounted for by honesty on the part of retailers, fear of whistleblowers, etc. Finding fraud is unlikely but strong evidence against the claim that people can tell the difference -- because it crucially depends on the priming effects (or some other not-truly-part-of-you effect) dominating.

I'd prefer a blind comparison on the cheap substitutes like Cyan suggested.

And I'd be glad that you've identified a product I'm overpaying for!

Comment author: Morendil 22 February 2010 11:11:57AM *  0 points [-]

an expensive pleasure he currently enjoys can be replicated with fidelity at a sliver of the cost

What would you suggest I do, to replicate the pleasure I get from wine?

Comment author: SilasBarta 22 February 2010 08:31:01PM *  0 points [-]

You mean, replicate the pleasure from expensive wine? (I'm going to assume you genuinely like the act of drinking wine.) Easy: accept that it's an illusion, then buy the cheap stuff (modulo social status penalties) and prime it the way good wines are. (This may require an assistant.) Gradually train yourself to regard them as the same quality. If you can be trained to put it on a pedestal, you can probably be trained to take it off.

If it's unavoidable that you discount the taste due to your residual knowledge that the wine is low-class, then accept that you're overpaying because of an unremovable bias. (Not a slight against you, btw -- I admit I was the same way with CFLs for a while, in that I couldn't discard the knowledge, and thus negative affect, that they're CFLs rather than incandescent, and thereby pointlessly overpaid for lighting.)

On the more realistic assumption that you've simply been trained to like wine via a process that would equally well train you to like anything, get some friends together and train yourselves to like V8. (The veggie drink, not the engine.)

Comment author: Morendil 21 February 2010 06:53:27AM *  0 points [-]

Finding fraud is unlikely but strong evidence against the claim that people can tell the difference -- because it crucially depends on the priming effects [...] dominating.

Wait, can you expand on how fraud in the trade of a non-alcoholic drink is "strong but unlikely" evidence that people cannot tell the differences between wines?

Such fraud might be evidence that people cannot tell the difference between tastes more generally, but that seems like a higher hurdle to clear.

Comment author: SilasBarta 21 February 2010 11:21:24PM *  -1 points [-]

Wait, can you expand on how fraud in the trade of a non-alcoholic drink is "strong but unlikely" evidence that people cannot tell the differences between wines?

Where did I say that? The claim you quoted from me was about "what would be evidence of people's ability to distinguish that non-alcoholic drink", not wines.

You'd check for people's ability to distinguish wines by fraud in wines, and people's ability to distinguish specific non-wine drinks by fraud in specific non-wine drinks.

I thought I made that very clear, and if not, common sense and the principle of charity should have sufficed for you not to infer that I would get the "wires crossed" like that.

(ETA: By the way: is there a shorter way to refer to a person's ability to distinguish foods/drinks? I've tried shorter expressions, but they make other posters go batty about the imprecision without even suggesting an alternative. Paul Birch suggests "taste entropy of choice", but that's obscure.)

And, to preempt a possible point you may be trying to make: yes, fruit drinks may be fraudulently labeled as real fruit juice, but that's not a parallel case unless people claim to be able to distinguish by taste the presence of real juice and purchase on that basis.

Comment author: Morendil 22 February 2010 11:08:27AM 0 points [-]

So we've averted misunderstanding, good. My question remains: what does fraud (or non-fraud) in non-alcoholic drinks tell us about whether people "really can tell the difference" between wines?

Just to be sure what you're claiming, btw: if I did a "triangle test", blinded, on two arbitrary bottles of wine from my supermarket, and I could tell them apart, would you retract that claim? Or is your claim restricted to some specific varietals?

Comment author: SilasBarta 22 February 2010 08:34:58PM 0 points [-]

So we've averted misunderstanding, good.

How could I have explained my position better so that you would not have inferred the point about fruit drinks?

My question remains: what does fraud (or non-fraud) in non-alcoholic drinks tell us about whether people "really can tell the difference" between wines?

It doesn't remain. If people can tell the difference, you don't gain from fraud. If they can't, you could gain from fraud. Where's the confusion?

Just to be sure what you're claiming, btw: if I did a "triangle test", blinded, on two arbitrary bottles of wine from my supermarket, and I could tell them apart, would you retract that claim? Or is your claim restricted to some specific varietals?

I accept that you could tell red from white, so it couldn't be completely random. I'd want a test over the two varieties they said were swapped in the story, or within the same variety but with a significant cost difference.

Comment author: Morendil 24 February 2010 10:15:19PM 3 points [-]

How could I have explained my position better so that you would not have inferred the point about fruit drinks?

I have inferred nothing about fruit drinks. In this comment you replied to me with an allusion to "other drinks". Later in the same comment you referred to milk. In other words, you primed me to think about non-alcoholic drinks. Later in the exchange, you ruled out the possibility that fraud in non-alcoholic drinks will provide any evidence relevant to your original claim. So we're better off dropping this line of inquiry altogether.

I'd want a test over the two varieties they said were swapped in the story

The story mentioned three varietals: merlot, shiraz and pinot noir. It seems likely that in the next few days I will be able to procure half-bottles of a Merlot (vin de pays, 2005) at €6.5/l and a Pinot Noir (Burgundy, 2005) at €14/l. The experiment will set me back about €10.

Do you consider the experiment valid if I buy the bottles myself, and have a third person prepare the glasses? Would you care to stipulate any particular controls? Are you willing to trust my word when I report back with the results? Do I have to correctly identify the cheaper and the more expensive wine, or just to show I can tell the difference? What will you bet on the outcome?

Comment author: Cyan 20 February 2010 08:45:58PM 4 points [-]

I'm one of those people who like wine, and I'm confident I can identify some of its taste characteristics in blind tests.

You can easily test yourself if you have a confederate. I recommend a triangle test.
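(To make the triangle test concrete: each trial presents three glasses, two of one wine and one of the other, and the taster picks the odd one out, so pure guessing succeeds 1/3 of the time. Here's a minimal sketch, assuming the standard one-sided binomial null, of how you'd score the results; the function name is mine, not from any standard library.)

```python
# Score a blinded triangle test: p-value for getting `correct` of
# `trials` right when guessing succeeds with probability 1/3.
from math import comb

def triangle_test_p_value(correct: int, trials: int, p_guess: float = 1/3) -> float:
    """One-sided p-value: P(at least `correct` successes under chance)."""
    return sum(
        comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
        for k in range(correct, trials + 1)
    )

# e.g. picking the odd glass correctly 7 times out of 10:
# triangle_test_p_value(7, 10) ≈ 0.0197 -- hard to explain by guessing alone
```

So roughly ten trials with a helper pouring is enough to distinguish real discrimination from luck.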

Comment author: knb 20 February 2010 09:33:23AM 0 points [-]

I love this kind of thing.

Shall we name this phenomenon the "Emperor's New Clothes Effect"?

Comment author: ata 20 February 2010 09:41:24AM *  0 points [-]

That could be a general name for the phenomenon. As it relates to wine tasting (and maybe we could stretch it a bit), I'd propose "the Nuances of Toast Effect", for a particularly memorable phrase in this Dave Barry column.

Comment author: CronoDAS 18 February 2010 11:43:15PM 0 points [-]

I can't even tell the difference between Coke and Pepsi.

Comment author: h-H 20 February 2010 04:40:37PM 0 points [-]

One's in a red can, the other in a blue one ;-)

oh well, me neither actually.

Comment author: LucasSloan 19 February 2010 04:46:21AM 0 points [-]

Really? How much soda do you drink? The difference is readily noted by me. I can even tell the difference between regular and diet Coke.

Comment author: CronoDAS 19 February 2010 05:35:51PM 0 points [-]

I don't like cola very much, so if you gave me a drink of fizzy black stuff, I wouldn't be able to identify which brand it was. Also, other sodas have a tendency to give me heartburn so I've been drinking them much less than I used to.

Comment author: Sniffnoy 19 February 2010 06:58:45AM 1 point [-]

Wait, is that supposed to be harder? I'm not sure I could tell the difference between Coke and Pepsi, but I think I could tell the difference between regular and diet.

Comment author: Jack 19 February 2010 05:24:17AM 2 points [-]

This seems to suggest that it is easier to tell the difference between Coke and Pepsi than it is to tell the difference between regular and diet. I can tell the difference between all of them, but the first is a lot harder and I think that experience is pretty common. Most diet drinks use aspartame as a sugar substitute and aspartame leaves a very distinctive aftertaste in my mouth.

Comment author: Nick_Tarleton 19 February 2010 04:57:48AM *  0 points [-]

I can even tell the difference between regular and diet Coke.

Is this unusual?

Comment author: LucasSloan 19 February 2010 05:01:55AM 0 points [-]

I have heard people remark they can't tell the difference. My father for example.

Comment author: Dean 18 February 2010 10:40:48PM 1 point [-]

First use of "shut up and calculate"?

I liked learning about the bias called the "Matthew effect": the tendency to assign credit to the most eminent among all the plausible candidates, from Matthew 25:29.

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.

http://scitation.aip.org/journals/doc/PHTOAD-ft/vol_57/iss_5/10_1.shtml?bypassSSO=1

enjoy

Comment author: xamdam 18 February 2010 08:50:05PM 1 point [-]

Nice recap of psychological biases from the Charlie Munger school (of hard knocks and making a billion dollars).

http://www.capitalideasonline.com/articles/index.php?id=3251

Comment author: [deleted] 18 February 2010 08:34:40PM *  1 point [-]

For those Less Wrongians who watch anime/read manga, I have a question: What would you consider the top three that you watch/read and why?

Edit: Upon reading gwern's comment, I see how kinda far off topic that was, even for an open thread. So change the question to what anime/manga was most insightful into LW-style thinking and problems?

Comment author: knb 20 February 2010 09:46:11AM 1 point [-]

If Death Note counts, then Haruhi might count as well. Deals with anthropics and weird AIs in a tangential way. The anime is awesome, but not as good as it could have been.

Comment author: Eliezer_Yudkowsky 18 February 2010 10:57:27PM 3 points [-]

Hikaru no Go, of course.

Comment author: Jayson_Virissimo 25 January 2012 11:41:57AM 3 points [-]

Okay, so I'm 10 episodes into HnG and...where is the "LW-style thinking and problems"?

Comment author: Eliezer_Yudkowsky 25 January 2012 08:22:36PM 1 point [-]

Hence the origin of the phrase, "tsuyoku naritai".

Comment author: Anubhav 26 January 2012 02:31:02PM 0 points [-]

Hence the origin of the phrase, "tsuyoku naritai".

That seems to be about as likely as "hyakunen hayai" or "isshoukenmei" or "ninja" or "mahou-tsukau uchuujin" originating from an anime.

(OK... not as likely as the last one.)

Comment author: Eliezer_Yudkowsky 28 January 2012 02:20:25AM 2 points [-]

Well, that's where I got it.

Comment author: Jayson_Virissimo 26 January 2012 12:27:34AM *  1 point [-]

Wow, I can't believe I missed that. Although, if that is the only thing relevant to "LW-style thinking and problems" in HnG, then Death Note compares favorably to it.

Comment author: gwern 18 February 2010 09:30:48PM *  3 points [-]

If you mean in general (i.e. 'I really liked Evangelion and thought that Sayonara Zetsubou-Sensei was hysterical!'), I think that's a wee bit too far off-topic. Might as well ask what everyone's favorite poet is.

If you mean, 'what anime/manga was most insightful into LW-style thinking and problems', that's a little more challenging.

Death Note comes to mind as a possible exemplar of what humans really can do in the realm of action & thought, and perhaps what an AI in a box could do. Otaku no Video is useful as a cautionary tale about geekdom. And to round it off, I have a personal theory that Aria depicts a post-Singularity sysop scenario with humans who have chosen to live a nostalgic low-tech lifestyle* because that turns out to be la dolce vita.

* The high tech is there when it's really needed. Like how the Amish make full use of modern medicine, surgery, and tech when they need to.

Comment author: i77 19 February 2010 12:06:07PM *  1 point [-]

Fullmetal Alchemist Brotherhood has (SPOILER):

an almost literally unboxed unfriendly "AI" as main bad guy. Made by pseudomagical ("alchemy") means, but still.