Comment author: SilasBarta 18 February 2010 11:05:18PM 2 points [-]

Oh, look honey: more proof wine tasting is a crock:

A French court has convicted 12 local winemakers of passing off cheap merlot and shiraz as more expensive pinot noir and selling it to undiscerning Americans, including E&J Gallo, one of the United States' top wineries.

Cue the folks claiming they can really tell the difference...

Comment author: jpet 21 February 2010 06:25:12AM *  2 points [-]

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

They were passing off as expensive something that's actually cheap. Where else would that work so easily, for so long?

I think it's closer to say they were passing off as cheap something that's actually even cheaper.

Switch the food item and see if your criticism holds:

Wonderbread, America's top bread maker, was conned into selling inferior bread. So-called "gourmets" never noticed the difference! Bread tasting is a crock.

Comment author: jpet 15 February 2010 07:38:15PM *  11 points [-]

Part of the problem stems from different uses of the word "caution".

There is a range of possible outcomes for the earth's climate (and the resulting cost in lives and money) over the next century, from "everything will be fine" to "catastrophic"; there is also uncertainty over the costs and benefits of any given intervention. So what should we do?

Some say, "Caution! We don't know what's going to happen; let's not change things too fast. Keep our current policies and behaviors until we know more."

Others say, "Caution! We don't know what's going to happen, and we're already changing things (the atmosphere) very quickly indeed. We need to move quickly politically and economically in order to slow down that change."

For most people, it seems, caution means: assume things will continue on more or less the same and be careful about changing your behavior, rather than seek to avoid a high risk of catastrophic loss.

Discussions about runaway AI often take a similar turn. People will come up with a list of reasons why they think it might not be a problem: maybe the human brain already operates near the physical limit of computation; maybe there's some ineffable quantum magic thingy that you need to get "true AI"; maybe economics will continue to work just like it does in econ 101 textbooks and guarantee a soft transition; maybe it's just a really hard problem and it will be a very long time before we have to worry about it.

Maybe. But there's no good reason to believe any of those things are true, and if they aren't, then we have a serious concern.

Personally, I think it's like we're driving blindfolded with the accelerator pressed to the floor. There's a guy in the other seat who says he can see out the window, and he's yelling "I think there's a cliff up ahead--slow down!" We're suggesting he not be too hasty.

But I can see the other side, too: if we radically changed policy every time some crank declared that doom was at hand, we'd be much worse off.

In response to Logical Rudeness
Comment author: jpet 01 February 2010 11:49:35PM *  2 points [-]

Another form of argumentus interruptus is when the other suddenly weakens their claim, without acknowledging the weakening as a concession.

I used to do this quite often, usually in personal conversations rather than online, because I would get caught up in trying to win. I didn't really notice I was doing it until I heard someone grumbling about such behavior and realized I was among the guilty. Now I try to catch myself before retreating, and make sure to acknowledge the point.

So not much to add, other than the encouraging observation that people can occasionally improve their behavior by reading this sort of stuff.

Comment author: jpet 12 January 2010 11:00:16PM 3 points [-]

It seems like you missed one hypothesis: maybe you're mistaken about the people in question, and they actually never were all that intelligent. They achieved their status via other means. It's an especially plausible error because they have high status--surely they must have got where they are by dint of great intellect!

Comment author: jpet 24 December 2009 04:51:54PM 0 points [-]

Define a "representative" item sample as one coming from a study containing explicit statements that (a) a natural environment had been defined and (b) the items had been generated by random sampling of this environment.

Can you elaborate on what this actually means in practice? It doesn't make much sense to me, and the paper you linked to is behind a paywall.

(It doesn't make much sense because I don't see how you could rigorously distinguish between a "natural" or "unnatural" environment for human decision-making. But maybe they're just looking for cases where experimenters at least tried, even without rigor?)

Comment author: komponisto 13 December 2009 05:29:42AM 0 points [-]

Serious nitpicking going on here. The whole point of my post is that from the information provided, one should arrive at probabilities close to what I said.

I don't have appreciably more info than many who participated in my survey, and certainly not more than the jury in Perugia.

Comment author: jpet 14 December 2009 08:05:02AM *  2 points [-]

Serious nitpicking going on here. The whole point of my post is that from the information provided, one should arrive at probabilities close to what I said.

It's not "nitpicking" to calibrate your probabilities correctly. If someone were to answer innocent with probability 0.999, they should be wrong about one time in a thousand.

So what evidence was available to achieve such confidence? No DNA, no bloodstains, no phone calls, no suspects fleeing the country, no testimony. Just a couple of websites. People make stuff up on websites all the time. I wouldn't have assigned .999 probability to the hypothesis that there even was a trial if I hadn't heard of it (glancingly) prior to your post.

[edit: I'm referring only to responders who, like me, based their answer on a quick read of the links you provided. Of course more evidence was available for those who took the time to follow up on it, and they should have had correspondingly higher confidence. I don't think your answer was wrong based on what you knew, but it would have been horribly wrong based on what we knew.]
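The calibration claim above can be checked numerically: a perfectly calibrated forecaster who asserts each of many independent claims with confidence 0.999 should be wrong at roughly a 1-in-1000 rate. A minimal simulation sketch (the 0.999 figure comes from the comment; the sample size and seed are arbitrary choices for illustration):

```python
import random

random.seed(0)

def calibration_error_rate(p, n_claims):
    """Simulate a perfectly calibrated forecaster: each claim is true
    with probability p, and the forecaster asserts every claim with
    confidence p. Returns the fraction of claims they got wrong."""
    errors = sum(1 for _ in range(n_claims) if random.random() > p)
    return errors / n_claims

rate = calibration_error_rate(0.999, 1_000_000)
print(rate)  # hovers around 0.001, i.e. wrong about one time in a thousand
```

The point of the exercise: if your long-run error rate on "0.999 confident" claims is noticeably above one in a thousand, the 0.999 was overconfident, whatever evidence you thought you had.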

Comment author: gwern 12 December 2009 03:15:30AM 2 points [-]

I believe Hanson's paper on 'Bayesian wannabes' shows that even only partially rational agents must agree about a lot.

Comment author: jpet 13 December 2009 04:31:20AM 0 points [-]

I've seen the paper, but it assumes the point in question in the definition of partially rational agents in the very first paragraph:

If these agents agree that their estimates are consistent with certain easy-to-compute consistency constraints, then... [conclusion follows].

But people's estimates generally aren't consistent with his constraints, so even for someone who is sufficiently rational, it doesn't make any sense whatsoever to assume that everyone else is.

This doesn't mean Robin's paper is wrong. It just means that faced with a topic where we would "agree to disagree", you can either update your belief about the topic, or update your belief about whether both of us are rational enough for the proof to apply.
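The choice described here can be framed as a simple Bayesian update over the hypothesis "we are both rational enough for the theorem to apply." A toy sketch (all numbers are assumed purely for illustration, not from the comment or Hanson's paper):

```python
def posterior_rational(prior_r, p_disagree_given_r, p_disagree_given_not_r):
    """Bayes' rule: P(both rational | persistent disagreement).

    prior_r: prior probability that both parties meet the theorem's
             rationality conditions.
    p_disagree_given_r / p_disagree_given_not_r: how likely persistent
             disagreement is under each hypothesis.
    """
    num = p_disagree_given_r * prior_r
    den = num + p_disagree_given_not_r * (1 - prior_r)
    return num / den

# Illustrative numbers: disagreement is rare between true Bayesians
# but common otherwise, so observing it mostly shifts mass toward
# "at least one of us isn't rational enough."
post = posterior_rational(0.5, 0.05, 0.9)
print(round(post, 3))  # about 0.053
```

Under these (made-up) numbers, persistent disagreement drops the "both rational" hypothesis from 50% to about 5%, which is the second branch of the choice the comment describes.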

Comment author: jpet 12 December 2009 03:08:53AM *  4 points [-]

I think there's another, more fundamental reason why Aumann agreement doesn't matter in practice. It requires each party to assume the other is completely rational and honest.

Acting as if the other party is rational is good for promoting calm and reasonable discussion. Seriously considering the possibility that the other party is rational is certainly valuable. But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.

Comment author: jpet 09 December 2009 09:34:43PM *  10 points [-]

I was unfamiliar with the case. I came up with:

1. 20%
2. 20%
3. 96%
4. Probably in the same direction, but no idea how confident you were.

From reading other comments, it seems like I put a different interpretation on the numbers than most people. Mine were based on times in the past that I've formed an opinion from secondhand sources (blogs etc.) on a controversial issue like this, and then later reversed that opinion after learning many more facts.

Thus, about 1 time in 5 when I'm convinced by a similar story of how some innocent person was falsely convicted, then later get more facts, I change my mind about their innocence. Hence the 20%.

I don't think it's correct to put any evidential weight on the jury's ruling. Conditioning on the simple fact that their ruling is controversial screens off most of its value.
