private_messaging comments on The genie knows, but doesn't care - Less Wrong

54 points. Post author: RobbBB, 06 September 2013 06:42AM


Comment author: linkhyrule5 10 September 2013 10:52:27PM 2 points [-]

Yes, but that's really not that hard. For starters, you can do a better job of picking your targets.

The AI-box experiment is often run with intelligent, rational people with money on the line and an obvious right answer; it's a whole lot more impossible than picking the right uneducated family to sell your snake oil to.

Comment author: private_messaging 10 September 2013 10:58:50PM *  0 points [-]

Ohh, come on. Cyclical reasoning here. You think Yudkowsky is not a crank, so you think the folks that play that silly game with him are intelligent and rational (by the way, plenty of people who get duped by anti-vaxxers are of above-average IQ), and so you get more evidence that Yudkowsky is not a crank. Cyclical reasoning doesn't persuade anyone who isn't already a believer.

You need non-cyclical reasoning. Which would generally be something where you aren't the one having to explain to people that the achievement in question is profound.

Comment author: linkhyrule5 10 September 2013 11:04:30PM 1 point [-]

You need non-cyclical reasoning. Which would generally be something where you aren't the one having to explain to people that the achievement in question is profound.

This bit confuses me.

That aside:

You think Yudkowsky is not a crank, so you think the folks that play that silly game with him are intelligent and rational

Non sequitur. From the posts they make, everyone on this site seems to me to be sufficiently intelligent as to make "selling snake oil" impossible, in a cut-and-dried case like the AI box. Yudkowsky's own credibility doesn't enter into it.

Comment author: private_messaging 10 September 2013 11:41:14PM *  1 point [-]

Non sequitur.

I thought you wanted to persuade others.

From the posts they make, everyone on this site seems to me to be sufficiently intelligent as to make "selling snake oil" impossible, in a cut-and-dried case like the AI box.

So what do you think even happened, anyway, if you think the obvious explanation is impossible?

Comment author: linkhyrule5 10 September 2013 11:54:46PM 2 points [-]

I thought you wanted to persuade others.

Yes, but I don't see why this is relevant.

So what do you think even happened, anyway, if you think the obvious explanation is impossible?

Ah, sorry. This brand of impossible.

Comment author: private_messaging 11 September 2013 09:02:26AM *  3 points [-]

Yes, but I don't see why this is relevant.

Originally, you were hypothesising that the problem with persuading others would be the possibility that Yudkowsky lied about AI box powers. I pointed out the possibility that this experiment is far less profound than you think it is. (Though frankly, I do not know why you think it is so profound.)

Ah, sorry. This brand of impossible.

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place. I don't think IQ is so strongly protective against deception, for example, and I do not think that you can assess something based on how the postings look to you with sufficient reliability as to overcome Gaussian priors very far from the mean.

edit: example. I would deem it quite unlikely that Yudkowsky could, for example, score highly in a programming contest with competent participants, or on any other conventional, validated, reliable metric of technical expertise and ability, under good contest rules (i.e. excluding the possibility of external assistance). So if he did something like that, I'd be quite surprised, and would lower my confidence in whatever models deemed it impossible; good old Bayes. I'm far more confident in the validity of those conventional metrics (and in the lack of alternate modes of passing, such as persuasion) than in my assessment, so my assessment would change the most. Meanwhile, when it's some unconventional game, well, even if I thought the game was difficult, I'd be much less confident in the reasoning "it looks hard so it must be hard" than in the low prior against exceptional performance.
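
A minimal sketch of that "good old Bayes" step, in Python. The setup and all numbers are invented purely for illustration; nothing here is from the original comment:

# Two-hypothesis Bayes update: either the model that called the event
# "impossible" is right, or it is wrong. All probabilities are made up.
def posterior_model_correct(prior, p_event_if_right, p_event_if_wrong):
    """P(model is right | event observed), by Bayes' rule."""
    joint_right = prior * p_event_if_right
    joint_wrong = (1 - prior) * p_event_if_wrong
    return joint_right / (joint_right + joint_wrong)

# Start 90% confident in a model that deems the event near-impossible
# (it assigns the event probability 0.001), against a rival view on
# which the event is merely uncommon (probability 0.2).
print(posterior_model_correct(0.90, 0.001, 0.2))  # ~0.04

One observed "impossibility" drops confidence in the model from 0.90 to about 0.04: the shakier the reasons behind the "impossible" verdict, the more the update lands on the reasoning rather than on anything else.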

Comment author: nshepperd 12 September 2013 12:28:16PM *  1 point [-]

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place. I don't think IQ is so strongly protective against deception, for example, and I do not think that you can assess something based on how the postings look to you with sufficient reliability as to overcome Gaussian priors very far from the mean.

Further, in this case the whole purpose of the experiment was to demonstrate that an AI could "take over a gatekeeper's mind through a text channel" (something previously deemed "impossible"). As far as that goes it was, in my view, successful.

Comment author: Peterdjones 12 September 2013 12:48:15PM 0 points [-]

something previously deemed "impossible"

It's clearly possible for some values of "gatekeeper", since some people fall for 419 scams. The test is a bit meaningless without information about the gatekeepers.

Comment author: linkhyrule5 11 September 2013 02:52:19PM 1 point [-]

Originally, you were hypothesising that the problem with persuading others would be the possibility that Yudkowsky lied about AI box powers. I pointed out the possibility that this experiment is far less profound than you think it is. (Though frankly, I do not know why you think it is so profound.)

Still have no idea what you're talking about. What I originally said was: "the people who talk to Yudkowsky are intelligent" does not follow from "Yudkowsky is not a crank"; I independently judge those people to be intelligent.

Whatever the brand, any "impossibilities" that happen should lower your confidence in the reasoning that deemed them "impossibilities" in the first place.

"Impossible," here, is used in the sense that "I have no idea where to start thinking about where to start thinking about how to do this." It is clearly not actually impossible because it's been done, twice.

And point taken about the contest.

Comment author: private_messaging 12 September 2013 01:53:33PM *  0 points [-]

I thought your "impossible" at least implied "improbable" under some sort of model.

edit: and as for having no idea, you just need to know the shared religious-ish context. Which these folks generally keep hidden from a causal observer.

Comment author: linkhyrule5 12 September 2013 05:08:49PM 1 point [-]

Impossible is being used as a statement of difficulty. Someone who has "done the impossible" has obviously not actually done something impossible, merely done something that I have no idea where to start trying to do.

Seeing that "it is possible to do" doesn't seem like it would have much effect on my assessment of how difficult it is, after the first. It certainly doesn't have match effect on "It is very-very-difficult-impossible for linkhyrule5 to do such a thing."

and as for having no idea, you just need to know the shared religious-ish context. Which these folks generally keep hidden from a causal observer.

What?

First, I'm pretty sure you mean "casual." Second, I'm hardly a casual observer, though I haven't read everything either. Third, most religions don't let their leading figures (or much of anyone, really) change their minds on important things...

Comment deleted 12 September 2013 05:17:32PM *  [-]
Comment author: Juno_Watt 12 September 2013 06:53:17AM 0 points [-]

Some folks on this site have accidentally bought unintentional snake oil in The Big Hoo Hah That Shall Not Be Mentioned. Only an intelligent person could have bought that particular puppy.

Comment author: linkhyrule5 12 September 2013 07:28:57AM 0 points [-]

Granted. And it may be that additional knowledge/intelligence makes you a more vulnerable Gatekeeper.

Comment author: Peterdjones 12 September 2013 08:24:20AM 0 points [-]

Trying to think this out in terms of levels of smartness alone is very unlikely to be helpful.

Comment author: linkhyrule5 12 September 2013 05:10:12PM 0 points [-]

Well, yes. It is a factor, no more, no less.

My point is, there is a certain level of general competence after which I would expect convincing someone with an OOC (out-of-character) motive to let an IC (in-character) AI out to be "impossible," as defined below.

Comment author: MugaSofer 11 September 2013 05:00:32PM 0 points [-]

plenty of people who get duped by anti-vaxxers are of above-average IQ

But less than half of them, I'll wager. This is clearly an abuse of averages.

Comment author: private_messaging 11 September 2013 05:34:30PM *  6 points [-]

I wouldn't wager too much money on that one: http://pediatrics.aappublications.org/content/114/1/187.abstract

Results. Undervaccinated children tended to be black, to have a younger mother who was not married and did not have a college degree, to live in a household near the poverty level, and to live in a central city. Unvaccinated children tended to be white, to have a mother who was married and had a college degree, to live in a household with an annual income exceeding $75 000, and to have parents who expressed concerns regarding the safety of vaccines and indicated that medical doctors have little influence over vaccination decisions for their children.

And in any case the point is that any correlation between IQ and not being prone to getting duped like this is not perfect enough to deem anything particularly unlikely.

Comment author: MugaSofer 12 September 2013 03:40:32PM *  1 point [-]

Hmm. Yeah, that's hardly conclusive, but I think I was actually failing to update there. Now that you mention it, I seem to recall that both conspiracy theorists and cult victims skew toward higher IQ. I was clearly quite overconfident there.

And in any case the point is that any correlation between IQ and not being prone to getting duped like this is not perfect enough to deem anything particularly unlikely.

Wasn't the point that

intelligent, rational people with money on the line and an obvious right answer

wasn't enough, actually? That seems like a much stronger claim than "it's really hard to fool high-IQ people".

Comment author: Nornagest 11 September 2013 05:45:55PM *  1 point [-]

I imagine that says more about the demographics of the general New Age belief cluster than it does about any special IQ-based appeal of vaccination skepticism.

There probably are some scams or virulent memes that prey on insecurities strongly correlated with high IQ, though. I can't think of anything specific offhand, but the fringes of geek culture are probably one of the better places to start looking.

Comment author: private_messaging 11 September 2013 05:50:28PM *  2 points [-]

Well, the way I see it, outside of very high IQ combined with an education covering multiple topics of biochemistry, the effects of intelligence are small and easily dwarfed by things like those demographic correlations.

There probably are some scams or virulent memes that prey on insecurities specific to high-IQ people, though. I can't think of anything specific offhand

Free energy scams. Hydrinos, cold fusion, magnetic generators, perpetual motion, you name it. edit: or in medicine, counterintuitive stuff like sitting in an old uranium mine inhaling radon, then having so much radon progeny plate out that it sets off nuclear-material smuggling alarms. Naturalistic fallacy stuff in general.

Comment author: Gurkenglas 11 September 2013 07:21:37PM *  0 points [-]

Cryonics. ducks and runs

Edit: It was a joke. Sorryyyyyy

Comment author: MugaSofer 12 September 2013 03:44:26PM 0 points [-]

That is more persuasive to high-IQ people, but, I think, only insofar as intelligence allows one to gain better rationality skills. And if we're including that, there are plenty of other, facetious examples that come into play.

Also: ha ha. How hilarious. I would love to see why you class cryonics as a scam, but sadly I'm fairly certain it would be one of the standard mistakes.

Comment author: shminux 10 September 2013 11:30:27PM 0 points [-]

Cyclical reasoning here.

You probably mean "circular".