Mitchell_Porter comments on A cynical explanation for why rationalists worry about FAI - Less Wrong

25 Post author: aaronsw 04 August 2012 12:27PM


Comment author: Mitchell_Porter 06 August 2012 05:15:57AM 16 points [-]

Aaron, I currently place you in the category of "unconstructive critic of SI" (there are constructive critics). Unlike some unconstructive critics, I think you're capable of more, but I'm finding it a little hard to pin down what your criticisms are, even though you've now made three top-level posts and every one of them has contained some criticism of SI or Eliezer for not being fully rational.

Something else that they have in common is that none of them just says "SI is doing this wrong". The current post says "Here is my cynical explanation for why SI is doing this thing that I say is wrong". (Robin Hanson sometimes does this - introduces a new idea, then jumps to "cynical" conclusions about humanity because they haven't already thought of the idea and adopted it - and it's very annoying.) The other two posts introduce the criticisms in the guise of offering general advice on how to be rational: "Here is a rationality mistake that people make; by coincidence, my major example involves the founder of the rationality website where I'm posting this advice."

I suggest, first of all, that if your objective on this site is to give advice about how to be rational, then you need to find a broader range of examples. People here respect Eliezer, for very good reasons. If you do want to make a concentrated critique of how he has lived his life, then make a post about that, don't disguise it as a series of generic reflections on rationality which just happen to be all about him.

Personally I would be much more interested in what you have to say about the issue of AI. Do you even think AI is a threat to the human race? If so, what do you think we should do about it?

Comment deleted 06 August 2012 10:02:27PM *  [-]
Comment author: wedrifid 06 August 2012 11:09:18PM *  8 points [-]

OK, I believe we have more than enough information to consider him identified now:

  • Dmytry
  • private_messaging
  • JaneQ
  • Comment
  • Shrink
  • Allworkandnoplay

Those are the currently known sockpuppets of Dmytry. This one warrants no further benefit of the doubt. It is a known troll wilfully abusing the system. To put it mildly, this is something I would prefer not to see encouraged.

Comment author: gwern 06 August 2012 11:26:36PM 8 points [-]

I agree. Dmytry was OK; private_messaging was borderline, but he did admit to it and I'm loath to support the banning of a critical person who stays above the level of profanity and does occasionally make good points; JaneQ was unacceptable, but starting Comment after JaneQ was found out is even more unacceptable. Especially when none of the accounts were banned in the first place! (Were this Wikipedia, I don't think anyone would have any doubts about how to deal with an editor abusing multiple socks.)

Comment author: wedrifid 07 August 2012 06:00:31AM 5 points [-]

private_messaging was borderline but he did admit to it

Absolutely, and he also stopped using Dmytry. My sockpuppet aversion doesn't necessarily have a problem with abandoning one identity (for reasons such as the identity being humiliated) and working to establish a new one. Private_messaging earned a "Do Not Feed!" tag itself through consistent trolling but that's a whole different issue to sockpuppet abuse.

JaneQ was unacceptable

And even used in the same argument as his other account, with them supporting each other!

Comment author: Kawoomba 07 August 2012 10:06:24PM 5 points [-]

Private_messaging earned a "Do Not Feed!" tag itself through consistent trolling

What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides thought-provoking insights at an acceptable signal-to-noise ratio?

If I were to construct some model of his motives (e.g. signalling intellectual superiority by contradicting the alphas, which seems like a standard contrarian mindset), it might well be a good match. But frankly: why care? His points should stand or fall on their own merit, regardless of why he chose to make them.

He raised some excellent points regarding e.g. Solomonoff induction that I've yet to see answered (e.g. accepting simple models with assumed noise over complex models with assumed lower noise, given the enormously punishing discounting for length, which may only work out in theoretical complexity-class calculations and Monte Carlo approximations with a trivial solution), and while this is a CS-dominated audience, additional math proficiency should be highly sought after -- especially in contrarians, since it makes their criticisms that much more valuable.

Is he a consistent fountain of wisdom? No. Is anyone?

I will not defend sockpuppet abuse here, though; that's a different issue, and one where I'm on your side. Don't take this comment personally: the sentiment dates from when he had just two known accounts but was already met with high levels of "do not feed!", and your comment just now seemed as good a place as any to voice it.

Comment author: Wei_Dai 08 August 2012 08:50:10AM 4 points [-]

He raised some excellent points regarding e.g. Solomonoff induction that I've yet to see answered

Can you link to the original post or comment? Your restatement of whatever he wrote is not making much sense to me.

Comment author: Kawoomba 08 August 2012 10:02:56AM *  2 points [-]

Well, there is definitely some sort of Will Newsome-like projection technique going on, i.e. his comments (those that are on topic) are sometimes sufficiently opaque that the insight is generated by the reader filling in the gaps meaningfully.

The example I used was somewhat implicit in this comment:

You end up modelling a crackpot scientist with this. Pick simplest theory that doesn't fit the data, then distrust the data virtually no matter how much evidence is collected, and explain it as people conspiring, that's what the AI will do. Gets even worse when you are unable to determine minimum length for either theory (which you are proven unable).

The universal prior's discount for length is so severe (a description just 20 bits longer is discounted by a factor of 2^20, and what can you even say with 20 bits?) that this quote from Shane Legg's paper comes as little surprise:

"However it is clear that only the shortest program for will have much affect (sp) on [the universal prior]."

If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.

How well does that argument hold up to challenges? I'm not sure; I haven't thought AIXI through sufficiently with the map-territory divide in mind. But it surely is worthy of further consideration, which it did not get.
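To make the severity of that discount concrete, here is a minimal sketch (the program lengths are made-up numbers, purely for illustration):

```python
from fractions import Fraction

def prior_weight(length_bits):
    """Universal-prior weight assigned to a program of the given length: 2^-length."""
    return Fraction(1, 2 ** length_bits)

# Two hypothetical program lengths, differing by only 20 bits.
short_program, long_program = 1000, 1020

# The shorter program is favored by a factor of 2^20 -- about a million to one.
ratio = prior_weight(short_program) / prior_weight(long_program)
print(ratio)  # 1048576
```

This is just the arithmetic of the 2^-length prior, not a claim about any actual AIXI implementation; the point is how quickly a modest length difference becomes an overwhelming prior ratio.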

Here are some other comments that come to mind: this comment of his, which I interpreted as essentially referring to what I explained in my answering comment.

There's a variation of that point in this comment, third paragraph.

He also linked, in another comment, to this marvelous presentation by Marcus Hutter; the presentation unfortunately did not get the attention it clearly deserves.

There are comments I don't quite understand on a first reading, but which clearly go into the actual meat of the topic, which is a good direction.

My perspective is this: as long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds for me when substituting "meaningful argument" for "truth". Such arguments deserve answers, not ignoring, regardless of their source.

Comment author: Wei_Dai 08 August 2012 09:38:15PM *  3 points [-]

If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.

It looks to me like you're reading your own interpretation into what he wrote, because the sentence he wrote before "You end up modelling" was

they are not uniquely determined and your c can be kilobits long, meaning, one hypothesis can be given prior >2^1000 larger than another, or vice versa, depending to choice of the language.

which is clearly talking about another issue. I can give my views on both if you're interested.

On the issue private_messaging raises, I think it's a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don't trust that too much.

On the issue you raised, a hypothesis of "simple model + random errors" must still match the past history perfectly to avoid being discarded, and the exact errors would have to be part of the hypothesis (i.e., the program) and therefore count toward its length.
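That rebuttal can be sketched as a toy description-length comparison (all numbers invented for illustration): since the exact errors must be encoded in the hypothesis, the "simple model + errors" program grows with every observation it mispredicts, while the longer but correct model pays its complexity cost only once.

```python
def total_length(model_bits, error_bits_per_obs, n_obs):
    """Total program length: fixed model code plus encoded error corrections."""
    return model_bits + error_bits_per_obs * n_obs

crackpot = lambda n: total_length(200, 2, n)    # short model, 2 error bits per observation
correct  = lambda n: total_length(1200, 0, n)   # longer model, matches history exactly

# Early on, the crackpot hypothesis is shorter and hence favored by the prior...
assert crackpot(100) < correct(100)
# ...but past the crossover (here n = 500) the correct model wins for good.
assert crackpot(1000) > correct(1000)
```

Under these assumptions the crackpot hypothesis can only dominate temporarily; its per-observation error-encoding cost eventually swamps any fixed head start.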

My perspective is this: as long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds for me when substituting "meaningful argument" for "truth". Such arguments deserve answers, not ignoring, regardless of their source.

I defended private_messaging/Dmytry before for similar reasons, but the problem is that it's often not fun to argue with him. I do engage with him sometimes if I think I can draw out some additional insights or get him to clarify something, but now I tend not to respond just to correct something that I think is wrong.

Comment author: private_messaging 09 August 2012 11:31:00AM *  0 points [-]

On the issue private_messaging raises, I think it's a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don't trust that too much.

Are you picturing an AI that has simulated the multiverse from the big bang onward inside a single universe, and then just uses camera sense data to very rapidly pick the right universe? Well, yes, that will dispose of a 2^1000 prior factor very easily. Something that is instead, e.g., modelling humans with a minimum amount of guessing, without knowing what's inside their heads, and which can't really run any reductionist simulations at the level of quarks to predict its camera data, can have real trouble getting the fine details of its grand unified theory of everything right, and will most closely approximate a crackpot scientist. Furthermore, having to include a non-reductionist model of humans, it may even end up religious (feeding stuff into the human-mind model to build its theory of everything by intelligent design).

How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within the AI, which seems to me like a very conservative bound) is a highly complicated open problem. edit: and the very strong intuition I have is that you can't just dismiss this sort of stuff out of hand. So many ways it can fail. So few ways it can work great. And no rigour whatsoever in the speculations here.

Comment author: Vladimir_Nesov 08 August 2012 12:07:19PM 1 point [-]

Solomonoff induction never ignores observations.

Comment author: Kawoomba 08 August 2012 12:26:25PM 2 points [-]

One liners, eh?

It's not so much ignoring observations as testing models that allow for your sense data to be subject to both Gaussian noise and systematic errors, i.e. explaining part of the observations as sensory fuzziness.

In such a case, an overly simple model that posits e.g. some systematic error in its sensors may have an advantage over an actually correct albeit more complex model, due to the way the length penalty of the Universal Prior rapidly accumulates.

Imagine AIXI coming to the conclusion that the string it is watching is in fact partly output by a random string generator that intermittently takes over. If the competing (but potentially correct) model that works without such a random string generator needs just a megabit more space to specify, do the math.

I'll still have to think on it further. It's just not something to be dismissed out of hand, and just one of several highly relevant tangents (since it pertains to real-world applicability; if it's a design byproduct it might well translate to any Monte Carlo or assorted formulations). It might well turn out to be a non-issue.

Comment author: Vladimir_Nesov 07 August 2012 10:23:20PM 3 points [-]

Is he a consistent fountain of wisdom? No. Is anyone?

The fallacy of gray.

Comment author: Kawoomba 07 August 2012 10:33:14PM 2 points [-]

An uncharitable reading; note the "consistent", and the reference in the very first sentence to an acceptable (implied) signal-to-noise ratio.

Also, this may be biased, but I value relevant comments on algorithmic information theory particularly highly, and they are a rare enough commodity. We probably agree on that at least.

Comment author: wedrifid 08 August 2012 03:19:49AM *  2 points [-]

What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides thought-provoking insights at an acceptable signal-to-noise ratio?

Exactly. I often lament that the word 'troll' contains motive as part of its meaning. I try to avoid the word and convey "account to which Do Not Feed needs to be applied" without making any assertion about motive; motives are hard to prove.

As far as I'm concerned, if it smells like a troll, has amazing regenerative powers, creates a second self when attacked and loses a limb, and goes around damaging things I care about, then it can be treated like a troll. I care very little whether it is trying to rampage around and destroy things; I just want to stop it.

Comment deleted 09 August 2012 11:12:10AM *  [-]
Comment author: [deleted] 06 August 2012 11:41:05PM *  0 points [-]

I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses (if I think you guys are Scientology 2.0, I want to see if I can falsify that, OK?), though at some point I was really curious to see what you would do about two accounts in the same place talking in exactly the same style; that part was entirely unscientific, sorry about it.

Furthermore, the comments were predominantly rated above 0, and not through socks rating each other up (I would have wanted to see whether the first-vote effect is strong, but that would require far too much data). Sorry if there was any sort of disruption to anything.

I actually have significantly more respect for you guys now with regard to how you consider commentary, and consequently your non-cultishness. I needed a way to test hypotheses, and that absolutely requires some degree of statistical independence. I do still honestly think this FAI idea is pretty damn misguided (and potentially dangerous to boot), but I am allowing it much more benefit of the doubt.

edit: actually, can you reset the email of Dmytry to dmytryl at gmail? I may want to post an article sometime in the future (I will try to offer a balanced overview as I see it, and it will have plus points as well. Seriously.).

Also, on Eliezer: I really hate his style but like his honesty, and it's a very mixed feeling all around. I mean, it's atrocious to just go ahead and say "whoever didn't get my MWI stuff is stupid"; that's the sort of thing that evaporates out a LOT of people, and if you e.g. make some mistakes, you risk evaporating the meticulous people. On the other hand, if that's what he feels, that's what he feels; to conceal it would be evil.

Comment author: gwern 06 August 2012 11:52:43PM *  8 points [-]

I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses

So presumably we can expect a post soon explaining the background & procedure, giving data and perhaps predictions or hash precommitments, with an analysis of the results; all of which will also demonstrate that this is not a post hoc excuse.

edit: actually, can you reset the email of Dmytry to dmytryl at gmail ?

I can't, no. I'd guess you'd have to ask someone at Trike, and I don't know if they'd be willing to help you out...

Comment author: [deleted] 06 August 2012 11:56:25PM *  2 points [-]

Well, basically, I expected much more negative ratings, in which case I'd just have stopped posting on those accounts. I couldn't actually set up a proper study without a zillion socks, and that would be serious abuse. I am currently quite sure you guys are not an Eliezer cult. You might be a bit of an idea cult, but not terribly much. edit: Also, since you guys are not an Eliezer cult, and since he actually IS pretty damn good at talking people into silly stuff, that is also evidence he's not building a cult.

re: email address, doesn't matter too much.

edit: Anyhow, I hope you do consider the content of the comments a benefit; actually, I think you do. E.g. in my comment against the idea of overcoming some biases, I finally nailed what bugs me so much about the 'overcomingbias' title and the carried-over cached concept of overcoming them.

edit: do you want me to delete all the socks? No problem either way.

Comment author: CarlShulman 25 December 2012 02:59:49PM 1 point [-]
Comment author: gwern 25 December 2012 10:30:03PM 2 points [-]

Agree; that's either Dmytry or someone deliberately imitating him.

Comment author: CarlShulman 15 September 2012 03:04:30AM *  1 point [-]

And here's one more (judging by content, style, and similar linguistic issues): Shrink. Also posting in the same discussions as private_messaging.

Comment author: gwern 15 September 2012 03:20:03AM 0 points [-]

It certainly does sound like him, although I didn't notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.

Comment author: CarlShulman 15 September 2012 04:08:09AM 1 point [-]

"For the risk estimate per se" "The rationality and intelligence are not precisely same thing." "To clarify, the justice is not about the beliefs held by the person." "The honesty is elusive matter, "

Characteristic misuse of "the."

"You can choose any place better than average - physicsforums, gamedev.net, stackexchange, arstechnica observatory,"

Favorite forums from other accounts.

Comment author: gwern 15 September 2012 04:19:25AM 1 point [-]

Ah yes, I forgot Dmytry had tried discussing LW on the Ars forums (and claiming we endorsed terrorism, etc.; he got shut down pretty well by the other users). Yeah, how likely is it that they would both like the Ars forums...

Comment author: wedrifid 15 September 2012 04:43:59AM 0 points [-]

It certainly does sound like him, although I didn't notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.

He did open by criticising many-worlds, and in subsequent posts he had an anti-LW and anti-SIAI chip on his shoulder that couldn't plausibly have developed in the time the account had existed.

Comment author: wedrifid 15 September 2012 03:19:08AM 0 points [-]

Well spotted. I hadn't even noticed the Shrink account existing, much less identified it by the content. Looking at the comment history I agree it seems overwhelmingly likely.

Comment author: NancyLebovitz 07 August 2012 12:42:52PM 4 points [-]

One reason I have respect for Eliezer is HPMOR: there's a huge amount of fan fiction out there, and writing something which impresses both a lot of people who like fan fiction and a lot of people who don't is no small achievement.

Also, it's the only story I know of which gets away with such huge shifts in emotional tone. (This may be considered a request for recommendations of other comparable works.)

Furthermore, Eliezer has done a good bit to convince people to think clearly about what they're doing, and sometimes even to make useful changes in their lives as a result.

I'm less sure that he's right about FAI, but those two alone are enough to make for respect.

Comment author: [deleted] 12 August 2012 12:16:38AM 6 points [-]

In the context of LessWrong and FAI, Yudkowsky's fiction writing abilities are almost entirely irrelevant.

Comment author: Bruno_Coelho 11 August 2012 12:59:10AM *  0 points [-]

Eliezer has done a good bit to convince people to think clearly about what they're doing

This is a source of disagreement. "Think clearly and change behavior" is not a good slogan; it is used by numerous groups. But (and the inferential distance here is not clear from the beginning) there are lateral beliefs: computational epistemology, specificity, humans as imperfect machines, etc.

In a broad context, even education in general could fit this phrase, especially for people with no training in gathering data.