Comment author: buybuydandavis 01 September 2015 03:58:19AM 6 points [-]

I suspect the anecdote about Eliezer only sidetracks your readers.

Typical Sneer Fallacy: When you ignore or are offended by criticism because you've mistakenly identified it as coming purely from sneer.

Hence the problem with sneer in actual criticism. Not that I'm opposed to sneering. Far from it. But you'd better be making solid points while you sneer. If you make a bunch of half ass points just to sneer, don't expect people to dig your one diamond out of that pile of crap. They will look elsewhere for criticism, if they're interested in it at all. And quite reasonably so.

EY writes:

But the thought that su3su2su1 could just walk through finding errors in every chapter is laughable, and since he's clearly making up most of it, you shouldn't be surprised that he's making up all of it. If somebody started posting a list of science errors by Scott Aaronson or Scott Alexander purporting to find errors in every post, I wouldn't expect to find even a single real one mixed in.

Yep. Don't expect to find diamonds in a pile of crap. Expect to find more crap.

Comment author: calef 01 September 2015 04:14:30AM 2 points [-]

I suspect how readers respond to my anecdote about Eliezer will fall along party lines, so to speak.

Which is kind of the point of the whole post. How one responds to the criticism shouldn't be a function of one's loyalty to Eliezer. Especially when su3su2u1 explicitly isn't just "making up most of" his criticism. Yes, his series of review-posts are snarky, but he does point out legitimate science errors. That he chooses to enjoy HPMOR via (c) rather than (a) shouldn't have any bearing on the true-or-false-ness of his criticism.

I've read su3su2u1's reviews. I agree with them. I also really enjoyed HPMOR. This doesn't actually require cognitive dissonance.

(I do agree, though, that snarkiness isn't really useful in trying to get people to listen to criticism, and often just backfires)

Typical Sneer Fallacy

10 calef 01 September 2015 03:13AM

I like going to see movies with my friends.  This doesn't require much elaboration.  What might is that I continue to go see movies with my friends despite the radically different ways in which my friends happen to enjoy watching movies.  I'll separate these movie-watching philosophies into a few broad and not necessarily all-encompassing categories (you probably fall into more than one of them, as you'll see!):

(a): Movie watching for what was done right.  The mantra here is "There are no bad movies." or "That was so bad it was good."  Every movie has something redeeming about it, or it's at least interesting to try to figure out what that redeeming and/or good thing might be.  This is the way that I watch movies, most of the time (say 70%).

 

(b): Movie watching for entertainment.  Mantra: "That was fun!".  Critical analysis of the movie does not provide any enjoyment.  The movie either succeeds in 'entertaining' or it fails.  This is the way that I watch movies probably 15% of the time.

 

(c): Movie watching for what was done wrong.  Mantra: "That movie was terrible."  The only enjoyment that is derived from the movie-watching comes from tearing the film apart at its roots--common conversation pieces include discussion of plot inconsistencies, identification of poor directing/cinematography/etc., and even alternative options for what could have 'fixed' the film, to the extent that the film could even be said to be 'fixed'.  I do this about ~12% of the time.

 

(d): Sneer. Mantra: "Have you played the drinking game?".  Vocal, public, moderately-drunken dog-piling on a film's flaws is the only way a movie can be enjoyed.  There's not really any thought put into the critical analysis.  The movie-watching is more an excuse to be rambunctious with a group of friends than it is to actually watch a movie.  I do this, conservatively, 3% of the time.

What's worth stressing here is that these are avenues of enjoyment.  Even when a (c) person watches a 'bad' movie, they enjoy it to the extent that they can talk at length about what was wrong with the movie. With the exception of the Sneer category, none of these sorts of critical analysis are done out of any sort of vindictiveness, particularly and especially (c).

So, like I said, I'm mostly an (a) person.  I have friends that are (a) people, (b) people, (c) people, and even (d) people (where being a (_) person means watching movies with that philosophy more than 70% of the time).

 

This can generate a certain amount of friction.  Especially when you really enjoy a movie, and your friend starts shitting all over it.

 

Or at least, that's what it feels like from the inside!  Because you might have really enjoyed a movie because you thought it was particularly well-shot, or it evoked a certain tone really well, but here comes your friend who thought the dialogue was dumb, boring, and poorly written.  Fundamentally, you and your friend are watching the movie for different reasons.  So when you go to a movie with 6 people who are exclusively (c), category (c) can start looking a whole lot like category (d) when you're an (a) or (b) person.

And that's no fun, because (d) people aren't really charitable at all.  It can be easy to translate in one's mind the criticism "That movie was dumb" into "You are dumb for thinking that movie wasn't dumb".  Sometimes the translation is even true!  Sneer Culture is a thing that exists, and while its connection to my 'Sneer' category above is tenuous, my word choice is intentional.  There isn't anything wrong with enjoying movies via (d), but because humans are, well, human, a sneer culture can bloom around this sort of philosophy.

Being able to identify sneer cultures for what they are is valuable.  Let's make up a fancy name for misidentifying sneer culture, because the rationalist community seems to really like snazzy names for things:

Typical Sneer Fallacy: When you ignore or are offended by criticism because you've mistakenly identified it as coming purely from sneer.  In reality, the criticism was genuine and actually true, to the extent that it represents someone's sincere beliefs, and is not simply from a place of malice.

 

This is the point in the article where I make a really strained analogy between the different ways in which people enjoy movies, and how Eliezer has pretty extravagantly committed the Typical Sneer Fallacy in this reddit thread.

 

Some background for everyone that doesn't follow the rationalist and rationalist-adjacent tumblr-sphere:  su3su2u1, a former physicist, now a data scientist, has a pretty infamous series of reviews of HPMOR.  These reviews are not exactly kind.  Charitably, I suspect this is because su3su2u1 is a (c) kind of person, or at least, that's the level at which he chose to interact with HPMOR.  For disclosure, I definitely (a)-ed my way through HPMOR.

su3su2u1 makes quite a few science criticisms of Eliezer.  Eliezer doesn't really take these criticisms seriously, and explicitly calls them "fake".  Then, multiple physicists come out of the woodwork to tell Eliezer he is wrong concerning a particular one involving energy conservation and quantum mechanics (I am also a physicist, and su3su2u1's criticism is, in fact, correct.  If you actually care about the content of the physics issue, I'd be glad to get into it in the comments.  It doesn't really matter, except insofar as this is not the first time Eliezer's discussions of quantum mechanics have gotten him into trouble) (Note to Eliezer: you probably shouldn't pick physics fights with the guy whose name is the symmetry of the standard model Lagrangian unless you really know what you're talking about (yeah yeah, appeal to authority, I know)).

I don't really want to make this post about stupid reddit and tumblr drama.  I promise.  But I think the issue was rather succinctly summarized, if uncharitably, in a tumblr post by nostalgebraist.

 

The Typical Sneer Fallacy is scary because it means your own ideological immune system isn't functioning correctly.  It means that, at least a little bit, you've lost the ability to determine what sincere criticism actually looks like.  Worse, not only will you not recognize it, you'll also misinterpret the criticism as a personal attack.  And this isn't singular to dumb internet fights.

Further, dealing with criticism is hard.  It's so easy to write off criticism as insincere if it means getting to avoid actually grappling with the content of that criticism:  You're red tribe, and the blue tribe doesn't know what it's talking about.  Why would you listen to anything they have to say?  All the blues ever do is sneer at you.  They're a sneer culture.  They just want to put you down.  They want to put all the reds down.

But the world isn't always that simple.  We can do better than that.

Comment author: [deleted] 23 May 2015 01:44:21PM 1 point [-]

One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.

Whatever reasoning technique is available to a super-intelligence is available to humans as well. No one is mandating that humans who build an AGI check their work with pencil and paper.

Comment author: calef 23 May 2015 11:30:05PM 2 points [-]

I mean, sure, but this observation (i.e., "We have tools that allow us to study the AI") is only helpful if your reasoning techniques allow you to keep the AI in the box.

Which is, like, the entire point of contention, here (i.e., whether or not this can be done safely a priori).

I think that you think MIRI's claim is "This cannot be done safely." And I think your claim is "This obviously can be done safely" or perhaps "The onus is on MIRI to prove that this cannot be done safely."

But, again, MIRI's whole mission is to figure out the extent to which this can be done safely.

Comment author: [deleted] 21 May 2015 10:15:16PM *  0 points [-]

The worry is that there will be such a huge gulf between how superintelligences reason versus how we reason that it would take prohibitively long to understand them.

That may be a valid concern, but it requires evidence as it is not the default conclusion. Note that quantum physics is sufficiently different that human intuitions do not apply, but it does not take a physicist a “prohibitively long” time to understand quantum mechanical problems and their solutions.

As to your laptop example, I'm not sure what you are attempting to prove. Even if no single engineer understands how every component of a laptop works, we are nevertheless very much able to reason about the systems-level operation of laptops, or the development trajectory of the global laptop market. When there are issues, we are able to debug them and fix them in context. If anything the example shows how humanity as a whole is able to complete complex projects like the creation of a modern computational machine without being constrained to any one individual understanding the whole.

Edit: gaaaah. Thanks Sable. I fell for the very trap of reasoning by analogy I opined against. Habitual modes of thought are hard to break.

Comment author: calef 21 May 2015 11:06:11PM *  1 point [-]

As far as I can tell, you're responding to the claim, "A group of humans can't figure out complicated ideas given enough time." But this isn't my claim at all. My claim is, "One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality." This is trivially true once the number of machines which are "smarter" than humans exceeds the total number of humans. The extent to which it is difficult to predict/model the "smarter" machines is a matter of contention. The precise number of "smarter" machines and how much "smarter" they need be before we should be "worried" is also a matter of contention. (How "worried" we should be is a matter of contention!)

But all of these points of contention are exactly the sorts of things that people at MIRI like to think about.

Comment author: calef 21 May 2015 08:51:29PM *  9 points [-]

This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just like string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This may be retreating to the motte's bailey, so to speak, but I don't think anyone seriously thinks that a superintelligence would be literally impossible to understand. The worry is that there will be such a huge gulf between how superintelligences reason versus how we reason that it would take prohibitively long to understand them.

I think a laptop is a good example. There probably isn't any single human on earth that knows how to build a modern laptop from scratch. There are computer scientists who know how the operating system is put together--how the operating system is programmed, how memory is written and retrieved from the various buses; there are other computer scientists and electrical engineers who designed the chips themselves, who arrayed circuits efficiently to dissipate heat and optimize signal latency. Even further, there are material scientists and physicists who designed the transistors and chip fabrication processes, and so on.

So, as an individual human, I don't know what it's like to know everything about a laptop all at once in my head, at a glance. I can zoom in on an individual piece and learn about it, but I don't know all the nuances for each piece--just a sort of executive summary. The fundamental objects with which I can reason have a sort of characteristic size in mindspace--I can imagine 5, maybe 6 balls moving around with distinct trajectories (even then, I tend to group them into smaller subgroups). But I can't individually imagine a hundred (I could sit down and trace out the paths of a hundred balls individually, of course, but not all at once).

This is the sense in which a superintelligence could be "dangerously" unpredictable. If the fundamental structures it uses for reasoning greatly exceed a human's characteristic size of mindspace, it would be difficult to tease out its chain of logic. And this only gets worse the more intelligent it gets.

Now, I'll grant you that the lesswrong community likes to sweep under the rug the great competition of timescales and "size"scales that are going on here. It might be prohibitively difficult, for fundamental reasons, to move from working-mind-RAM of size 5 to size 10. It may be that artificial intelligence research progresses so slowly that we never even see an intelligence explosion--just a gently sloped intelligence rise over the next few millennia. But I do think it's maybe not a mistake, but certainly naive, to just proclaim, "Of course we'll be able to understand them, we are generalized reasoners!".

Edit: I should add that this is already a problem for, ironically, computer-assisted theorem proving. If a computer produces a 10,000,000 page "proof" of a mathematical theorem (i.e., something far longer than any human could check by hand), you're putting a huge amount of trust in the correctness of the theorem-proving-software itself.
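The trust issue above can be sketched concretely: mechanically checking each step of a machine-generated proof is cheap even when the proof is enormous, but your confidence in the theorem is then bounded by your confidence in the checker itself. A toy sketch (the proof format and the single inference rule here are hypothetical, purely for illustration):

```python
# Toy proof checker: each step of a "proof" must follow from earlier
# facts by one hard-coded rule (transitivity of equality). Checking a
# 10,000,000-step proof this way is mechanical--but the theorem is
# only as trustworthy as this checker's own code.

def check_proof(premises, steps):
    """Each premise/step is a pair (a, c) asserting a == c.
    A step (a, c) is accepted iff some b links it: (a, b) and (b, c)
    are already known."""
    known = set(premises)  # facts we accept as given
    for a, c in steps:
        terms = {x for pair in known for x in pair}
        if any((a, b) in known and (b, c) in known for b in terms):
            known.add((a, c))  # step follows; record the new fact
        else:
            return False       # step doesn't follow: reject the proof
    return True

# A tiny "machine-generated" proof that w == z from chained equalities.
premises = [("w", "x"), ("x", "y"), ("y", "z")]
print(check_proof(premises, [("w", "y"), ("w", "z")]))  # True
print(check_proof(premises, [("w", "q")]))              # False
```

This mirrors how real proof assistants are engineered: a small trusted kernel does the checking, precisely so that the thing you must trust stays auditable even when the proofs are not.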

Comment author: Yvain 25 February 2015 09:44:40PM 9 points [-]

Why is Voldemort not getting rid of Harry in some more final way?

Even if he's worried killing Harry will rebound against him because of the prophecy somehow, he can, I don't know, freeze Harry? Stick Harry in the mirror using whatever happened to Dumbledore? Destroy Harry's brain and memories and leave him an idiot? Shoot Harry into space?

Why is "resurrect Harry's best friend to give him good counsel" a winning move here?

Comment author: calef 25 February 2015 09:54:36PM *  0 points [-]

Perhaps because this might all be happening within the mirror, thus realizing both Harry!Riddle's and Voldy!Riddle's CEVs simultaneously.

Comment author: calef 24 February 2015 08:16:46PM 5 points [-]

It seems like Mirror-Dumbledore acted in accordance with exactly what Voldemort wanted to see. In fact, Mirror-Dumbledore didn't even reveal any information that Voldemort didn't already know or suspect.

Odds of Dumbledore actually being dead?

Comment author: calef 23 February 2015 03:34:16AM *  16 points [-]

Honestly, the only "winning" strategy here is to not argue with people on the comments sections of political articles.

If you must, try to cast the argument in a way that avoids the standard red tribe / blue tribe framing. Doing this can be hard because people generally aren't in the business of having political debates with the end goal of dissolving an issue--they just want to signal their tribe--which is why arguing on the internet is often a waste of time.

As to the question of authority: how would you expect the conversation to go if you were an economist?

Me: I think money printing by the Fed will cause inflation if they continue like this.

Random commenter: Are you an economist?

Me: Yes actually, I have a PhD in The Economy from Ivy League University.

Random commenter (possible response 1): I don't believe you, and continue to believe what I believe.

Random commenter (possible response 2): Oh well that's one of the (Conservative / Liberal) (pick one) schools, they're obviously wrong and don't know what they're talking about.

Random commenter (possible response 3): Economists obviously don't know what they're talking about.

Again, it's a mix of Dunning-Kruger and tribal signalling. There's not actually any direction an appeal-to-authority debate can go that's productive because the challenger has already made up their mind about the facts being discussed.

For a handful of relevant lesswrong posts:

http://lesswrong.com/lw/axn/6_tips_for_productive_arguments/

http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/

http://lesswrong.com/lw/3k/how_to_not_lose_an_argument/

Comment author: avichapman 17 February 2015 05:02:36AM 3 points [-]

I noticed that too. It's often a sign of obliviation. My secondary hypothesis is that it was a mistake and will be corrected in a later update.

Comment author: calef 17 February 2015 05:22:02AM 3 points [-]

Yeah, it's already been changed:

A blank-eyed Professor Sprout had now risen from the ground and was pointing her own wand at Harry.

Comment author: calef 16 February 2015 02:32:50AM 7 points [-]

So when Dumbledore asked the Marauder's Map to find Tom Riddle, did it point to Harry?
