
In praise of fake frameworks

Valentine · 11 July 2017 02:12AM · 14 points

Related to: Bucket errors, Categorizing Has Consequences, Fallacies of Compression

Followup to: Gears in Understanding


I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.


I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the "pitfalls" can even sometimes be epistemically useful.


Here I want to share why. This is for two reasons:


  • I think fake framework use is a wonderful skill. I want it represented more in rationality in practice. Or, I want to know where I'm missing something, and Less Wrong is a great place for that.

  • I'm building toward something. This is actually a continuation of Gears in Understanding, although I imagine it won't be at all clear here how. I need a suite of tools in order to describe something. Talking about fake frameworks is a good way to demo tool #2.


With that, let's get started.

Comment author: Valentine 01 June 2017 03:36:15AM 4 points

I think there's another option to look for.

Sometimes the way you're thinking makes assumptions you're not noticing about how to draw lines in the first place. And they see that, and they see that your assumptions are blocking you from understanding. But they don't know how to explain it in your paradigm. Which means it's easier to ask you to suspend your way of trying to understand and just try something different.

A silly example is with bike riding. Someone who has never ridden a bike before can study physics all they want, and dive into neurology to see how learning works in principle and how that interacts with trying… but most of that basically doesn't matter. You just have to get on the bike.

But I see this show up in aikido too. People want to understand how to do a certain movement, but some people insist that they understand through words and principles, and those people are the hardest to teach. I have to slow them down, ask them to stop trying so hard to think of it that way, and draw their attention to their bodies instead. Then their bodies can teach them new nuances of experience.

I really respect the power of analysis. I find it annoying when people who think vague thoughts defend their status and worldview by trying to make me dumber.

But from the inside, sometimes that's what it'll look like when a person who is in fact seeing a dimension you're not even operating in is trying to point that fact out to you.

Comment author: komponisto 29 May 2017 04:10:27AM 3 points

See this comment; most particularly, the final bullet point.

Comment author: Valentine 31 May 2017 01:04:55AM 0 points
Comment author: komponisto 29 May 2017 04:06:09AM 17 points

What convinced you of this?

A constellation of related realizations.

  • A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the "norms of discourse" of what I took to be my "ingroup" or "social context"; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.

  • A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.

  • A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible -- the epistemic usefulness of which should go without saying.

  • A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.

  • A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law -- the domains in which discourse is most tightly "regulated" and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!

Comment author: Valentine 31 May 2017 01:04:15AM 10 points

Cool. Let's play.

I notice you make a number of claims, but that of the ones I disagree with, none of them have "crux nature" for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn't change my stance.

(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I'll focus on offering you a pathway by which you could convince me.)

But if I dig a bit, I think I see a hint of a possible double crux. You say:

A sense that discourse norms, and norms of "civility" generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information.

I agree with a steelman version of this. (I don't think it is literally entirely distinct — but I also doubt you do, and I don't want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply "…and that's bad." Whereas I would add instead "…and that's good."

In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that's mostly okay. But when working out social dynamics (like, say, whether a person who's proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.

At which point I cease caring about "efficient transmission of information", basically because I think (a) the information being sent is secretly laced with social subtext that'll affect future transmissions as well as its own perceived truthiness, and (b) the "efficient" transmission is emotionally harder to receive.

So to be succinct, I claim that:

  • (1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
  • (2) I am persuadable as per (1). It's a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn't preserve civility on Less Wrong.
  • (3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that's a point where I am persuadable.

Your turn!

Comment author: komponisto 28 May 2017 07:36:41AM 4 points

norms of good discourse are more important than the content of arguments

In what represents a considerable change of belief on my part, this now strikes me as very probably false.

Comment author: Valentine 28 May 2017 07:37:12PM 1 point

I'm open. Clarify?

Comment author: Elo 27 May 2017 09:50:36PM 7 points

But unless and until I see evidence otherwise, I assume 18239018038528017428's intentions are not truth-seeking.

Evidence: time and energy put into the comment. Evidence: not staying silent when they could have.

I am not saying the offending comments are valid; instead, I am curious as to why you discounted what I identify as evidence.

Comment author: Valentine 27 May 2017 10:07:04PM 8 points

Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.

What goes through my mind here is, "Trolls spend a lot of time and energy making comments like this one too, and don't stay silent when they could, so I'm not at all convinced that those points are more consistent with a world where they're truth-seeking than they are with a world in which they're just trolling."

I still think that's basically true. So to me those points seem irrelevant.
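(To spell out the Bayes behind that, in my own rough gloss rather than anything Elo said: the strength of a piece of evidence is the likelihood ratio

$$\frac{P(\text{effortful, persistent comment} \mid \text{truth-seeking})}{P(\text{effortful, persistent comment} \mid \text{trolling})},$$

and since trolls also put time and energy into comments and also don't stay silent when they could, that ratio looks close to 1 to me, so the posterior barely moves.)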

I think what I mean is something more like, "Unless and until I see enough evidence to convince me otherwise…." I'll go back and edit for that correction.

Comment author: [deleted] 26 May 2017 08:43:41PM 26 points

a

Comment author: Valentine 27 May 2017 09:12:21PM 35 points

PSA:

Do not feed trolls.

In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.

I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one's dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.

So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.

I appreciate Duncan's attempts to do that conversion and speak to the converted form of the argument.

But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428's intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.

Ergo, request to all:

Do not feed trolls.

PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.

Comment author: Valentine 25 May 2017 11:54:25PM 4 points

I really like this. I enjoy your aesthetic and ambition.

[…]But something magical does accrue when you make the jump from 99% to 100%[…]

There's something about this whole section that nags me. I really, really like the aesthetic… and yet… there's something about how it's phrased here that inspires a wish in me to argue with you about what you said.

I think what you're trying to get at here is how, when you convert a "shades of grey" perspective into a "No, this either hits these standards or it doesn't" kind of discrete clarity, it's possible to switch from approximation to precision. And when you chain together steps that each have to work, you can tell what the output is much more clearly if you're getting each step to give a binary "Yes, I'm working properly" or "Nope, not quite meeting the defined standard."
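(To put rough numbers on that chaining intuition, with my own back-of-the-envelope arithmetic rather than anything from the post: a process built from $n$ steps that each work with probability $p$ works end-to-end with probability $p^n$. For a 100-step chain,

$$0.99^{100} \approx 0.37, \qquad 0.9999^{100} \approx 0.99, \qquad 1^{100} = 1,$$

so moving each step from "usually meets the standard" to "reliably meets the standard" is what makes the output of the whole system something you can count on.)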

And I think you're using this to suggest that Dragon Army should be a system with discretely clear standards and with each component of the system (i.e., each person) either (a) definitely meeting that standard or (b) recognizing where they don't and then building up to that standard. This makes the whole system dependable in a way you just cannot do if there are no clear discrete standards or if the system is lax about some component not meeting the standards (e.g., giving someone a pass for merely "trying").

I think this is what you mean when you say, "[…]the 'absolute' part is important." The word "absolute" is standing in for having these standards and endeavoring for literally 100% of the (finitely many, discrete) components of the system 100% meeting those standards.

Confirm/deny/improve?

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.

Yep, I agree. Free markets are a terrible strategy for opposing Moloch.

It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

I just want to add public corroboration on this point. Yes, Duncan encouraged vetting along these lines. My own answers round to "is good" in each case. I'm really just flat-out not worried about him leading Dragon Army.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone's building a coercive trap, it's everyone's problem.

I really like that you point out things like this.

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to[…]

I like the list, overall. I can give you a more detailed commentary in person rather than digging in here. Let me know if you'd prefer it done in public here. (Just trying not to overly tax public attention with personal impressions.)

[…]we are trying not to fall prey to GOODHART'S DEMON.

Heh. That reference made me laugh. :-) I like that as a focus, as will surprise you not at all.

Comment author: Duncan_Sabien 25 May 2017 10:24:41PM 7 points

Hmm, interesting. My self-model is somewhat incapable of burning out during this, due to an ability to run forever on spite (that's only somewhat tongue-in-cheek).

It's a solid point, though. If I condition on burnout, I think that Eli manages or not based on the level of specificity and concreteness that we managed to get in place in the first few weeks. Like, I don't think Eli is competent (yet) to create the thing, but I do think he's competent to oversee its maintenance and preservation. So that seems to put a somewhat higher priority on early systemization and scaffold-building than might have otherwise been in my plan.

Good question.

Edit: also, probably the closest analogue to this in my past is being the sole functioning RA on a dorm hall of ~30 high schoolers in a high-stress school environment. That was probably within the same order of magnitude of juggling, once you account for the fact that my increase in skill since then is balanced by the increase in complexity/responsibility. I did a lot to try to manage the experience of those thirty people.

Comment author: Valentine 25 May 2017 11:19:30PM 7 points

FWIW, my model of Duncan agrees with his model of himself here. I don't expect him to burn out doing this.

…and even if he does, I expect that the combo of Eli plus the sort of people I imagine being part of Dragon Army would pull it through. Not guaranteed, but with a strong enough chance that I'm basically not worried about a failure mode along the lines of "Flops due to Duncan burnout and subsequent systems failures."

Comment author: Duncan_Sabien 25 May 2017 09:57:50PM 2 points

"norms encouraging confession/absolution for sins" is a somewhat ... connotation-laden ... phrase, but that's a big part of it. For instance, one of the norms I want to build is something surrounding rewarding the admission of a mistake (the cliff there is people starting to get off on making mistakes to get rewarded, but I think we can dodge it), and a MAJOR part of the regular check-ins and circles and pair debugs will be a focus on minimizing the pain and guilt of having slipped up, plus high-status people leading the way by making visible their own flaws and failings.

+1 for noticing and concern. Do you have any concrete tweaks or other suggestions that you think might mitigate?

Also: "absolute" is probably the wrong word, yeah. What I'm gesturing toward is the qualitative difference between 99% and 99.99%.

Comment author: Valentine 25 May 2017 11:16:16PM 5 points

I am aesthetically very skeptical of phrases like "absolutely reliable" (in Problem 4). I don't think it's possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.

[…]

Also: "absolute" is probably the wrong word, yeah. What I'm gesturing toward is the qualitative difference between 99% and 99.99%.

There's definitely a qualitative shift for me when something moves from "This is very likely to happen" to "This is a fact in the future and I'll stop wondering whether it'll happen."

While I think it's good to remember that 0 and 1 are not probabilities, I also think it's worthwhile to remember that in a human being they can be implemented as something kind of like probabilities. (Otherwise Eliezer's post wouldn't have been needed!) Even if in a Bayesian framework we're just moving the probability beyond some threshold (like Duncan's 99.99%), it feels to me like a discrete shift to dropping the question about whether it'll happen.
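(For concreteness, and this is my framing rather than a quote from Eliezer: that post works in log odds, where

$$\operatorname{logodds}(p) = \log_2 \frac{p}{1-p}, \qquad \operatorname{logodds}(0.99) \approx 6.6 \text{ bits}, \qquad \operatorname{logodds}(0.9999) \approx 13.3 \text{ bits},$$

and $p = 1$ would sit at infinitely many bits of evidence, which is why it isn't treated as a probability. The discrete "stop wondering" shift is something the human implementation layers on top of that continuous scale.)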

I think that's a fine time to use a word like "absolute", even if only aesthetically.
