
Comment author: 28 October 2017 07:55:39PM 0 points [-]

Agreed, causality is important!

Comment author: 31 October 2017 06:32:09PM 0 points [-]

What do you mean, in this context?

## Interactive model knob-turning

3 28 October 2017 07:42PM

(Please discuss on LessWrong 2.0)

(Cross-posted from my medium channel)

When you are trying to understand something by yourself, a useful skill to check your grasp on the subject is to try out the moving parts of your model and see if you can simulate the resulting changes.

Suppose you want to learn how a rocket works. At the bare minimum, you should be able to calculate the speed of the rocket at a given time after launch. But can you tell what happens if Earth's gravity were stronger? Weaker? What if the atmosphere had no oxygen? What if we replaced the fuel with Diet Coke and Mentos?
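As a toy illustration (the numbers are made up, and the physics ignores drag and the rocket losing fuel mass), a model with explicit knobs might look like a function whose keyword arguments are the knobs:

```python
def rocket_velocity(t, thrust=35_000.0, mass=2_000.0, g=9.81):
    """Toy model: speed (m/s) after t seconds of constant thrust.
    Ignores drag and the rocket getting lighter as fuel burns.
    Every keyword argument is a knob you can turn."""
    return (thrust / mass - g) * t

v_earth = rocket_velocity(10)              # baseline launch
v_heavy = rocket_velocity(10, g=2 * 9.81)  # turn the gravity knob up
assert v_heavy < v_earth                   # heavier planet, slower ascent
```

If turning the `g` knob (or zeroing out `thrust`) gives you a result you can't explain, that's a gap in your grasp of the model.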

To really understand something, it's not enough to be able to predict the future in a normal, expected, ceteris paribus scenario. You should also be able to predict what happens when several variables are changed in several ways, or, at least, point to which calculations need to be run to arrive at such a prediction.

Douglas Hofstadter and Daniel Dennett call that "turning the knobs". Imagine your model as a box with several knobs, where each knob controls one aspect of the modeled system. You don't have to be able to turn all the possible knobs to all possible values and still get a sensible, testable and correct answer, but the more, the better.

Doug and Dan apply this approach to thought experiments and intuition pumps, as a way to explore possible answers to philosophical questions. In my experience, this skill is also effective when applied to real world problems, notably when trying to understand something that is being explained by someone else.

In this case, you can run this knob-turning check interactively with the other person, which makes it far more powerful. If someone says “X+Y = Z” and “X+W = Z+A”, it’s not enough to mentally turn the knobs and calculate “X+Y+W = Z+A+B”. You should do that, then actually ask the explainer: “Hey, let me see if I get what you mean: for example, X+Y+W would be Z+A+B?”

This interactive model knob-turning has been useful to me in many walks of life, but the most common and mundane application is helping out people at work. In that context, I identify six effects which make it helpful:

## 1) Communication check: maybe you misunderstood and actually X+W = Z-A

This is useful overall, but very important if someone uses metaphor. Some metaphors are clearly vague and people will know that and avoid them in technical explanations. But some metaphors seem really crisp for some people but hazy to others, or worse, very crisp to both people, but with different meanings! So take every metaphor as an invitation to interactive knob-turning.

To focus on communication check, try rephrasing their statements, using different words or, if necessary, very different metaphors. You can also apply a theory in different contexts, to see if the metaphors still hold.

For example, if a person talks about a computer system as if it were a person, I might try to explain the same thing in terms of a group of trained animals, or a board of directors, or dominoes falling.

## 2) Self-check: correct your own reasoning (maybe you understood the correct premises, but made a logical mistake during knob turning)

This is useful because humans are fallible, and two (competent) heads are less likely to miss a step in the reasoning dance than one.

Also, when someone comes up and asks something, you’ll probably be doing a context switch, and will be more likely to get confused along the way. The person asking usually has more local context than you on the specific problem they are trying to solve, even if you have more context on the surrounding matters, so they might be able to spot your error more quickly than you would yourself.

Focusing on self-check means double-checking any intuitive leaps or tricky reasoning you used. Parts of your model that do not have a clear step-by-step explanation have priority, and should be tested against another brain. Try to phrase the question in a way that makes your intuitive answer look less obvious.

For example: “I’m not sure if this could happen, and it looks like all these messages should arrive in order, but do you know how we can guarantee that?”
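To make that ordering example concrete, here is a minimal sketch (the names are mine, not from any real system) of one way a receiver can actually guarantee in-order processing: the sender numbers each message, and the receiver buffers anything that arrives early:

```python
import heapq

class OrderedReceiver:
    """Buffer out-of-order messages and release them strictly in
    sequence order, even if the transport reorders them in flight."""
    def __init__(self):
        self.next_seq = 0
        self.buffer = []  # min-heap of (seq, payload)

    def receive(self, seq, payload):
        """Accept one message; return the messages now deliverable in order."""
        heapq.heappush(self.buffer, (seq, payload))
        ready = []
        while self.buffer and self.buffer[0][0] == self.next_seq:
            ready.append(heapq.heappop(self.buffer)[1])
            self.next_seq += 1
        return ready

r = OrderedReceiver()
r.receive(1, "b")  # arrives early: buffered, nothing delivered yet
r.receive(0, "a")  # now both 0 and 1 can be delivered, in order
```

Asking "do we do something like this, or do we just hope the network behaves?" is exactly the kind of knob-turning question that surfaces a hidden assumption.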

## 3) Other-check: help the other person to correct inferential errors they might have made

The converse of self-checking. Sometimes fresh eyes with some global context can see reasoning errors that are hidden to people who are very focused on a task for too long.

To focus on other-check, ask about conclusions that follow from your model of the situation, but seem unintuitive to you, or required tricky reasoning. It’s possible that your friend also found them unintuitive, and that might have led them to jump in the opposite direction.

For example, I could ask: “For this system to work correctly, it seems that the clocks have to be closely synchronized, right? If the clocks are off by much, we could have a difference around midnight.”
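The midnight concern fits in a few lines (a toy model, assuming events are bucketed into days by each machine's local clock): two machines whose clocks disagree by a minute will file the same event under different days:

```python
from datetime import datetime, timedelta

def day_bucket(event, skew=timedelta(0)):
    """The date a machine files an event under, given its clock skew."""
    return (event + skew).date()

event = datetime(2017, 10, 28, 23, 59, 30)           # 30 s before midnight
accurate = day_bucket(event)                         # clock is correct
fast = day_bucket(event, skew=timedelta(minutes=1))  # clock runs 1 min fast
assert accurate != fast  # same event, two different days
```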

Perhaps you successfully understood what was said, and the model you built in your head fits the communicated data. But that doesn’t mean it is the same model the other person has in mind! In that case, your knob-turning will get you a result that’s inconsistent with what they expect.

## 4) Alternative hypothesis generation: If they cannot refute your conclusions, you have shown them a possible model they had not yet considered, in which case it will also point in the direction of more research to be made

This doesn’t happen much when someone is looking for help with something. Usually the context they are trying to explain is the prior existing system which they will build upon, and if they’ve done their homework (i.e. read the docs and/or code) they should already have a very good understanding of it. One exception is people who are very new to the job, who are learning while doing.

On the other hand, this is incredibly relevant when someone asks for help debugging. If they can’t find the root cause of a bug, it must be because they are missing something. Either they have derived a mistaken conclusion from the data, or they’ve made an inferential error from those conclusions. The first case is where proposing a new model helps (the second is solved by other-checking).

Maybe they read the logs, saw that a request was sent, and assumed it was received, but perhaps it wasn’t. In that case, you can tell them to check for a log on the receiver system, or the absence of such a log.

To boost this effect, look for data that you strongly expect to exist and to confirm your model, where the absence of such data might be explained by the other person’s relative lack of global context, skill or experience.

For example: “Ok, so if the database went down, we should’ve seen all requests failing in that time range; but if it was a network instability, we should have random requests failing and others succeeding. Which one was it?”
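That question distinguishes the two hypotheses purely by the pattern of failures, which can be sketched as a tiny classifier (a heuristic of my own for illustration, not any real monitoring tool):

```python
def diagnose(outcomes):
    """outcomes: success/failure booleans for requests in the suspect window.
    A dead database should fail *everything*; a flaky network fails *some*."""
    if not any(outcomes):
        return "all failed: consistent with the database being down"
    if all(outcomes):
        return "all succeeded: neither hypothesis fits this window"
    return "mixed: consistent with network instability"

diagnose([False, True, False])  # mixed failures point at the network
```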

## 5) Filling gaps in context: If they show you data that contradicts your model, well, you get more data and improve your understanding

This is very important when you have much less context than the other person. The larger the difference in context, the more likely that there’s some important piece of information that you don’t have, but that they take for granted.

The point here isn’t that there’s something you don’t know. There are lots and lots of things you don’t know, and neither does your colleague. And if there’s something they know that you don’t, they’ll probably fill you in when asking the question.

The point is that they will tell you something only if they realize you don’t know it yet. But people will expect short inferential distances, underestimate the difference in context, and forget to tell you stuff because it’s just obvious to them that you know.

Focus on filling gaps means you ask about the parts of your model which you are more uncertain about, to find out if they can help you build a clearer image. You can also extrapolate and make a wild guess, which you don’t really expect to be right.

For example: “How does the network work in this datacenter? Do we have a single switch so that, if it fails, all connections go down? Or are those network interfaces all virtualized anyway?”

## 6) Finding new ideas: If everybody understands one another, and the models are correct, knob-turning will lead to new conclusions (if they hadn’t turned those specific knobs on the problem yet)

This is the whole point of having the conversation: to help someone figure out something they haven’t already. But even if the specific new conclusion you arrive at when knob-turning isn’t directly relevant to the current question, it may end up shining light on some part of the other person’s model that they couldn’t see yet.

This effect is general and will happen gradually as both your and the other person's models improve and converge. The goal is to get all obstacles out of the way so you can just move forward and find new ideas and solutions.

The more global context and skill your colleague has, the lower the chance that they missed some crucial piece of data and have a mistaken model (or, if they do, you probably won't be able to figure that out without putting in serious effort). So when talking to more skilled or experienced people, you can focus more on replicating the model from their mind to yours (communication check and self-check).

Conversely, when talking to less skilled people, you should focus more on errors they might have made, or models they might not have considered, or data they may need to collect (other-check and alternative hypothesis generation).

Filling gaps depends more on differences of communication style and local context, so I don't have a person-based heuristic.

Comment author: 04 March 2017 11:16:30PM 0 points [-]

Potential problems with Idea number 1 (thanks to a chat w/ Romeo Stevens):

• A lot of ideas being explored are now several inferential distances from the baseline rationality literature, meaning that short summaries might not be great. (There'd be lots of dependencies / things.)

• The incentive structure as it stands doesn't seem to necessarily encourage this sort of compilation. (Coordination problem / poor returns on effort spent on summaries, esp. if it's all being summarized by one person?)

• In addition to there being lots of rationality blogs, the emergence of rationality hubs in meatspace like Berkeley mean that there's progress that's likely happening offline, and writing things up in a way that bridges inferential gaps is costly.

Comment author: 29 March 2017 11:06:24AM *  0 points [-]

I follow several programming newsletters, and I don't have context to fully understand and appreciate most links they share (although I usually have a general idea about what they are talking about). It's still very valuable to me to find out about new stuff in the field.

I'd patreon a few dollars for something like this.

Comment author: 08 November 2015 11:18:44AM 0 points [-]

Going!

Comment author: 21 September 2015 12:42:36AM 1 point [-]

What sounds better: "Slightly Less Wrong Every Day" or "Striving To Be Less Wrong"?

Comment author: 21 September 2015 07:22:31PM 1 point [-]

Second one!

Comment author: [deleted] 24 August 2015 12:47:11AM 1 point [-]

Really fantastic idea. I would suggest adding a way to tag predictions so that you can see how accurate you are within a particular domain. I would also suggest adding a way to see your accuracy over different timeframes (in order to view improvement). With those two features, this would definitely be an app I'd be willing to buy.

In response to comment by [deleted] on Predict - "Log your predictions" app
Comment author: 31 August 2015 04:25:39PM 0 points [-]

Thanks! I'll look into adding tags and timeframes. I'm not sure how to do that without the layout getting too crowded.

Comment author: 23 August 2015 03:22:05PM 1 point [-]

Sounds nice. Making predictions about personal events makes more sense to me than predicting e.g. elections or sport events (because a) I don't know anything about it, and b) I don't care about it). But I don't like the idea of making them (all) public, like on PredictionBook. Though a PredictionBook integration sounds like an obvious fancy feature.

And I liked what I saw the one second I could use the app ;-)

After installing, it crashed when pressing "save" on the first prediction. Now it crashes right on startup. I get to see the app for a moment, but I can't do anything. After deleting the data (from the Android settings) I can make a new prediction, but again, it crashes after pressing "save".

I installed from the apk-link you provided.

I've got a Moto G (2. Generation) with Android 5.0.2.

Hope that helps. And if anyone can tell me how to diagnose the problem in more detail, I'd be interested in that, too.

Comment author: 26 August 2015 05:04:39PM 1 point [-]

Check the link below, v0.2. Should be working now!

https://www.dropbox.com/s/59redws46ncdiax/predict_v0.2.apk?dl=0

Comment author: 25 August 2015 12:40:06AM *  2 points [-]

You mean, instead of programming an AI in a real-life computer and showing it a "Game of Life" table to optimize, you could build a Turing machine inside a Game of Life table, program the AI inside this machine, and let it optimize the table it is in? Makes sense.


Comment author: 24 August 2015 12:27:00PM 0 points [-]

This is weird. I'll test to see if I can reproduce and report back (hopefully with a fix).

Comment author: 18 August 2015 12:22:34PM 1 point [-]

I like it! Thoughts from 30 seconds of playing around:

• There's some flickering in the text of the tabs while swiping between them.
• What is the difference between a "Yes" and a "No" prediction?
• Long presses are not particularly discoverable; perhaps there should be some buttons when you tap a prediction to expand it in the list view.

Design-wise, it's great apart from that. Both of your proposed features would be worthwhile too.

Comment author: 18 August 2015 02:41:21PM 0 points [-]

Thanks!

• I'm not getting the flickering here... are you on a low-end device? Which version of Android are you on?
• No difference at all. I just thought it would make sense to phrase the predictions in the form of questions and answers - so you could e.g. pick a question from a pre-made list and just choose your answer.
• Good to know, I thought "long press to edit" was a common enough pattern that everybody would discover it.
