Note: please write any answers to this prompt in spoiler-tags.

 

Recently I set out to deliberately practice "reasoning about confusing intellectual problems."

Eliezer's Class Project has a fictional group of rationality students try to find the true theory of quantum gravity in one month. This always seemed like a cool goal and test for rationality training to aspire to. If you're not solving difficult open problems faster than science, your Art of Rationality probably isn't complete.

Of course, our Art of Rationality isn't complete yet. But, I think there is something promising in this area, as a way to ground out "rationality training" in something concrete. It seems like good practice to take a given physics question you don't understand the theory behind, and try to invent the theory yourself.

I don't think we're anywhere close to the "definitively impressive" version of rationality practice/training. But, I think a good next step is "Solve Thinking Physics™."

Thinking Physics is a textbook teaching physics "question-first" – it presents a physics-y situation, and asks you to figure out what happens next. The questions are multiple choice, but often fairly tricky nonetheless. 

I think a good rationalist-training goal is to aim for "be (correctly) 95% confident in the answer", as a rough proxy for "there were no major lingering confusions about the problem except for a generic 'maybe I missed something?'". And, failing that, have the subgoal of at least being calibrated about how confused you are. Every time you look at an answer, first log your probabilities for each of the multiple choices in Fatebook.io (or the prediction-tracking tool of your choice).

The problems are set up in a way that you can probably reason about them from some basic background knowledge, without much math background. They're ideal for people who don't have much physics background (since the whole point of the book is to teach you physics), although I know people with some physics education who still find it fairly hard.

I spent two weeks working on Thinking Physics problems, and hosting meetups/workshops where other people could join me. With each question, I focused on learning as much as I could about how-to-think. 

My original hypothesis was that I could get significantly better at it in 6-8 weeks. I only spent two, and the result so far is that I think I'm significantly better, although I didn't yet hit my goal of 95% accuracy. (In my final test set, I got 1 out of 5 questions wrong, when I was aiming for zero. I do think I have a pretty clear sense of why I got that one question wrong, and what I should have done differently.)

After workshopping some ideas for "the Thinking Physics rationality challenge", I now present you with three tiers of challenge. 

Challenge I: Solve three problems (and learn from them)

Step 1: Do an exercise.

Spend some time trying to solve three Thinking Physics questions. Aim for 95% accuracy, fully deconfusing yourself about each exercise. 

Write down your probabilities for each answer. 

It's important to actually write down the probability for each answer – otherwise, you may get a vague sense of "yeah that's probably right" that doesn't allow you to cleanly say "I got this one wrong." And doing it for all the answers, not just your favorite one, gives you additional bits about whether your models made any sense. (i.e. having clearly stated "I think answer A is most likely and B is second most likely" gives you a harder update if it turns out that A and B were both wrong)
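If you want to make the "additional bits" point concrete, here's a minimal scoring sketch (just an illustration; the score_prediction helper is hypothetical, and Fatebook or any other tracker will do this bookkeeping for you):

```python
import math

def score_prediction(probs: dict[str, float], correct: str) -> dict[str, float]:
    """Score one multiple-choice prediction two ways.

    probs maps each answer option to the probability you assigned it;
    correct is the option that turned out to be right.
    """
    # Log score: only the probability you put on the true answer matters.
    log_score = math.log(probs[correct])
    # Brier score: penalizes the whole distribution, so a confident
    # second-favorite that was also wrong costs you extra.
    brier = sum((p - (1.0 if option == correct else 0.0)) ** 2
                for option, p in probs.items())
    return {"log": log_score, "brier": brier}

# Hypothetical example: you favored (a), with (b) as a clear second choice, and (c) was right.
print(score_prediction({"a": 0.6, "b": 0.3, "c": 0.1}, correct="c"))
```

The Brier score is what distinguishes "I put my leftover probability on another wrong answer" from "I spread it onto the right one" – exactly the harder-update case above.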

Step 2: Learn from it

Then, think about how you could have solved the problem better. 

Your primary goal is to learn as much as possible from each question. 

Babble as many new insights as you can about how to think. This can include explicit "strategies" (like "see if you can simplify the problem"), physiological things (like "I got tired and needed to take a break"), or psychological things ("something about this feels weirdly aversive and ughy, what's up with that?").

When you're done, submit your answer on this post for "what you learned." (Focus on your takeaways, not the object-level solution). 

Overall structure

This is more fun with a partner, although I recommend spending a chunk of time thinking independently before sharing your answers and thought-processes with each other. You might find it helpful to get some friends together as a weekend activity.

I've found a fairly good default approach is to do:

  • 20 minutes thinking about it by yourself
  • 20 minutes thinking about it with a friend
  • 20 minutes discussing your meta-reflections on how to solve the problem with a friend.

How to pick exercises

The exercises vary in difficulty. My recommendation is to flip to a random page, weighted towards the beginning of the book. If it feels "difficult but not impossible", then give it a try. 

If you're pretty confident you just know the answer, still try to come up with a clear explanation for why (but err on the side of checking the answer quickly rather than spending a lot of time doublechecking). 

If you end up feeling stuck, try to give it at least 10 minutes before giving up and switching to a different problem. (In most cases, I found it valuable to give it a solid 20 minutes of independent thought + 20 minutes of conversation-with-partner even if I felt really stuck).

Some particular exercises that seemed reasonably good for people I beta-tested this with (which is not to say they were easy or hard, but that I feel like I/others learned from making a good-faith effort on them):

  • Steam Locomotive
  • Cold Bath
  • Rare Air
  • The Expansion of Nothing
  • Landscape

(Page numbers for the exercises vary between editions of the book, but you can look them up in the table of contents)

Submission guidelines

Put your answers in spoiler tags (begin each line with ">!"), although first list (unspoiler-tagged) that it was a Tier 1 challenge, the name of the exercises you did, and whether you give them each an overall thumbs up or thumbs-down as having been a good exercise.


Challenge II: Design a training regimen

After you've done 3 exercises and gotten a rough sense of their shape, develop a training regimen that helps you significantly improve at Thinking Physics exercises. 

If you started out not being able to reliably solve them at all, get to the point where you can at least semi-reliably solve them, given enough time. (Suggested target: solve 5 random questions in a row without getting any wrong, without help)

If you started out able to semi-reliably get the right answers given a lot of time, aim for speed – can you solve 10 problems in a row, relatively quickly, and only get between 0-1 question wrong?

Submission guidelines

You can submit your training regimen before actually completing it (but flag whether you have actually employed it yet; if you end up actually doing the regimen, I suggest replying later with any updates you made). 

I think it's a fine use of this exercise to submit your training regimen, then read other people's suggested regimens to get more ideas before going off to actually do it. 

Put your training description in spoiler-tags (although, again, list which challenge-tier you're doing in non-spoiler tags).

(Once you actually get started with the training, I recommend adjusting your approach as you learn more)


Challenge III: Fully "Solve" Thinking Physics

After you've significantly improved your skill-level, develop a thorough method for solving Thinking Physics exercises, in generality. Write the instructions that would have helped past-you get to the point where you could solve them reliably and/or quickly.

(It's okay for this to include metagaming / psychologizing the author. This is "Solve 'Thinking Physics'", not "Solve 'Physics'")

Write your answer either as a spoiler-tagged comment here, or as a top-level post if it ends up feeling like a full essay (and then a quick comment here linking to it). Include a note about what concrete outcomes you achieved. 


Bonus Challenge:

Find different sets of exercises that are as different as possible from Thinking Physics (i.e. requiring a pretty different set of skills, while still feeling relevant to becoming a "generalist researcher"), that would make for a good followup to this exercise.



3 Answers

Morpheus

Aug 08, 2023

  • Challenge I
    • Exercises:
      • steam engine :-1:
      • cold bath :+1:
      • expansion of nothing :+1:
  • tldr (long thing contains all the babble, only included because seemed low cost, don't recommend reading):

    • Did the exercises alone, as I didn't feel like setting up something with a partner. Felt I was excited enough that it should work.
    • Steered away from 95% for all exercises where I hadn't seen the puzzle before, as I was afraid that there's a trick.
    • I mostly noticed how extremely FUN I found this! Just today I was reflecting that studying for university courses has kind of killed some of my enthusiasm, and I didn't really remember the last time I was really excited while studying or in my free time. I tried playing games (like chess), but even that felt more like going through the motions and got me more addicted and not actually in a flow state of mind. Somehow this just clicked.
    • Training regimen:
      • Probably opting for speed. This seemed on the easier side.
      • I will maybe try 5 minutes per question and see how that goes for 10 of them.
  • Journal

    • steam locomotive
      • Intuition: It seems like bigger wheels would be better for higher speed, but might be more wasteful.
      • abstract
        • This is in the momentum column. It seems like one difference would be that the train with the lower load, but higher speed, needs to efficiently maintain that speed. The one for freight needs better brakes.
        • I also am not sure if I am supposed to use other evidence. On the other hand, why would I have received it otherwise?
        • Overall it seems the taller train is made in a way that is designed for slower speed (taller, less big chimney; the thing stopping things in the front).
        • There is something unintuitive about gears here as well! Having a big wheel means one rotation of the engine is covering more ground. This definitely seems like the thing that you want for the fast train with less load! On the other hand, I think there is a high chance I have the direction backwards there.
        • I really don't like the trick of having two options that they are both trains of the same kind! I feel like that makes it hard for me to become confident!
        • Anything left confusing? I don't know lots about trains or engines! Not 100% sure on the direction of the thing! Not sure how tricky the questions are; I think I want to not get overconfident (the GPT-4 thing got me!)! I also haven't spent 20 minutes!
        • Tracks as hints? What about the rod thing attached to the wheels?
        • How does number of wheels matter?
        • I feel like I got most of the evidence I know how to interpret.
        • I am not sure if I should treat this as an exercise in not being too impatient, or in moving at the appropriate speed. I think I want to go with taking the appropriate time?
        • When practicing I am also not sure how well this went. I felt this exercise really didn't give me that much to work on?
      • Result:
        • b) (90%)
        • a) (5%)
        • c,d) (5%) passenger speed, freight load.
      • Looking back
        • I was right!
        • Gives me more confidence that this book is trying to be straightforward and not trying to trick me.
        • I could have explicitly thought about if the locomotive thing would have been recommended if it seemed like this thing wouldn't have been super object level.
        • I feel like I could have easily taken this one on in 5 minutes if I had not expected really hard stuff.
        • I got the thing correct for exactly the right intuition. Nice. I want to check that in the future.
        • I want more books that are like that! I feel like I want to take Bryan Caplan's exam that he gave GPT-3 like that (since GPT-4 still failed and I feel like I would remember the questions).
        • I like how they didn't spoiler me by telling me whether this was an easy or hard one though.
        • What else do we learn from this? Not sure? I'd be interested how hard people thought this one was?
        • More reflection after more exercises?
        • I had a hard time figuring out how to feel about gaming. It feels like I could be more efficient. Something is chasing me. On the other hand, I have time!
        • I'd be interested to know how many other
        • I still feel a bit impulsive
        • It is fun to babble all of my notes in this document
        • At the same time I feel anxiety about later pruning to decide what to post on the forum
          • I feel I will either dump everything there, or I will just decide later! Babble!
        • I feel in general I have a bit of a hard time balancing meta and object level. Maybe an adhd thing? Maybe I just have the separating babble and prune as too much of a doctrine in my head that I don't actually follow?
        • I notice that I love doing these artificial exercises. Fun! I feel way more motivated.
        • I think in general with ADHD and everything I might be steering too much into not giving myself the artificial structure I need to really thrive by giving myself challenges that actually make me achieve great things!
        • I think I will switch to the next exercise before too much philosophizing.
      • Report:
        • I didn't feel like finding a partner and just wanted to start with 3 problems for now.
    • Cold Bath
      • before
        • Archimedean principle thing (knowing the density of the ice not actually required, ha!).
        • I know the answer. The density of ice is lower than the density of water (or at least water at 4 degrees is the densest, I believe).
        • Thing I might be a bit confused about:
          • if it is getting hotter than 4 degrees, at some point we could reach a temperature again where the thing spills over. My assumption (given this book has been reasonable so far.)
        • This seems really unfair to get confident
        • Final answer: Will stay exactly brim full. The ice displaces exactly its own mass of water, so the meltwater fills exactly the volume the ice displaced (confusing stuff about air and everything else is negligible).
      • prediction
        • a) 3%
        • b) 2%
        • c) 95%
      • after
        • right!
        • I don't give myself too much credit as I had already encountered this.
        • Apparently I might have been exposed to too much of this. Probably lots of stuff out of this book was used by content creators I know.
        • I did end up needing to precisify my answer, and I also didn't notice that even without knowing the density of ice, you can solve this with the Archimedean principle.
        • I think I also want to prod my internal physics simulation engine more (not only the verbal one).
    • Rare Air
      • Dang! I ended up exactly on the wrong page and spoilered myself! I had thought a tad before that I hadn't written down how annoying it is to avoid spoilering yourself on the other exercises!
      • Lesson: stay careful to not get a page too far! (maybe precompute page disparity!) 9 pages!
    • The expansion of nothing.
      • Analysis
        • This one feels really interesting!
        • Intuitive model is very confused. I can see arguments for all three.
          • Slightly more intuitive if it would get smaller though.
          • Seems coincidental to just stay the same (but eh... toy physics problems sometimes do this)
        • Intuition-pumps
          • What if we had the rod without the circle?
          • What happens without the circle?
          • What happens if we repeat?
          • What is the mechanism behind the expansion in the first place? I guess we have electrons in higher states. Everything is in higher energy and pushing away from each other?
          • Is there an analogy with other stuff that has force like this?
          • What if I imagine concrete points?
          • Making the thing really thin gives me the strong intuition that everything within the same radius is going to push each other apart, resulting in the hole being bigger! Not what my internal physics engine said!
            • I am also not sure if there is going to be some strain because the balance of material is not working out anymore?
          • Reminds me of the orange thing, that no matter if you have an orange or the earth, increasing your circumference is going to do the same to your radius. Means the shape would just stay the same. Everything just gets a bigger radius.
          • How to resolve remaining confusion?
            • I could try to dig deeper into how the stretching apart might work.
            • I could dig a bit deeper into ...
          • I have learned some stuff about mechanics and Lagrangians/Hamiltonians and going from normal to radial coordinates. Is that stuff any help here?
          • I feel if I would hit it from the top, it would still give me a different answer
            • Not sure if it's principled to give 95% when I am still into other models? How confident in the meta thing?
        • Noting that I have "SO" much fun doing this!
          • I remember just a few hours earlier feeling like I miss this feeling of just being really enthusiastic! Not sure if that was just me being not really reflective, or if that is really the case and I should attend to this. All the generic advice out there kind of tells me that I should perhaps not stop myself and just continue riding the wave for now?
          • For: follow your interests; there's this guy (Paul Graham) who just for fun did all the problem sets in one go. I find them effortful.
            • Overenthusiasm seems like the only real way people with ADHD operate
          • Against:
            • People who work on this not just for one evening but over extended periods might actually form long-term differences in their brains.
          • Noting that I feel so strongly like actually applying the "finding considerations for and against" thing explicitly, since I have not made predictions for some time.
      • takeaway
        • I also just notice that with this exercise I just felt entitled to start
        • With research on AI safety stuff I feel like I am waiting for this gatekeeper to tell me that I'll not be wasting peoples time by working on xyz. Not sure that is an actual problem in general. Specifically doesn't seem super productive compared to just getting excited and started on things though!
        • I was still using a slightly more sketchy analysis this time! I did realize that you could take the ring apart, but then I threw this thought away before thinking about what would actually happen if I take it apart, heat it, and put it back together.
          • In my mind I took things not really apart, but kept them in the same place when heating. I would not have expected to still get the same answer!
          • I did not come up with something close to the taking a photo and expanding the whole photo analogy
            • I do feel like I had something close to that!
        • I feel great because deliberation actually got me closer than my initial first guess. Kind of a suspicion though, that I was in kinda modest mode and took the more interesting intuition, but if pressed I would have gone with the expansion. (Could be hindsight bias.)
        • All in all I really liked this challenge! Very fun!
      • Prediction:
        • a) 80%
        • b) 10%
        • c) 10%

(The >! thing didn't work in markdown, so I just did a quick edit to change the comment to a LessWrong Docs comment where I was confident the spoiler tags would work; I will look up how to do markdown later.)

and, thanks and congrats! 

I couldn't quite tell where your journaling vs. end takeaways started and stopped. I'm happy to read through the whole thing, but you might want to edit for clarity for the benefit of others.

Morpheus 9mo
I was copying it from my notes (with the syntax for the spoiler tag already in) and I believe that the LessWrong Docs mode didn't work for that reason. Took some time because I got confused, because I looked in the "welcome&faq" post instead of the actual FAQ for the markdown way.

Muireall

Aug 09, 2023


OK, a shot at Challenge I, with Poof and Foop, Steam Locomotive, and Expansion of Nothing. Felt like all three are in the sweet spot. I personally dislike Expansion of Nothing.

Poof and Foop:

The problem statement is a bit leading: there's some kind of inversion symmetry relationship between the two cases, so it should go the opposite direction, right?

Initially, definitely. The puncture means that there's less pressure on the right side—instead of colliding with the can, some particles go inside.

But those particles end up colliding with the interior left side anyway. So it seems like it should even out, and at the end the can won't be moving.

So my guess is (c). Can I make myself more confident?

Why doesn't an inversion argument go through? Well, the compressed air can is drawn in vacuum, but the vacuum can doesn't empty the environment. So it's not simply time reversal. If the compressed air can were in air, then we might have some kind of symmetry between air particle and absence of air particle, but then the compressed air can would slow down due to drag and stop in the limit. So that still points to (c). That also works as a thermodynamic argument—the first can isn't equilibrating with anything, so none of the work goes to heat. 95% confidence feels good.

*checks* OK, looks like I was thinking about it right, and my explanation for why the naive inversion is wrong is equivalent to sketch II.

Reflection: The main interesting thing here is the fake symmetry argument. My favorite problems have tempting solutions that don't work for subtle reasons. I think it's important not to count problems solved until you can pinpoint why those solutions fail.

What did I use here? If you're dealing with pressure, you can probably get an answer with forces or with thermodynamics. A net force can be thought of as a single force or as lack of a balancing force. That's the symmetry idea.

I'm not very good at babbling. I'm basically looking over what I wrote and renarrating it. Words to words.

Steam Locomotive:

We might want to think about torque and the height of the axle. Or maybe it's about wheel radius. One cycle takes you further with bigger wheels.

I think these both point to (b). I'm a little confused because thinking about the wheel heights of sports cars and trucks would push me towards (a). But cars have gears. Directly driving small wheels is basically low gear. Not sure how I'd know if the answer were (c) or (d). Seems like you'd need background knowledge not in the question. I should think about actual forces to get to 95% confidence.

Let's say the engine puts out the same force in both cases. Then, in II, each wheel sees half as much force from the engine, but the ground exerts force on twice as many wheels, so that part's a wash. But because the wheels are smaller, the ground needs to exert more force per unit engine force to keep the wheel from slipping (same torque).

So for the same engine, II seems to give more accelerating force, while I gives higher top speed. I'd put 95% on (b).

*checks* OK, seems like I had the right thought. Could I have been as confident from the distance-per-cycle argument alone? Rather than look at forces, the author's answer argues that we know the locomotive that goes a shorter distance in the same number of engine cycles must be putting more energy into each mile it travels. I considered that, but I wasn't sure it was a valid step. Why couldn't you just be getting less work from the engine? Well, it's the same piston with the same motion. My force calculation already needs that assumption, it just makes the final connection with the acceleration.
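A compact way to write the same trade-off (a sketch, assuming the engine delivers roughly the same torque $\tau$ and the same maximum rotation rate $\omega$ with either wheel size):

$$F_{\text{rail}} = \frac{\tau}{r}, \qquad v_{\text{top}} \approx \omega \, r$$

Smaller driving wheels buy tractive force at the rail; larger ones buy more speed per engine cycle.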

Reflection: I feel like I don't know much about automotives. (Is a locomotive an automotive, by the way? I think so, it's just locomotives involve a track.) I can describe transmission and gears and engines and so on if I think about it, but I don't have much intuition. Like, I can't explain why it's one way and not another, or how different cars solve different problems.

I just feel like I should have been able to answer the question immediately. If I could drive stick, would that help? Probably not. I already ride a bike and didn't immediately see the analogy.

What did I use? Qualitative personal experience. I picked a misleading experience but reasonably didn't weight it above thinking through the problem. Identifying relevant considerations. Didn't stop at the first idea.

Expansion of Nothing:

Oh, this one's nasty. It has to expand, right? If you took an iron disk and drew a circle where the hole is, the circle would expand. If you cut that disk out and heat up the cutout, the cutout expands the same amount. So everything outside the circle can't be exerting any net force at the boundary, and the hole has to stay the same size as the cutout disk, i.e., it expands.

I don't see any problems with this argument, but can I explain why other arguments don't work? Why can't thermal expansion generate stress instead of allowing uniform expansion? I guess in a sense I just gave the reason, but why does the gap shrink if you cut a gap in a rod instead? Well, when you have only one piece, it's like applying a magnification transformation, which requires an origin. But the origin is arbitrary—you can just recenter. With two separate pieces, the two origins of magnification are no longer arbitrary.
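To put a rough number on the uniform-magnification picture (assuming iron's linear expansion coefficient is around $12 \times 10^{-6}\,\mathrm{K}^{-1}$, which I haven't double-checked): every length in the plate, the hole's diameter included, scales by the same factor,

$$D \;\to\; D\,(1 + \alpha\,\Delta T),$$

so a 10 cm hole heated by 500 K only widens by about $0.1\,\mathrm{m} \times 12\times10^{-6}\,\mathrm{K}^{-1} \times 500\,\mathrm{K} \approx 0.6\,\mathrm{mm}$ – a tiny but definite expansion.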

*checks* Yeah, the author's answer doesn't go there, unfortunately.

Reflection: This problem feels really annoying to me. Maybe I saw it a long time ago and got it wrong? Or maybe it's that you never have anything that's free to expand uniformly. It's braced against something, or it's sitting on something with a different coefficient of thermal expansion, and you do get stress and it does matter how the thing is patterned.

This feels like a problem where you're supposed to think about limiting cases. Like, if you have an atomic ring, obviously it expands. I don't know if you can justify jumping to the right answer from that, though. If the disk is thick and the cutout doesn't go all the way through, it expands. Ehh. You still need an argument that it expands the same.

I'm also generally curious how you found the exercise, whether it seemed worthwhile to you.

Muireall 9mo

I actually created a doc where people can add their own confusions and answers for Expansion of Nothing: https://docs.google.com/document/d/1cleM-QuO9R9_jRqDZMMKzobpWcf-k9KHBe91fUMWhuQ/edit 

I'll edit it into the OP.

I'd also add: a TODO item on my list is to make my own followup question for Expansion of Nothing that presents rings of different materials (i.e. something like a ring of water, a ring of jelly, a ring of concrete, something like that), and asks "in any of these cases, do you get a different answer than the Iron Ring?"

I don't actually know

…

Raemon

Aug 01, 2023


My answer to "Designing a training regimen." 

I recently spent ~2 weeks on this. I iterated on the approach over time, and didn't really try to do this "design training" exercise at the beginning. 

My starting approach was the "aim for 95% confidence" (now listed as a requirement in the OP), based on receiving that advice from a friend and finding the general idea compelling.  Initially I aimed at always giving myself at least a full day to answer a question. I eventually came back to this, but pretty quickly decided it wasn't actually the right approach.

I ended up with a separation between "training" and "testing." During training, I'm optimizing for learning quickly. This can include erring more on looking up answers, working with partners, etc. 

During testing, I focused on evaluating whether I-specifically-learned-things, so I didn't talk to friends about my thought process much to avoid spoilers. And I gave myself a very long time (sometimes spending more than a full day on each question).

I was experimenting with workshops throughout this time, and a lot of my effort ended up going towards managing other people and making sure they were having a good time. One of the things I'd go back-in-time and tell myself is "don't try to mix large workshops and doing-it-myself. Invite friends to partner with, but focus on a few people you know well."

One major update was I shouldn't just be trying to get the right answer, I should be trying to identify the explanation the author was primarily aiming at. (Sometimes the author's explanations are confusing or incomplete, but I think "generate lots of relevant explanations, at least one of which was the one the author generated" still seems useful for making sure I actually modeled the situation well)

I figured out partway through the process that I should be optimizing for "learning as much as I could from each question", and that suggested a followup strategy of "choose problems that feel like I will learn a lot from". (With the most obvious implication being 'not too easy or too hard', and a trickier implication being 'requires skills that I'd still benefit from focusing on improving')

One of the biggest problems was setting aside time to do it at all. This is a lot of cognitive work. I ultimately found I could only do this for a few hours a day and still was pretty exhausted in the evening. I think it's relatively achievable to set aside one weekend for this but the amount of time necessary to vet "you have meaningfully improved" is pretty expensive. 

I was lucky to be able to take a 2 week break where I was professionally focused on this. If rationality wasn't part of my Day Job, and I couldn't take a vacation for it, I think my approach would be to allocate one weekend-day each week towards this for a few weekends (aiming to look up the answer after an hour per question). And then, for testing... well, this feels fairly tricky. An obvious answer is just... keep allocating weekend time to it. This feels like it'd take a long time. Hrmm.

It'd be easier if "people's ability to solve Thinking Physics problems" was better studied, and it was, say, known that some given exercises generally take an average undergrad 2 hours to deconfuse themselves on. (Then, you set yourself a 2 hour timer and submit your best answer when you're done, rather than potentially spending days on it doublechecking yourself). 

I think, for the immediate future, "take as long as you want to thoroughly understand the scenario" is a better test of thinking-skill for people doing openended research, and the fact is it mostly makes sense to do this if you're actually already planning to invest years in openended research with poor feedback loops.

I tried doing these exercises in my rationality group this week with 5 other people. Since we did this as part of our regular meetup, doing 1h for a single question would have taken too long (we could have done 2 questions max). Instead, we did 4 exercises in ~90 min (steam locomotive, poof and foop, expansion of nothing, rare air). We started out with relatively strong physics background (everyone knowing mechanics), so I think that wasn't too hasty, except for the reflection part, perhaps. I gave people the first 5 minutes to think for themselves and to

…

13 comments
Max H 9mo

The Amazon link in this post is for Thinking Physics: Understandable Practical Reality. I also found Thinking Physics: Practical Lessons in Critical Thinking and Thinking Physics Is Gedanken Physics.

AFAICT, these are just different editions of the same book, but I couldn't determine what the best or latest edition is. To save people the same Googling that I did, Archive.org has a version available online here, and the Harvard Book Store sells a paperback copy in stock here for $34. (Amazon doesn't appear to actually have any edition for sale at a reasonable price.)

The Amazon link in the post is for the third (and latest) edition, only $28. Your other links are for the second edition, except the Harvard link's dead.

Find different sets of exercises that are as different as possible from Thinking Physics (i.e. requiring a pretty different set of skills, while still feeling relevant to becoming a "generalist researcher"), that would make for a good followup to this exercise.

I think my idea of investigating a recent (alleged) poker cheating scandal is a good exercise in this vein. It's certainly very different from Thinking Physics problems.

The main objections people had when I posted it were that it requires either already having or quickly absorbing a lot of background knowledge about the rules of poker and norms in the high stakes poker scene as a prerequisite, and that there is no way to know if you got the answer right. I continue to think these are not fatal flaws, and that if you're willing to invest some hours in learning the relevant background (which is itself a good rationality skill to practice, especially if you try to do it under time pressure), the payoff in the quality of the mystery is worth it.

There are a myriad of plausible competing hypotheses and piles of publicly available (but somewhat complex-to-think-about) evidence that make this a good test of your ability to make Bayesian updates about a real world situation. Also, the fact that there is no public consensus is actually a benefit in some ways - the exercise is un-spoilable, and you can research freely without fear of accidentally running into a consensus-accepted definitive conclusion.


Looking into other unsolved mysteries (e.g. murder mysteries, heists, or other famous cold cases) might provide a similar kind of challenge, and if you compile enough cases you could form a set of exercises in the "mystery solving" genre. But it can be hard to find suitable candidates with lots of publicly available evidence of different types, especially cases that still have multiple competing hypotheses and no clear / trivially obvious conclusion. Essentially, you want something that is actually unsolved (not just legally unsolved), but still interesting and not a total dead end due to lack of evidence. I haven't personally looked into it much, but the JonBenét Ramsey case (warning: gruesome murder / CSA case) comes to mind as one possibility that might suit.

I'm not sure how good this particular exercise is (hard to evaluate without having done it, and the comments in the other post seem to have some good points) but I do like the general idea.

Bonus Challenge

Inspired by this idea from Alex Turner's shortform, I tried to figure out which facts are truth or fiction after prompting GPT-4 to mess with a Wikipedia article on Developmental Psychology. (First I let GPT-4 munch a big chunk of the article, and then I chose the first chunk I saw that contained lots of concrete claims.)

Credences are 0% if the claim is false, and 100% if the text written by GPT-4 is true/reflects the original article. Outcomes are on the line afterwards. Written more as personal notes (very rough).

Vision is sharper in infants than in older children.

  • Vision is probably not sharper for infants, but the opposite! (10%)
  • false

Infant sight tends to remain stable with little improvement over time.

  • Infant sight should rapidly improve! (at least at some point it has to!) (10%)
  • false

Color perception is limited in the first year, with infants primarily seeing in shades of gray [79]. Infants only begin to develop adult-like vision at about twelve months.[72]

  • is no color perception plausible? (70%)
  • false. In fact they learn it at 4 months!

Hearing is still evolving at the time of birth.

  • Accidentally skipped this claim

Newborns show no distinct preference for human speech over other sounds, and they can't distinguish their mother's voice from others'.

  • Newborns should probably pay more attention to their mother's voice! (It seems that this makes more sense if the latter parts are true. Not sure though!) (40%)
  • false

The belief that these features are learned in the womb has been debunked.

  • the debunking seems pretty plausible! (70%) (on reflection I am not super sure that this is how it would be written on Wikipedia)
  • false

By 18 months, infants' hearing ability is still not on par with adults.

  • not hearing on par is plausible. On the other hand, the opposite seems more likely to be mentioned? (30%) (seems plausible, at that time some babies start talking right?)
  • false

Smell and taste are rudimentary, with infants often unable to distinguish between pleasant and unpleasant odors and tastes

  • The smell seems very implausible to me! Especially for some of the more toxic things, I would expect them to be very ingrained. It seems like valence for a lot of the strongest smells is preprogrammed! (10%) (I don't give 5%, because it could be for substances that are not really dangerous? In that case rudimentary would make sense as a description)
  • false

Newborns do not show a clear preference for the smell of human milk over that of formula.[72]: 150  Older infants, interestingly, do not show a preference for their mother's scent.[79]

  • Human milk over formula? Seems like that could go either way with underpowered studies? (55%)

  • true (Huh first positive result ... somehow I now want to see how well powered these actually were, or how you detect which smell a baby "likes" at all and whether that's a strong signal)

Touch and feel, while being one of the first senses to develop in the womb, are not as refined in infants as previously thought.[84] This contradicts the idea of primitive reflexes, which were believed to demonstrate advanced touch capabilities.

  • This section seems perhaps a bit weird? Why would primitive reflexes be rather advanced? Is this saying that a baby needs to figure out most motor control and most is not preprogrammed? Seems plausible; I give (40%) that none of the claims above have been altered.
  • false (In hindsight of course a baby can figure out a lot of motor control before leaving the womb)

Pain perception in infants is believed to be less intense than in older children, indicating that they may not feel pain as acutely.

  • Not sure how long something counts as an infant. It seems like a plausible claim if a lot of pain is sorta more of a social thing and babies haven't developed that so much yet? On the other hand, babies seem like they are crying a lot and constantly suffering. (30%)
  • false

There is also no substantial evidence that glucose can relieve pain in newborns.[87]

  • The glucose thing seems like a coin toss? Seems marginally more plausible to be mentioned if true, so (45%)
  • false. Wow, a lot of these are coming across with higher confidence than I would have expected. The sucrose thing is apparently a common thing and the randomized controlled trial doesn't seem to have too bad numbers (although I should at some point figure out how to get a useful estimate of the effect size out of statistics like that). It seems plausible that blinding might be a bit hard.
  • It also gives me more confidence that Wikipedia is not listing lots of common misconceptions it wants to crush.
  • Overall, this whole field seems interesting! I think I also underestimated this field because it has psychology in its name (yeah, I know that sounds dumb). I was not reflecting on my probabilities for long, and now feel like I could have done a lot better if I had (feedback and knowing how wrong my first impressions are is also valuable). Also reminds me of a section of HPMOR where Harry thinks about how it took a very long time until some human came up with the idea to investigate when children learn what. It also seems like a lot of the problems with testing that you would usually have in psychology studies, especially around surveys and self-report, go away because you can't do that with infants, so you get higher quality data. You also wouldn't get infants that are trying to figure out what your experimental design is and whether they want to prove you right, wrong, etc.
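As a quick sanity check on calibration, a rough scoring sketch (the probability/outcome pairs are transcribed from my notes above, so treat them as approximate; the Brier score is lower-is-better, and always answering 50% scores 0.25):

```python
# (probability I assigned to "the claim is true as written", whether it was actually true)
predictions = [
    (0.10, False), (0.10, False), (0.70, False), (0.40, False),
    (0.70, False), (0.30, False), (0.10, False), (0.55, True),
    (0.40, False), (0.30, False), (0.45, False),
]

# Brier score: mean squared distance between stated probability and outcome.
brier = sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f} (always saying 50% would score 0.250)")
```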

Cold Air isn't in my physical copy, the archive.org copy, or the pdf I found online; I think the problems vary by edition. 

[This comment is no longer endorsed by its author]

Note it’s called Rare Air (and there's another one called "Cold Bath", doublechecking you didn't conflate them?)

Eliezer's Class Project has a fictional group of rationality students try to find the true theory of quantum gravity in one month. This always seemed like a cool goal and test for rationality training to aspire to. If you're not solving difficult open problems faster than science, your Art of Rationality probably isn't complete.

It's good for intelligent people to be audaciously ambitious. But is Art of Rationality enough to figure out quantum gravity, or solve "difficult open problems" in the sciences? If not, could you comment on what else is needed?

I mean, depends how you're defining art of rationality. I think it'll usually require some kind of domain expertise and skills in the relevant open problems. I also think "rationality" would be important for figuring out what skills to gain, and figuring out how to learn them as quickly as possible, if you were starting from scratch. 

As for "is this possible?", well, I'm not sure. This post is part of sequence (and a possible longterm research project) aimed at figuring out the answer.

I only ever flipped through Thinking Physics for fun, but what I remember is that I tended to miss easier problems more often. If I spent time thinking about one, really making sure I got it right, I'd probably get it. Outside those, there were some that really were elementary, but I'd often find myself thinking I'd looked at the author's answer too soon—a self-serving "well, I would have gotten this, if I were really trying." I might say the problem was that I couldn't tell when I needed to really try.

This does remind me a bit of how I studied for the physics GRE (do people still take that?), particularly getting calibrated on multiple-choice confidence and on how long to spend on problems. Unfortunately, but perhaps not surprisingly, very little of that study transferred to my PhD experience.

I am interested in 

  • how much deliberate effort you put into calibrating yourself on "how much effort to put into multiple choice questions"
  • whether you put any deliberate effort into transferring that into the PhD experience
  • what did you actually do in your PhD experience?
  • what do you think would have better prepared you for PhD experience?

For context if anyone needs it, the Physics GRE is (was?) a multiple-choice exam where you get penalized for wrong answers but not for blanks. It works out so that if you eliminate one answer there's no harm in guessing, in expectation. There's also considerable time pressure—something like 90 seconds per question on average.
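Working that out explicitly (if I'm remembering the format right: five options, one point for a correct answer, minus a quarter point for a wrong one):

$$\mathbb{E}[\text{blind guess}] = \tfrac{1}{5}(1) - \tfrac{4}{5}\left(\tfrac{1}{4}\right) = 0, \qquad \mathbb{E}[\text{after eliminating one}] = \tfrac{1}{4}(1) - \tfrac{3}{4}\left(\tfrac{1}{4}\right) = \tfrac{1}{16} > 0.$$

So a blind guess is neutral in expectation, and eliminating even one option tips it positive.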

how much deliberate effort you put into calibrating yourself on "how much effort to put into multiple choice questions"

Enough to get through all questions with some time left over, even if that meant guessing on some I could fully solve. I'd mark the questions I'd guessed on with different symbols that let me go back at the end and prioritize solving them. For three or so practice tests, I systematically went over every problem that I missed, guessed, or spent a long time on and did the metacognitive thing including questions like "how long did I think this would take? when was I 50% confident? when should I have decided to move on? how could I have decided faster?" (Using purely retrospective judgment—I wasn't actually timing individual questions or anything more complicated.)

whether you put any deliberate effort into transferring that into the PhD experience

Not really. I think I had some notion that being able to solve small problems quickly could lead to a sort of qualitatively better fluency, but in the end there just wasn't enough in common between test content/conditions and research (or even coursework) to prioritize that. I definitely didn't learn the lesson that I was generally underconfident.

what did you actually do in your PhD experience?

Pretty normal experimentalist route, maybe heavier on math and programming than typical. Coursework for 1-2 years shading into helping with senior students' experiments, then designing and running my own.

what do you think would have better prepared you for PhD experience?

In the end I was reasonably well prepared in terms of technical knowledge, problem solving, [meta]cognitive skills, and so on (irrespective of the GRE). I think I mostly lacked perspective, particularly in terms of choosing problems and working with a supervisor. I'd guess, starting with most helpful, one or more of these:

  1. Industry experience with a good manager
  2. More research experience in other subjects
  3. Research in the same subject
  4. Other industry experience

As far as things I could have done instead with the time I used to study, I don't know. Make friends with grad students?

I think it's important to note that, if you randomly solve Thinking Physics (or even make a decent breakthrough), then all the alignment researchers get to have it too.