How long would it take somebody to go from basic algebra/stats to being able to understand a technical MIRI paper?
Suppose the person:
- is a decent programmer
- is experienced with effective learning and productivity methods
- can dedicate two hours of focused study every day
- has consumed a lot of non-technical resources, e.g. Superintelligence, 80,000 Hours interviews, the FHI podcast, Rationality: From AI to Zombies, GEB, etc.
Sounds like me at the beginning of this year; I'm now able to make my way through the Logical Induction paper. I'd be happy to help, by the way; feel free to message me.
Note: In Markdown, the spoiler syntax is as follows:
::: spoiler
This is some spoiler text
:::
Which should render like this:
This is some spoiler text
Note to GreaterWrong users: GW now has full support for spoiler blocks. They will render correctly (mouse over one, or select its text, to reveal), and there is a new button in the editor that will insert the correct spoiler syntax for you.
The current implementation of spoiler tags is still experimental, and we will probably change how they render, but the syntax should continue to work indefinitely.
Do you have any recommended reading for learning enough math to do these exercises? I'm sort of using these as a textbook-list-by-proxy (e.g. google "Intermediate value theorem", check which area of math it's from, oh hey it's Analysis, get an introductory textbook in Analysis, repeat), though I also have little knowledge of the field and don't want to wander down suboptimal paths.
Sometimes people ask me what math they should study in order to get into agent foundations. My first answer is that I have found the introductory class in every subfield to be helpful, but the later classes much less so. My second answer is to learn enough math to understand all fixed point theorems. These two answers are actually very similar: fixed point theorems span all across mathematics, and they are central to (my way of) thinking about agent foundations.
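To make the flavor concrete, here is a minimal illustration (not one of the exercises): the one-dimensional case of Brouwer's fixed point theorem follows directly from the Intermediate Value Theorem. If $f : [0,1] \to [0,1]$ is continuous, consider $g(x) = f(x) - x$. Since $g(0) = f(0) \geq 0$ and $g(1) = f(1) - 1 \leq 0$, the Intermediate Value Theorem gives a point $x^*$ with $g(x^*) = 0$, i.e. $f(x^*) = x^*$, a fixed point of $f$. The exercises work with more substantial relatives of statements like this.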
This post is the start of a sequence on fixed point theorems. It will be followed by several posts of exercises that use and prove such theorems. While these exercises aren't directly connected to AI safety, I think they're quite useful for preparing to think about agent foundations research. Afterwards, I will discuss the core ideas in the theorems and where they've shown up in alignment research.
The math involved is not much deeper than a first course in the various subjects (logic, set theory, topology, computability theory, etc.). If you don't know the terms, a bit of Googling, Wikipedia, and math.stackexchange should easily get you most of the way. Note that the posts can be tackled in any order.
Here are some ways you can use these exercises:
The first set of exercises is here.
Thanks to Sam Eisenstat for helping develop these exercises, Ben Pace for helping edit the sequence, and many AISFP participants for testing them and noticing errors.
Meta
Please use the (new) spoilers feature (the character '>' followed by '!' followed by a space) to hide all solutions, partial solutions, and other discussion of the math in your comments. Comments will be strictly moderated to keep spoilers covered!
I recommend putting all the object-level points in spoilers and leaving metadata outside the spoilers, like so:
>! And put your idea in here! (reminder: LaTeX is cmd-4 / ctrl-4)
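For example, a comment might pair visible metadata with a hidden solution like this (the problem number is purely illustrative):

Partial solution to #3:

>! My attempt uses the Intermediate Value Theorem...

Everything after the '>!' stays hidden until a reader mouses over it or selects it.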