Apparently, MIRI has given up on its current mainline approach to understanding agency and is trying to figure out what to do next. It seems like it might be worthwhile to collect some alternative approaches to the problem -- after all, intelligence and agency feature in pretty much all areas of human thought and action, so the space of possible ways to make progress should be pretty vast. By no means is it exhausted by the mathematical analysis of thought experiments! What are people's best ideas?
(By 'understanding agency' I mean research that attempts to establish a better understanding of how agency works, not alignment research in general. So IDA would not count, since it takes ML capabilities as a black box.)
ETA: I originally wrote 'agent foundations' in place of 'understanding agency' in the above, which was ambiguous between a broad sense of the term (any research aimed at obtaining a foundational understanding of agency) and a narrow sense (the set of research directions outlined in the agent foundations agenda document). See this comment by Rob re: MIRI's ongoing work on agent foundations (narrow sense).
(I work at MIRI.)
We're still pursuing work related to agent foundations, embedded agency, etc. We shifted a large amount of our focus onto the "new research directions" in early 2017 (post), and then wrote a longer explanation of what we were doing and why in 2018 (post). The 2020 strategy update says that MIRI is scaling back work on the "new research directions," not scaling back work on the set of projects linked to agent foundations.