Meta-theory of rationality
Here I speculate about questions such as:
What makes a theory of rationality useful or useless?
When is a theory of rationality useful for building agents, describing agents, or becoming a better agent, and to what extent should the answers to these be connected?
How elegant should we expect algorithms for intelligence to be?
What concepts deserve to be promoted into the root/core design of an AGI, versus left for the AGI to discover on its own? Perhaps relatedly, does human cognition have such a root/core algorithm, and if so, what is it?