Surprisingly, I haven’t seen anything written about the ultimate limits of intelligence. This is a short post where I try to think through that question.
Superintelligent agents are often portrayed as extremely powerful oracles capable of predicting the long-term outcomes of actions and thus steering the future toward their goals. One problem with this picture is that the future also contains the agent itself, its future actions, and their consequences. Since no system can contain a perfect model of itself, there should be some maximum achievable precision and quality of prediction about the future, unless that future does not depend on the agent’s own reactions to events, or on the actions of any other agents of comparable capacity.
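As a minimal sketch of why self-prediction hits a hard ceiling (my own toy illustration, not from the post, assuming a hypothetical "contrarian" agent that can condition on the prediction made about it): any deterministic predictor whose output is visible to the system it is predicting can be made wrong on every round by simple diagonalization.

```python
# Toy illustration (an assumption of this sketch, not the author's model):
# a deterministic predictor tries to forecast the next binary action of an
# agent that reacts to the prediction itself. Because the predicted future
# "contains" the prediction, the agent can always invert it, so perfect
# self-inclusive prediction is impossible.

def predictor(history):
    """Hypothetical predictor: guess the agent's next action (0 or 1).
    Here it repeats the most recent action; any deterministic rule
    suffers the same fate against a contrarian agent."""
    return history[-1] if history else 0

def contrarian_agent(prediction):
    """An agent whose action depends on the prediction made about it."""
    return 1 - prediction

history = []
errors = 0
for _ in range(10):
    guess = predictor(history)
    action = contrarian_agent(guess)  # the future depends on the prediction
    errors += (guess != action)
    history.append(action)

print(f"prediction errors: {errors}/10")  # always 10/10
```

This is only the degenerate extreme; the point is that once the agent's own reactions (or those of comparably capable agents) feed back into the future being predicted, prediction accuracy has a ceiling strictly below perfection.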
This should...