John_Maxwell_IV comments on Irrationality Game II - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (380)
Irrationality Game
For reasons related to Gödel's incompleteness theorems and mathematically proven lower bounds on the difficulty of certain algorithmic problems, I believe there is an upper limit on how intelligent an agent can be. (90%)
I believe that human hardware can, in principle, be as intelligent as it is possible to be. (60%) To be clear, this doesn't actually occur in the real world we currently live in. I consider the putatively irrational assertion roughly isomorphic to asserting that AGI won't go FOOM.
If you voted already, you might not want to vote again.
If particles snap to a grid once you get down far enough, then there are a finite, though very large, number of ways you could configure atoms and stuff them into a limited amount of space. Which trivially implies that the maximum amount of intelligence you could fit into a finite amount of space is bounded.
And of course you could also update perfectly on every piece of evidence, simulate every possibility, etc. in this hypothetical universe. This is the theoretical maximum bound on intelligence.
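The counting argument above can be sketched numerically. The grid size and number of states per site below are arbitrary illustrative assumptions, not physical values; the point is only that a discrete bounded region admits finitely many configurations:

```python
# Toy illustration: if space is discrete, a bounded region of N sites,
# each with k distinguishable states, has exactly k**N configurations.
# Any mind realizable in that region must be one of those configurations,
# so the set of possible minds there is finite.

sites = 100          # assumed: number of grid cells in the bounded region
states_per_site = 4  # assumed: distinguishable states per cell

configurations = states_per_site ** sites  # k**N total arrangements

# Finite, though astronomically large (here, 4**100 ≈ 1.6e60).
print(configurations)
```

Since intelligence (however measured) is a function of configuration, a finite configuration space trivially bounds it from above.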
If our universe can be well approximated by a snap-to-grid universe, or really can be well approximated by any Turing machine at all, then your statements seem trivially true.
It's called the Bekenstein bound and it doesn't require discreteness.
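For reference (standard physics, not stated in the thread): the Bekenstein bound caps the entropy, and hence the information content, of any region of radius R containing total energy E, with no assumption of spatial discreteness:

```latex
% Bekenstein bound on the entropy S of a sphere of radius R with energy E
S \le \frac{2 \pi k_B R E}{\hbar c}
% Equivalently, the information content in bits:
I \le \frac{2 \pi R E}{\hbar c \ln 2}
```

Since a bounded region with bounded energy can hold only finitely many bits, the finite-configuration argument above goes through without particles needing to snap to a grid.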