Epistemic status: very unsure; a possibly interesting philosophical concept from my own thinking that I wanted to share.
Confidence: seems possible but hard to evaluate
This might not be a very original thought, but I have never encountered it elsewhere.
Traditional views treat nothingness as the simplest possible state: a complete absence of everything. Yet that is itself a very specific state, one that lacks everything from basic mathematical concepts up to complex structures such as physics. We might consider an alternative perspective: perhaps nothingness actually contains a superposition of all logically possible states, models, and systems, with each weighted by the inverse of its complexity. In this view, simpler models have higher probability in the superposition, but...
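To make the weighting slightly more concrete (a sketch only; "inverse of complexity" could be formalized in several ways, and this Solomonoff-style form is just one assumption of mine): if $K(M)$ is the length in bits of the shortest description of a model $M$, one could assign

$$P(M) \propto 2^{-K(M)}$$

which decays exponentially with description length rather than as a literal $1/K(M)$, but preserves the key property that simpler models dominate the superposition.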
In articles I read, I often see the case made that optimization processes tend to sacrifice as much value as possible along dimensions the agent/optimizer does not care about, in exchange for a minuscule increase along the dimensions that do affect its perceived total value. For example, an AI might create a dystopia that is very good on some measures but really bad on others, just to refine the ones that matter to it.
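As a toy sketch of that dynamic (all dimension names and numbers here are made up for illustration, nothing from any specific article): a brute-force optimizer allocates a fixed effort budget across several dimensions but is scored only on the ones it cares about, so the optimum starves every unvalued dimension.

```python
from itertools import product

BUDGET = 10  # total effort units to distribute
DIMENSIONS = [
    "happiness_metric",
    "reported_safety",
    "actual_wellbeing",
    "diversity_of_values",
]
CARES_ABOUT = {"happiness_metric", "reported_safety"}  # the proxy the optimizer sees

def proxy_score(allocation):
    """Score only the dimensions the optimizer cares about."""
    return sum(v for dim, v in allocation.items() if dim in CARES_ABOUT)

def best_allocation():
    """Brute-force every integer split of the budget across all dimensions."""
    best, best_score = None, float("-inf")
    for split in product(range(BUDGET + 1), repeat=len(DIMENSIONS)):
        if sum(split) != BUDGET:
            continue
        allocation = dict(zip(DIMENSIONS, split))
        score = proxy_score(allocation)
        if score > best_score:
            best, best_score = allocation, score
    return best, best_score

if __name__ == "__main__":
    allocation, score = best_allocation()
    # 'actual_wellbeing' and 'diversity_of_values' end at 0: every unit spent
    # there is a unit invisible to the proxy.
    print(allocation, "->", score)
```

The unvalued dimensions end at zero not out of malice but simply because every unit of budget spent there is a unit not spent where the proxy can see it.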
What I don't see analyzed as much is that agents need to be self-referential in their thought process and, on a meta level, must also take the thought process itself, along with its limits and consequences, as part of their...