[MIRIx Cambridge MA] Limiting resource allocation with bounded utility functions and conceptual uncertainty - Less Wrong Discussion
For a sufficiently powerful superintelligence with infinite resources, all things are equally chocolate. Very zen. (As long as you model diminishing returns on investment as an exponential sigmoid.)
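A minimal sketch of that saturation effect (the logistic form, steepness, and midpoints here are my own illustrative choices, not anything from the post):

```python
import math

def sigmoid_utility(resources, midpoint, steepness=1.0):
    """Logistic returns-on-investment curve: saturates toward 1 as resources grow."""
    return 1.0 / (1.0 + math.exp(-steepness * (resources - midpoint)))

# An "easy" and a "hard" goal, distinguished only by where the curve turns over.
for r in (1, 10, 100):
    easy = sigmoid_utility(r, midpoint=2)
    hard = sigmoid_utility(r, midpoint=20)
    print(f"resources={r:>3}  easy={easy:.4f}  hard={hard:.4f}")
```

With enough resources both utilities are indistinguishable from 1, so the marginal allocation between them stops mattering: equally chocolate.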
Your funky results may also have something to do with the use of the power law in formalizing conceptual uncertainty, since that kind of distribution tends to favor a strategy where you pay attention to exceptional cases. If nothing else, you're giving C_1 a fairly low prior chance of being correct, and surprisingly serious consideration to the idea that 'real chocolate' is nothing like we think it is, which is an epistemologically confusing position to take. Is there perhaps a theoretical reason why you need to keep things scale-invariant?
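To make that tail-heaviness concrete, here's a toy comparison of a power-law prior against a light-tailed alternative (the exponent 1.5, the e^(-i) contrast, and the 100-hypothesis cutoff are all arbitrary assumptions on my part):

```python
import math

def normalized(weights):
    """Rescale a list of weights so they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

N = 100  # arbitrary cutoff on the hypothesis space C_1 .. C_N

# Power-law prior P(C_i) proportional to i^(-alpha), with alpha = 1.5 (assumed).
power_law = normalized([i ** -1.5 for i in range(1, N + 1)])

# A light-tailed alternative, P(C_i) proportional to e^(-i), for contrast.
exponential = normalized([math.exp(-i) for i in range(1, N + 1)])

# Mass assigned to the "exotic" hypotheses C_11 .. C_100:
print(f"power-law tail mass:   {sum(power_law[10:]):.4f}")
print(f"exponential tail mass: {sum(exponential[10:]):.6f}")
```

Under the power law, C_1 gets well under half the total mass and the exotic tail keeps a double-digit share; the exponential prior makes the same tail negligible. That's the sense in which the scale-invariant choice is doing a lot of work.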