polypubs
polypubs has not written any posts yet.

Suppose there is a single A.I. with a 'devote x% of resources to smartening myself' directive. Suppose further that the A.I. is already operating with David Lewis's 'elite eligible' ways of carving up the world along its joints, i.e. it is climbing the right hill. Presumably, the smartening module faces a race-hazard-type problem in deciding whether it is smarter to devote resources to evaluating returns to smartness or to just release resources back to existing operations. I suppose it could internally breed its own heuristics for Karnaugh-map-type pattern recognition so as to avoid falling into an NP-hard problem. However, if NP-hard problems are like predators, there...
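To make the Karnaugh-map remark concrete, here is a toy sketch (my own illustration, not anything the original comment specifies): on a Karnaugh map, two minterms that differ in exactly one bit merge into a single implicant with that variable eliminated. This kind of local pattern recognition prunes the search space without brute-forcing every Boolean expression, which is the flavour of heuristic gestured at above.

```python
# Toy Karnaugh-map-style merging of adjacent minterms (hypothetical sketch).
# Two implicants differing in exactly one bit position merge, with the
# differing variable replaced by '-' (don't-care).

from itertools import combinations

def merge(a: str, b: str):
    """Merge two implicants (e.g. '101' and '100' -> '10-') if they
    differ in exactly one position; return None otherwise."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + '-' + a[i + 1:]

def simplify_once(minterms):
    """One round of pairwise merging, as on a Karnaugh map."""
    merged, used = set(), set()
    for a, b in combinations(minterms, 2):
        m = merge(a, b)
        if m is not None:
            merged.add(m)
            used.update({a, b})
    # keep any implicants that found no merge partner
    return merged | (set(minterms) - used)

# f(A,B,C) true on minterms 100, 101, 110, 111; repeated rounds of
# merging collapse these toward the single implicant '1--', i.e. f = A.
print(simplify_once(['100', '101', '110', '111']))
```

Iterating `simplify_once` to a fixed point is essentially the first phase of the Quine–McCluskey algorithm; the point here is only that cheap adjacency patterns do the work that naive enumeration would make exponential.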
You may be aware of the use of negative probabilities in machine learning and quantum mechanics and, of course, Economics. For the last of these, the existence of a Matrix Lord has such a large negative probability that it swamps his proffer (perhaps because it is altruistic?) and no money changes hands. In other words, there is nothing interesting here; it's just that some types of decision theory haven't incorporated negative probabilities yet. The reverse situation, Job's complaint against God, is more interesting. It shows why variables with negative probabilities tend to disappear out of discourse, to be replaced by the difference between two independent 'normal' variables; in this case Cosmic Justice is replaced by the I-Thou relationship of 'God' & 'Man'.
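The claim that a negative-probability variable can always be traded for the difference of two ordinary ones has a standard mathematical backing: any signed measure splits, via the Jordan decomposition, into two non-negative measures, mu = mu_plus - mu_minus. A minimal sketch (my own illustration, with a made-up quasiprobability assignment of the sort that appears in Wigner-function treatments of quantum mechanics):

```python
# Jordan decomposition sketch: a signed 'quasiprobability' assignment that
# sums to 1 but goes negative is rewritten as the difference of two
# non-negative measures. The dict 'mu' below is an invented example.

def jordan_decompose(signed):
    """Split a signed measure (dict of outcome -> weight) into its
    non-negative positive and negative parts."""
    plus = {k: max(v, 0.0) for k, v in signed.items()}
    minus = {k: max(-v, 0.0) for k, v in signed.items()}
    return plus, minus

mu = {'a': 0.7, 'b': 0.5, 'c': -0.2}   # sums to 1, but P(c) < 0
plus, minus = jordan_decompose(mu)

# mu is recovered pointwise as plus - minus, and both parts are ordinary
# (non-negative) measures that can re-enter a standard decision theory.
for k in mu:
    assert abs(mu[k] - (plus[k] - minus[k])) < 1e-12
print(plus, minus)
```

So the disappearance described above is not mysterious: the signed variable is always formally replaceable by two well-behaved ones, at the cost of doubling the bookkeeping.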