Heh, I don't "have" anything yet; I'm just at the formalism stage. But the idea is that there are units (the control systems) operating within an environment, which draws its state from a lawful distribution (as nature does); that state affects what the units sense, as well as their structure and integrity. Depending on what the units do with the sensory data, they can be effective at controlling certain aspects of themselves, or instead go unstable. The plan is also to allow modification of the structure of the control systems and their replication (to see evolution at work).
As for modeling the control systems, my focus is first on being able to express what's going on at the information-theoretic level, where it really gets interesting: there's a comparator, which must generate sufficient mutual information with the parameter it's trying to control, or else it's "comparing" to a meaningless value. There are the disturbances, which introduce entropy and destroy mutual information with the environment. There's the controller, which must use up some negentropy source to maintain the system's order and keep it away from equilibrium (as life and other dissipative systems must). And there's the system's implicit model of its environment (including the other control systems), whose accuracy is represented by the KL divergence between the model's distribution and the environment's actual distribution.
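As a toy illustration of the quantities involved (not the model itself), here's a minimal Python sketch: a comparator that reads the controlled parameter through a noisy channel, with mutual information and KL divergence computed directly from discrete distributions. The channel and the noise levels are made up for illustration.

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint distribution
    given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), pr in joint.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    return sum(pr * math.log2(pr / (px[x] * py[y]))
               for (x, y), pr in joint.items() if pr > 0)

def comparator_joint(noise):
    # Hypothetical comparator: the parameter is 0 or 1 with equal probability,
    # and the comparator's reading flips with probability `noise`.
    return {(0, 0): 0.5 * (1 - noise), (0, 1): 0.5 * noise,
            (1, 0): 0.5 * noise, (1, 1): 0.5 * (1 - noise)}

# With mild noise the reading retains most of the parameter's 1 bit of entropy;
# at noise 0.5 the mutual information drops to zero, and the comparator is
# "comparing" to a meaningless value.
print(mutual_information(comparator_joint(0.1)))  # ≈ 0.531 bits
print(mutual_information(comparator_joint(0.5)))  # 0.0 bits
```

The same `kl_divergence` function would score the implicit environment model: zero when the model's distribution matches the environment's, growing as the model diverges.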
I don't expect I'll make something completely new, but at least for me, it would integrate my understanding of thermodynamics, life, information theory, and intelligence, and perhaps shed light on each.
I'm thinking about how to model an ecosystem of recursively self-improving computer programs. The model I have in mind assumes finite CPU cycles/second and finite memory as resources, and that these resources are already allocated at time zero. It models the rate of production of new information by a program given its current resources of information, CPU cycles, and memory; the conversion of information into power to take resources from other programs; and a decision rule by which a program chooses which other program to take resources from. The objective is to study the system dynamics, in particular looking for attractors and bifurcations/catastrophes, and to see what range of initial conditions does not lead to a singleton.
(A more elaborate model would also represent the fraction of ownership one program had of another program, using that fraction as a weight to blend the decision rules of the owning programs with the decision rule of the owned program. It may also be desirable to model trade of information. I think that modeling Moore's law with respect to CPU speed and memory size would make little difference, if we assume the technologies developed would be equally available to all agents. I'm interested in the shapes of the attractors, not the rate of convergence.)
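To make the intended dynamics concrete, here is a deliberately crude Python sketch of the basic (non-elaborate) model. Every functional form in it is a placeholder of my own invention: information production limited by CPU and memory, a stipulated power-law conversion of information into power (which is exactly the open problem stated next), and "attack the weakest" as the decision rule.

```python
import random

ALPHA = 1.0  # placeholder exponent: power = information ** ALPHA (the unknown P)

class Program:
    def __init__(self, info, cpu, mem):
        self.info, self.cpu, self.mem = info, cpu, mem

    def power(self):
        return self.info ** ALPHA  # stipulated, not derived

    def grow(self, dt):
        # Placeholder: info production is paced by CPU and throttled
        # once accumulated information outgrows available memory.
        self.info += dt * self.cpu * min(1.0, self.mem / self.info)

def step(programs, dt, transfer=0.05):
    for p in programs:
        p.grow(dt)
    for p in programs:
        # Placeholder decision rule: take CPU from the weakest other program.
        victims = [q for q in programs if q is not p]
        v = min(victims, key=Program.power)
        if p.power() > v.power():
            taken = transfer * v.cpu
            v.cpu -= taken
            p.cpu += taken

programs = [Program(info=random.uniform(1, 2), cpu=1.0, mem=10.0) for _ in range(5)]
for _ in range(200):
    step(programs, dt=0.1)
print(sorted(round(p.cpu, 2) for p in programs))
```

Total CPU is conserved by construction (resources are only transferred, never created); the question of interest is whether, and for which initial conditions, it concentrates into a single program.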
Problem: I don't know how to model power as a function of information.
I have a rough model of how information grows over time; so I can estimate the relative amounts of information in a single real historical society at two points in time. If I can say that society X had tech level T at time A, and society Y had tech level T at time B, I can use this model to estimate what tech level society Y had at time A.
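Under a simple exponential-growth assumption (with the doubling period as a free parameter; 70 years is just the figure used in the Agincourt example below), the extrapolation is one line:

```python
def tech_level_at(tech_at_B, time_B, time_A, doubling_years=70):
    """Extrapolate a society's tech level from time B to time A, assuming
    tech level doubles every `doubling_years` (a placeholder assumption)."""
    return tech_at_B * 2 ** ((time_A - time_B) / doubling_years)

# If society Y had tech level T in 1415, then in 1346 it had roughly T/2:
print(tech_level_at(1.0, 1415, 1346))  # ≈ 0.505
```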
Therefore, I can gather historical data about military conflicts between societies at different tech levels, estimate the information ratio between those societies, and relate it to the manpower ratios between the armies involved and the outcome of the conflict, giving a system of inequalities.
You can help me in 3 ways:
If you choose the last option, choose a historical conflict between sides of uneven tech level, and post here as many as you can find of the following details:
For example:
Using the two dates 1415 and 1346 leads to some tech-level (or information) ratio R. For example, under a simple model assuming that tech level doubled every 70 years in this era, we would give the English a tech-level ratio over the French of 2, and then say that the tech-level ratio enjoyed by the English produced a power multiplier greater than the manpower ratio enjoyed by the French: P(2) > 30000/5900. This ignores the many advances shared by the English and French between 1346 and 1415; but most of them were not relevant to the battle. It also ignores the claim that the main factor was that the French had heavy armour, which was a disadvantage rather than an advantage in the deep mud on that rainy day. Oh well. (Let's hope for enough data that the law of large numbers kicks in.)
After gathering a few dozen datapoints, it may be possible to discern a shape for the function P. (Making P a multiplier that depends only on the tech-level ratio forces P to be multiplicative, i.e. P(ab) = P(a)*P(b), which makes P a power law: e.g. P(8) = P(8/4)*P(4/2)*P(2/1) = P(2)^3; the data can reject this assumption.) There may be a way to factor the battle duration and the casualty outcome into the equation as well; or at least to see if they correlate with the distance of the datapoint's manpower ratio from the estimated value of P(information ratio) for that datapoint.
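As a sketch of how the resulting system of inequalities could be used: assume the power-law form P(R) = R^k, treat each battle as the inequality P(R) > M (the higher-tech side won despite manpower ratio M against it) or P(R) < M (it lost), and scan k for the value consistent with the most inequalities. The datapoints below are fabricated stand-ins, not real battles (except the first entry, which echoes the Agincourt figures above).

```python
# Each tuple: (information ratio R, manpower ratio M enjoyed by the
# lower-tech side, whether the higher-tech side won). Fabricated data.
data = [
    (2.0, 30000 / 5900, True),  # Agincourt-like: win implies P(2) > 30000/5900
    (1.5, 1.2, True),
    (3.0, 12.0, False),
    (2.5, 4.0, True),
]

def satisfied(k):
    """Number of battles whose inequality P(R) = R**k > M matches the outcome."""
    return sum((R ** k > M) == won for R, M, won in data)

# Crude scan over candidate exponents; with real data one would do better
# (e.g. maximize the margin, or fit a likelihood with a noise model).
best_k = max((k / 10 for k in range(1, 60)), key=satisfied)
print(best_k, satisfied(best_k))
```

Battle duration and casualty figures could then enter as features explaining the residual: the gap between M and the fitted P(R) for each datapoint.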
(I tried to construct another example from the Battle of Little Bighorn to show a case where the lower-level technology won, but found that the Indians had more rifles than the Army did, and that there is no agreement as to whether the Indians' repeating rifles or the Army's longer-ranged single-shot Springfield rifles were better.)