Thomas Cover did a great many interesting things. His work on universal data compression and the universal portfolio could provide very efficient and useful optimization approaches for use in AI & AGI.
Cover’s universal optimization approaches grow out of the beginnings of information theory, especially John Kelly’s work at Bell Labs in the 1950s.
In his "universal" approaches, Cover developed the theoretical optimization framework for identifying, at successive time steps, the mean rank-weighting “portfolio” of agents/algorithms/performace from an infinite number of possible combinations of the inputs.
Think of this as a multi-dimensional regular simplex with rank weightings as a hyper-cap. One can then find the mean rank-weighted “portfolio” geometrically.
Cover proved that successively following that mean rank-weighted “portfolio” (shifting the portfolio allocation at each time step) converges asymptotically to the best single “portfolio” of agents at any future time step with a probability of 1.
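To make that concrete, here is a minimal sketch in Python of how the construction can be approximated. It discretizes the simplex into a deterministic grid of candidate fixed “portfolios” (no Monte Carlo sampling needed) and, at each step, plays their performance-weighted mean. Cover’s 1991 paper weights each candidate by its accumulated wealth and integrates over the whole simplex rather than a grid (the rank-weighted mean described above would swap in rank weights at the same spot); the function names and the steps parameter here are my own illustration, not anything from Cover.

    import itertools
    import numpy as np

    def simplex_grid(n, steps):
        # All weight vectors with entries in {0, 1/steps, ..., 1} that sum
        # to 1: a deterministic discretization of the regular simplex.
        return np.array([np.array(c) / steps
                         for c in itertools.product(range(steps + 1), repeat=n)
                         if sum(c) == steps])

    def universal_portfolio(relatives, steps=20):
        # relatives: (T, n) array; relatives[t, i] is the period-t growth
        # factor of agent/asset i (e.g. price[t + 1] / price[t]).
        T, n = relatives.shape
        grid = simplex_grid(n, steps)  # candidate fixed "portfolios"
        wealth = np.ones(len(grid))    # cumulative performance of each one
        played = []
        for t in range(T):
            # Play the performance-weighted mean of all candidates,
            # i.e. the mean "portfolio" over the simplex at this step.
            played.append(wealth @ grid / wealth.sum())
            # Update every candidate's cumulative performance.
            wealth *= grid @ relatives[t]
        return np.array(played)

    # Toy usage: 250 periods, 3 agents, growth factors near 1.
    rng = np.random.default_rng(0)
    allocations = universal_portfolio(rng.uniform(0.9, 1.1, size=(250, 3)))

The grid keeps the geometry explicit (steps=20 over 3 agents is only 231 candidates); in higher dimensions one would integrate or discretize more cleverly, but the weighted-mean update is the whole trick.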
Optimization without Monte Carlo. No distributional assumptions on the inputs. Incredibly versatile.
I don’t know of anyone who has incorporated Cover’s ideas into AI & AGI. Seems like a potentially fruitful path.
I’ve also wondered if human brains might optimize their responses to the world by some Cover-like method. Brains as prediction machines. Cover’s approach would seem to correspond closely with the wetware.