amcknight comments on Rationality and Winning - Less Wrong

19 Post author: lukeprog 04 May 2012 06:31PM




Comment author: private_messaging 07 May 2012 01:59:30PM 0 points

It seems to me that the LessWrong rationality does not concern itself with the computational limitations of the agents: it takes as its norm an idealized model that ignores those limitations, and it lacks extensive discussion of the comparative computational complexity of different methods, or of an agent's security against deliberate (or semi-accidental) subversion by other agents. (See my post about naive agents.)

Thus the default hypothesis should be that the teachings of LessWrong for the most part do not increase the efficacy (win-ness) of computationally bounded agents, and likely decrease it. Most cures do not work, even those that intuitively should; furthermore, there is a strong placebo effect in reports of a cure's efficacy.

The burden of proof is not on those who claim it does not work. The expected utility of the LW teachings should start at zero, or at a small negative value (for the time spent, which could instead go toward e.g. training for a profession or studying math in a more conventional way).

As an intuition pump for computationally limited agents, consider a weather simulator that has to predict the weather on specific hardware, having to 'outrun' the real weather. If you replace each number in the simulator with a probability distribution over the sensor data (with Bayesian updates if you wish), you obtain a much, much slower simulator, which must fall back to a lower-resolution grid and will perform much worse than the original on the same hardware. Improving weather prediction on fixed hardware is a very difficult task with no neat solutions, and it involves a lot of timing of the different approaches.
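The cost blow-up described above can be made concrete with a toy sketch: a point forecast propagates one grid state, while tracking a distribution (here as a Monte Carlo ensemble, one simple way to represent it) multiplies the work by the number of samples. All names and the diffusion-style update rule are illustrative assumptions, not anything from the comment itself.

```python
import random

def step(grid, alpha=0.1):
    """One explicit diffusion step on a periodic 1-D grid (a toy 'weather' update)."""
    n = len(grid)
    return [grid[i] + alpha * (grid[(i - 1) % n] - 2 * grid[i] + grid[(i + 1) % n])
            for i in range(n)]

def point_forecast(grid, steps):
    """Propagate a single best-guess state: cost is about steps * n updates."""
    for _ in range(steps):
        grid = step(grid)
    return grid

def ensemble_forecast(samples, steps):
    """Propagate a distribution as K samples: cost is about K * steps * n updates."""
    return [point_forecast(list(s), steps) for s in samples]

random.seed(0)
n, k, steps = 64, 50, 100
base = [random.uniform(0.0, 1.0) for _ in range(n)]
# Sensor uncertainty: each ensemble member is a noisy perturbation of the point estimate.
samples = [[x + random.gauss(0.0, 0.05) for x in base] for _ in range(k)]

point = point_forecast(base, steps)       # 1x the work
ensemble = ensemble_forecast(samples, steps)  # ~50x the work for the same horizon
```

On a fixed hardware budget, the ensemble version must shrink `n` or `steps` by roughly a factor of `k` to finish in the same time, which is the trade-off the comment points at.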

Comment author: amcknight 08 May 2012 07:42:46PM 0 points

It seems to me that the LessWrong rationality does not concern itself with the computational limitations of the agents

The LessWrong community is made up of many people who concern themselves with all kinds of things. I get annoyed when I hear people generalize too much about LessWrong members, or, even worse, talk about LessWrong as if it were a thing with beliefs and concerns. Sorry if I'm being too nit-picky.