jacob_cannell comments on Dreams of AIXI - Less Wrong

-1 Post author: jacob_cannell 30 August 2010 10:15PM




Comment author: jacob_cannell 01 September 2010 06:52:31PM *  1 point [-]

Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do.

I feel like you didn't read my original post. Here is the line of thinking again, condensed:

  1. universal optimal intelligence requires simulating the universe to high fidelity (AIXI)
  2. as our intelligence grows towards the ideal in (1), approaching but never achieving it, we will simulate the universe in ever higher fidelity
  3. intelligence is simulation
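To spell out step 1: AIXI chooses actions by expectimax over a Solomonoff mixture of all environment programs, i.e. it effectively runs every computable universe consistent with its observation history. Sketching Hutter's definition (notation approximate, transcribed from memory):

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\;(r_k + \cdots + r_m)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $U$ is a universal Turing machine and $\ell(q)$ the length of environment program $q$; the inner sum is the "simulating the universe" part, since it runs every program consistent with the history.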

rhollerith, if I had a perfect simulation of you, I could evaluate the future evolution of your mindstate after reading each of millions of potential posts I could write, and eventually find the optimal post that would convince you. Unfortunately, I don't have that perfect simulation, and I don't have that much computation, but it gives you an idea of the utility such a simulation would have.

If I had a perfect simulation of your chess program, then with just a few more lines of code I would have a chess program that is strictly better than yours. And this relates directly to the evolution of intelligence in social creatures.
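To make the "few more lines of code" concrete, here is a toy sketch of the same idea in a single-pile Nim game (take 1-3 stones, whoever takes the last stone wins) rather than chess. Everything here is hypothetical illustration: `opponent_move` stands in for the perfect simulation of the other player, and the wrapper simply plays out each candidate move against that simulation and keeps any move that wins.

```python
def opponent_move(pile):
    # Stand-in for a perfect simulation of the opponent: a fixed,
    # imperfect policy that takes everything when it can win outright,
    # otherwise takes a single stone.
    return pile if pile <= 3 else 1

def exploiter_move(pile):
    # The "few more lines of code": try each legal move, simulate the
    # rest of the game against the opponent's exact policy, and play
    # any move that leads to a win; fall back to taking 1 stone.
    for take in range(1, min(3, pile) + 1):
        if wins_after(pile - take):
            return take
    return 1

def wins_after(pile):
    # True if the exploiter wins from this position with the opponent
    # to move. Taking the last stone wins the game.
    if pile == 0:
        return True       # our previous move took the last stone
    pile -= opponent_move(pile)
    if pile == 0:
        return False      # the opponent took the last stone
    return wins_after(pile - exploiter_move(pile))
```

Because the wrapper can simulate the opponent exactly, it never does worse than the opponent's own policy against itself, which is the sense in which access to a perfect simulation yields a strictly stronger player.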

Comment author: rhollerith_dot_com 02 September 2010 11:56:52PM *  1 point [-]

Jacob, I am the only one replying to your replies to me (and no one is voting me up). I choose to take that as a sign that this thread is insufficiently interesting to sufficient numbers of LWers for me to continue.

Note that doing so is not a norm of this community, although I would like it if it were; IIRC it was one of the planks or principles of a small movement on Usenet in the 1990s or very early 2000s.