
Squark comments on Versions of AIXI can be arbitrarily stupid - Less Wrong Discussion

15 Post author: Stuart_Armstrong 10 August 2015 01:23PM




Comment author: Squark 13 August 2015 06:45:34PM 1 point

If we find a mathematical formula describing the "subjectively correct" prior P and give it to the AI, the AI will still effectively use a different prior initially, namely the convolution of P with some kind of "logical uncertainty kernel". IMO this means we still need a learning phase.
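A minimal toy sketch of what "the convolution of P with a logical uncertainty kernel" might look like in a finite setting (the kernel K, the three-hypothesis space, and all the numbers here are invented for illustration, not taken from the comment):

```python
import numpy as np

# P is the "subjectively correct" prior we hand to the AI,
# over a toy space of 3 hypotheses.
P = np.array([0.7, 0.2, 0.1])

# K[i, j] = probability that the agent's imperfect logical reasoning
# treats true hypothesis j as hypothesis i. A stochastic matrix
# standing in for the "logical uncertainty kernel".
K = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

# The prior the agent effectively uses at first: K applied to P,
# i.e. P smeared toward uniform by the agent's logical uncertainty.
P_eff = K @ P
print(P_eff)  # less concentrated than P, though K is close to identity
```

The point of the sketch is only that P_eff differs from P even though we specified P exactly, which is why a learning phase (during which the agent's logical uncertainty shrinks and K approaches the identity) would still be needed.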

Comment author: Wei_Dai 13 August 2015 08:57:00PM 1 point

In the post you linked to, you mention at the end a proposed "fetus" stage during which the agent receives no external inputs. Did you ever write the posts describing it in more detail? I have to say, though, that my initial reaction to the idea is skeptical. Humans don't have a fetus stage where we think/learn about math with external inputs deliberately blocked off. Why would artificial agents need one? If an agent can't simultaneously learn about math and process external inputs, that seems like a flaw in the basic design that we should fix rather than work around.

Comment author: Squark 14 August 2015 06:20:37PM 1 point

I didn't develop the idea, and I'm still not sure whether it's correct. I'm planning to get back to these questions once I'm ready to use the theory of optimal predictors to put everything on a rigorous footing. So I'm not sure we really need to block the external inputs. Note, however, that the AI is in a sense more fragile than a human, since the AI is capable of self-modifying in irreversibly damaging ways.