whpearson comments on Open Thread: December 2009 - Less Wrong

Post author: CannibalSmith 01 December 2009 04:25PM


Comment author: whpearson 14 December 2009 10:59:19AM 0 points

Are people interested in discussing bounded-memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.

Comment author: wedrifid 14 December 2009 11:15:05AM 0 points

Are people interested in discussing bounded-memory rationality?

Sure. What about it in particular? Care to post some insights?

Comment author: whpearson 17 December 2009 12:28:27PM 0 points

Would my other reply to you be an interesting/valid way of thinking about the problem? If not, what were you looking for?

Comment author: wedrifid 17 December 2009 12:50:54PM 0 points

Would my other reply to you be an interesting/valid way of thinking about the problem? If not, what were you looking for?

Pardon me. Missed the reply. Yes, I'd certainly engage with that subject if you fleshed it out a bit.

Comment author: whpearson 14 December 2009 04:17:13PM 0 points

I was thinking about starting with very simple agents: things like 1 input, 1 output, and 1 bit of memory, and looking at them from a decision-theory point of view, asking questions like "Would we view it as having a goal/decision theory?" If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision-theory module in terms of the complexity of the function it can represent?
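The kind of agent described above is small enough to enumerate exhaustively. A minimal sketch (illustrative only; the class and example are my own, not from the thread): with 1 binary input, 1 binary output, and 1 bit of memory, an agent's entire behaviour is a lookup table from (input, memory) to (output, next memory), so there are only 4^4 = 256 such agents.

```python
from itertools import product

class OneBitAgent:
    """An agent with 1 binary input, 1 binary output, and 1 bit of memory.

    Its behaviour is fully specified by a lookup table mapping
    (input_bit, memory_bit) -> (output_bit, next_memory_bit).
    """
    def __init__(self, table):
        self.table = table
        self.memory = 0  # initial memory state

    def step(self, input_bit):
        output, self.memory = self.table[(input_bit, self.memory)]
        return output

# Example instance: an agent that outputs 1 exactly when the current
# input differs from the previous one (memory stores the last input).
change_detector = OneBitAgent({
    (i, m): (1 if i != m else 0, i) for i, m in product((0, 1), repeat=2)
})

outputs = [change_detector.step(b) for b in (0, 1, 1, 0)]
print(outputs)  # [0, 1, 0, 1]
```

Because the whole agent space is only 256 tables, one could enumerate it and ask, for each agent, whether ascribing a goal or decision theory to it predicts its behaviour better than just reading off the table.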

Comment author: wedrifid 17 December 2009 12:41:56PM 0 points

Things like 1 input, 1 output, and 1 bit of memory, and looking at them from a decision-theory point of view, asking questions like "Would we view it as having a goal/decision theory?"

I tend to let other people draw those lines up. It just seems like defining words and doesn't tend to spark my interest.

If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision-theory module in terms of the complexity of the function it can represent?

I would be interested to see where you went with your answer to that one.