lukeprog comments on What is optimization power, formally? - Less Wrong Discussion

Post author: sbenthall 18 October 2014 06:37PM


Comment author: lukeprog 20 October 2014 06:22:11PM * 0 points

You might say bounded rationality is our primary framework for thinking about AI agents, just as it is in AI textbooks such as Russell & Norvig's. So that question sounds to me like it might sound to a biologist if she were asked whether her sub-area had any connections to that "Neo-Darwinism" thing. :)

Comment author: sbenthall 21 October 2014 12:05:22AM 0 points

That makes sense. I'm surprised that I haven't found any explicit reference to that in the literature I've been looking at. Is that because it is considered to be implicitly understood?

One way to talk about optimization power, maybe, would be to consider a spectrum between unbounded, Laplacean rationality and the dumbest things around. There seems to be a move away from this, though, because it's too tied to notions of intelligence and doesn't look enough at outcomes?

It's this move that I find confusing.
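One way to make that spectrum concrete, as a minimal sketch assuming Yudkowsky's information-theoretic measure from "Measuring Optimization Power" rather than anything stated in this thread, is to count how small a slice of the outcome space an agent steers the world into, judged by its own preference ordering. The function and example values below are illustrative only, not from the discussion:

    import math

    def optimization_power(outcomes, utility, achieved):
        """Bits of optimization: -log2 of the fraction of outcomes ranked
        at least as high as the one actually achieved (Yudkowsky's measure;
        the names used here are hypothetical)."""
        at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
        return -math.log2(at_least_as_good / len(outcomes))

    # An agent that hits the best of 1024 equally likely outcomes exerts 10 bits;
    # a random "dumbest thing around" averages only a bit or so.
    outcomes = range(1024)
    print(optimization_power(outcomes, lambda o: o, achieved=1023))  # 10.0

On this view an unbounded Laplacean reasoner sits near the top of the scale and a random process near the bottom, and the score depends only on the outcome achieved, not on the agent's internal reasoning.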