Tim_Tyler comments on Invisible Frameworks - Less Wrong

12 Post author: Eliezer_Yudkowsky 22 August 2008 03:36AM



Comment author: Tim_Tyler 22 August 2008 09:04:19PM 0 points

Re: no distinction whatever is made between "intelligent" and "sentient".

It seems irrelevant in this context. The paper is about self-improving systems. Normally these would be fairly advanced, and so would be both intelligent and sentient.

Re: the few instances where an AI would change their utility function mentioned in the paper are certainly not exhaustive, I found the selection quite arbitrary.

How do you think these cases should be classified?

Re: The second flaw in the little abstract above was the positing of "drives".

That's the point of the paper: that a chess program, a paper clip maximiser, and a share-price maximiser will share some fundamental and important traits and behaviours.

Re: microeconomics applying to humans.

Humans aren't perfectly rational economic agents, but they are approximations of them. Of course microeconomics applies to humans.

Re: I see nothing of the vastness of mindspace in this paper.

The framework allows for arbitrary utility functions. What more do you want?