
Manfred comments on Arguing Orthogonality, published form - Less Wrong Discussion

Post author: Stuart_Armstrong | 18 March 2013 04:19PM | 10 points


Comment author: Manfred | 18 March 2013 10:43:32PM | 3 points

Thanks! Nice paper overall.

Minor nitpicks up to section 3.1:

> The relevant criteria is whether the agent

Should be "criterion."

> Since we are looking to resolve a mainly empirical question – what systems of motivations could we actually code into a putative AI – this theoretical disagreement is highly problematic.

I'm not sure what you mean by "problematic" here - are you just trying to say "useless" nicely? If so, I'd construct the sentence more positively: "we can settle the empirical question without needing to resolve the theoretical disagreement."

> to accumulate more power, to become more intelligence and to be able to cooperate with other agents

Should be "intelligent."

> , the rational agent will then attempt to maximise it, using the approaches in all cases

Should be "same approaches," I assume.

Comment author: Stuart_Armstrong | 19 March 2013 10:53:12AM | 0 points

Thanks for that! I won't correct them here, though - I'll wait to see what the final published version is, and update then.