SimonF comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 15 May 2012 10:23AM




Comment author: MaoShan 16 May 2012 02:57:41AM  4 points

Just some minor text corrections for you:

From 3.1

The utility function picture of a rational agent maps perfectly onto the Orthogonality thesis: here have the goal structure, the utility fu...

...could be "here we have the..."

From 3.2

Human minds remain our only real model of general intelligence, and this strongly direct and informs...

this strongly directs and informs...

From 4.1

“All human-designed rational beings would follow the same morality (or one of small sets of moralities)” sound plausible; in contract “All human-designed superefficient

I think it would be "sounds", since the subject is the argument itself, even though the argument contains plural subjects; and I think you meant "in contrast", but I may be mistaken.

Comment author: SimonF 16 May 2012 12:48:55PM  4 points

From 3.3

To do we would want to put the threatened agent

to do so(?) we would

From 3.4

an agent whose single goal is to stymie the plans and goals of single given agent

of a single given agent

From 4.1

then all self-improving or constructed superintelligence must fall prey to it, even if it were actively seeking to avoid it.

every, or change the rest of the sentence (superintelligences, they were)

From 4.5

There are goals G, such that an entity an entity with goal G

"an entity" appears twice

a superintelligence will goal G can exist.

a superintelligence with(?) goal G