
private_messaging comments on Wanted: "The AIs will need humans" arguments - Less Wrong Discussion

7 Post author: Kaj_Sotala 14 June 2012 11:01AM




Comment author: private_messaging 18 June 2012 03:38:02PM *  -2 points [-]

You don't define it as ability alone; you define it as ability plus certain material goals. Furthermore, you assume that a superintelligence will necessarily be able to maximize the number of paperclips in the universe as a terminal goal, whereas it is not at all clear that it is even possible to specify that sort of goal. edit: that is to say, material goals are very difficult to formalize. cousin_it had an idea for specifying utility functions for UDT in which the UDT agent has to simulate the entire multiverse (starting from the Big Bang) and find instances of itself inside it: http://lesswrong.com/lw/8ys/a_way_of_specifying_utility_functions_for_udt/ . It's laughably hard to make a dangerous goal.

edit: that is to say, you focus on material goals (perhaps for lack of understanding of any other kind of goal). For example, the koo could try to find values for multiple variables describing a microchip that maximize the microchip's performance; that's an easy goal to define. The baz, by contrast, would try either to attain some material state of the variables and registers of its hardware, resisting shutdown, or to outright pursue the material goal of building a better CPU in reality. The whole goal space you can even think of is but a tiny speck in the enormous space of possible goals: an uninteresting speck that is both hard to reach and obviously counterproductive.
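The "easy to define" kind of goal above can be made concrete with a small sketch. Everything here is invented for illustration: the `performance` function is a toy stand-in for a chip simulator, and the variable ranges are arbitrary. The point is that the goal is fully specified over the model's own variables, with no reference to the material world:

```python
import random

# Toy stand-in for a chip-performance model over a few design variables.
# The formula and ranges are hypothetical; a real tool would call a simulator.
def performance(clock_ghz, pipeline_depth, cache_kb):
    # Deeper pipelines help up to a point, then stall penalties dominate;
    # larger caches help with diminishing returns.
    throughput = (clock_ghz * pipeline_depth) / (1 + 0.1 * pipeline_depth ** 2)
    return throughput + 5 * cache_kb ** 0.5

def random_search(trials=10_000, seed=0):
    """Maximize performance() by sampling candidate designs at random."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = (
            rng.uniform(1.0, 5.0),              # clock (GHz)
            rng.randint(1, 20),                 # pipeline depth (stages)
            rng.choice([64, 128, 256, 512]),    # cache size (KB)
        )
        score = performance(*candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = random_search()
print(best, score)
```

The entire objective lives inside the function being evaluated; nothing in it points at hardware, shutdown buttons, or the outside world, which is what makes this class of goal easy to state compared with a "material" goal over reality itself.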