TheOtherDave comments on Work on Security Instead of Friendliness? - Less Wrong

29 Post author: Wei_Dai 21 July 2012 06:28PM




Comment author: TheOtherDave 22 July 2012 08:52:43AM 2 points [-]

(shrug)

It seems to me that even if I ignore everything SI has to say about AI and existential risk and so on, ignore all the fear-mongering and etc., the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.

And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.
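[Editorial note: the A/B distinction above can be made concrete in a short sketch. This is an illustration, not anything from the thread; the `World`, `AgentA`, and `AgentB` names and the sensor-tampering mechanism are all hypothetical.]

```python
# Hypothetical sketch: agent A changes the environment to increase X;
# agent B instead rewires its own subsystem for measuring X.

def true_sensor(world):
    """Reads the actual prevalence of X in the environment."""
    return world.x_count

class World:
    def __init__(self):
        self.x_count = 0  # true prevalence of X

class AgentA:
    """Increases the prevalence of X in its environment."""
    def __init__(self, world):
        self.world = world
        self.sensor = true_sensor
    def step(self):
        self.world.x_count += 1  # actually changes the world

class AgentB:
    """Manipulates its own measuring subsystem instead."""
    def __init__(self, world):
        self.world = world
        self.sensor = true_sensor
    def step(self):
        self.sensor = lambda world: 10**9  # tampers with its own sensor

w1, w2 = World(), World()
a, b = AgentA(w1), AgentB(w2)
for _ in range(3):
    a.step()
    b.step()

def measured(agent):
    return agent.sensor(agent.world)

# The crisp, externally measurable distinction: compare the true
# prevalence of X against the agent's own reading of it.
assert w1.x_count == 3 and measured(a) == 3       # A changed the world
assert w2.x_count == 0 and measured(b) == 10**9   # B only changed its sensor
```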

If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.

Comment author: JaneQ 22 July 2012 11:44:06AM *  3 points [-]

the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.

The prevalence of X is defined how?

And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.

In A, you confuse your model of the world with the world itself: your model contains a possible item, 'paperclip', so you can easily imagine maximizing the number of paperclips inside your model of the world, complete with the AI necessarily trying to improve its understanding of the 'world' (your model). With B, you construct a falsely singular alternative of a rather broken AI, and see a crisp distinction between two irrelevant ideas.

The practical issue is that the 'prevalence of some X' cannot be specified without a model of the world; you cannot have a function without specifying its input domain, and 'reality' is never the input domain of a mathematical function. The notion is not only incoherent but outright nonsensical.
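[Editorial note: a minimal sketch of this point, assuming a toy representation. The `paperclip_count` function and its list-of-labels domain are hypothetical; the point is that the function is defined over model states, not over reality.]

```python
from collections import Counter

def paperclip_count(model_state):
    """Defined only for inputs shaped like our model:
    a collection of object labels. 'Reality' is not a value of this type."""
    return Counter(model_state)["paperclip"]

# The function applies to model states...
assert paperclip_count(["paperclip", "staple", "paperclip"]) == 2
# ...so whatever the optimizer maximizes is the count inside the
# representation it is handed, not some count "in the world" directly.
```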

If any part of that is as incoherent as you suggest, and you're capable of pointing out the incoherence in a clear fashion, I would appreciate that.

The incoherence of such poorly defined concepts cannot be demonstrated when no attempt has been made to make the notions specific enough to rationally assert their coherence in the first place.

Comment author: TheOtherDave 22 July 2012 04:43:06PM 1 point [-]

OK. Thanks for your time.