wedrifid comments on Decision Theory FAQ - Less Wrong

Post author: lukeprog 28 February 2013 02:15PM


Comment author: wedrifid 14 March 2013 01:35:22PM 1 point

What is the force of "could" here?

The force is that all this talk about understanding the 'pain/pleasure' axis would be a complete waste of time for a paperclip maximiser. In most situations it would be more efficient not to bother with it at all and to spend its optimisation efforts on making more efficient relativistic rockets, so as to claim more of the future light cone for paperclip manufacture.

It would require motivation for the paperclip maximiser to expend computational resources understanding the arbitrary quirks of DNA-based creatures. For example, some contrived game of Omega's which rewards arbitrary things with paperclips. Or finding itself emerging on a human-inhabited world, which would make the ability to understand humans a short-term instrumental goal for the purpose of exterminating the threat more efficiently.

By analogy, if I were building a perpetual motion machine but allegedly "could" grasp the second law of thermodynamics, the modal verb is doing an awful lot of work.

Terrible analogy. Not understanding "pain and pleasure" is in no way similar to believing it can create a perpetual motion machine. Better analogy: an engineer designing microchips allegedly 'could' grasp analytic cubism, if she had some motivation to do so. It would be a distraction from her primary interests, but if someone paid her then maybe she would bother.

Surely, if I grasped the second law of thermodynamics, then I'd stop. Likewise, if the paperclipper were to be consumed by unbearable agony, it would stop too.

Now "if" is doing a lot of work. If the paperclipper were fundamentally different from a paperclipper and were actually similar to a human or other DNA-based relative capable of experiencing 'agony', and assuming agony were just as debilitating to the paperclipper as to a typical human... then sure, all sorts of weird stuff follows.

The paperclipper simply hasn't understood the nature of what it was doing.

I prefer the word "True" in this context.

Is the qualia-naive paperclipper really superintelligent - or just polymorphic malware?

To the extent that you believed that such polymorphic malware is theoretically possible and constituted most possible minds, it would be possible for your model to be used to accurately describe all possible agents---it would just mean systematically using different words. Unfortunately I don't think you are quite at that level.

Comment author: davidpearce 14 March 2013 03:23:40PM 0 points

Wedrifid, granted, a paperclip-maximiser might be unmotivated to understand the pleasure-pain axis and the qualia-spaces of organic sentients. Likewise, we can understand how a junkie may not be motivated to understand anything unrelated to securing his supply of heroin - and a wireheader in anything beyond wireheading. But superintelligent? Insofar as the paperclipper - or the junkie - is ignorant of the properties of alien qualia-spaces, then it/he is ignorant of a fundamental feature of the natural world - hence not superintelligent in any sense I can recognise, and arguably not even stupid. For sure, if we're hypothesising the existence of a clippiness/unclippiness qualia-space unrelated to the pleasure-pain axis, then organic sentients are partially ignorant too. Yet the remedy for our hypothetical ignorance is presumably to add a module supporting clippiness - just as we might add a CNS module supporting echolocatory experience to understand bat-like sentience - enriching our knowledge rather than shedding it.

Comment author: Creutzer 14 March 2013 03:33:13PM 2 points

But superintelligent? Insofar as the paperclipper - or the junkie - is ignorant of the properties of alien qualia-spaces, then it/he is ignorant of a fundamental feature of the natural world - hence not superintelligent in any sense I can recognise, and arguably not even stupid.

What does (super-)intelligence have to do with knowing things that are irrelevant to one's values?

Comment author: whowhowho 14 March 2013 04:40:18PM 0 points

What does knowing everything about airline safety statistics, and nothing else, have to do with intelligence? That sort of thing is called savant ability -- short for "idiot savant".

Comment author: [deleted] 15 March 2013 01:16:48PM 0 points

I guess there's a link missing (possibly due to a missing <http://> in the Markdown) after the second word.