TheAncientGeek comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

20 Post author: Stuart_Armstrong 15 May 2012 10:23AM




Comment author: private_messaging 16 May 2012 06:53:53AM -1 points

You think it is in principle impossible to make (an implementation of) AIXI that understands the map/territory distinction, and values paperclips in the territory more than paperclips in the map?

You need to somehow specify a conversion from the real world state (quarks, leptons, etc.) to a number of paperclips, such that it still counts paperclips when they are arranged differently or have slightly different compositions. That conversion is essentially a map.

You do not want the goal to distinguish between '1000 paperclips lying in a box in this specific configuration' and '1000 paperclips lying in a box in that specific configuration'.

There is no such discriminator in the territory; it exists only in your mapping process.
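The point above can be sketched in code. This is a hypothetical toy example (the `Object` class, `kind` field, and `paperclip_utility` function are all invented for illustration): the utility function operates on the agent's world model, and the classification of objects as paperclips is exactly the map-like conversion the comment describes. The goal deliberately ignores configuration details such as position.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Object:
    kind: str        # classification produced by the agent's world model (the "map")
    position: tuple  # configuration detail the goal should not care about

def paperclip_utility(world_model):
    """Count paperclips, ignoring how they are arranged.

    `world_model` is a list of classified objects -- the agent's map,
    not the territory. Deciding 'is this a paperclip?' is the
    map-like conversion from raw world state.
    """
    return sum(1 for obj in world_model if obj.kind == "paperclip")

# Two boxes with the same clips in different configurations:
box_a = [Object("paperclip", (0, i)) for i in range(1000)]
box_b = [Object("paperclip", (i, 0)) for i in range(1000)]

# The goal assigns them equal value, because the discriminator
# between configurations lives only in the map, not in the utility.
assert paperclip_utility(box_a) == paperclip_utility(box_b) == 1000
```

The sketch shows why the goal is defined over the map: the function can only abstract over 'configuration' because some modeling process has already carved the world into objects with kinds and positions.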

I feel that much of the reasoning here is driven by verbal confusion. To understand the map/territory issue is to understand the above. But 'understand' also has the meaning in 'understand how to drive a car', with the implied sense that understanding the map/territory distinction would somehow free you from the associated problems.

Comment author: TheAncientGeek 12 March 2014 08:25:02PM 0 points

Indeed. The problem of making sure that you are maximizing the real entity you want to maximize, and not a proxy, is roughly equivalent to disproving solipsism, which itself is widely regarded by philosophers as almost impossible. Realists tend to assume their way out of the quandary... but assumption isn't proof. In other words, there is no proof that humans are maximizing (good stuff), and not just (good stuff porn).