RobinZ comments on Fusing AI with Superstition - Less Wrong

-6 Post author: Drahflow 21 April 2010 11:04AM




Comment author: Drahflow 22 April 2010 06:05:39AM 0 points

Not having heard your argument against "Describing ..." yet, but assuming you believe one to exist, I estimate the chance of my still believing it after hearing your argument at 0.6.

Now for guessing the two problems:

The first possible problem will be describing "mass" and "energy" to a system which basically has only sensor readings. However, if we can describe concepts like "human" or "freedom", I expect descriptions of matter and energy to be simpler (even though 10,000 years ago, telling somebody about "humans" was easier than telling them about mass, but that was not the same concept of "humans" we would actually like to describe). And for "mass" and "energy" the physicists already have quite formal descriptions.

The other problem is that mass and energy might not be contained within a given region of space; per physics, it is merely that the probability of their having an effect outside that region drops to virtually zero as the distance increases. Thus removing all energy and matter in one place might produce subtle effects somewhere totally different. However, I expect these effects to be so subtle as not to matter to the AI, because they already fall below the local quantum noise at very short distances.

Regarding the condescending "I say this...": I would have liked it more if you had stated explicitly that your preference originates from a wish to further my learning. I have no business optimizing your value function. Anyway, I operate by Crocker's Rules.

Comment author: RobinZ 22 April 2010 11:37:23AM 0 points

Regarding the condescending "I say this...": I would have liked it more if you had stated explicitly that your preference originates from a wish to further my learning. I have no business optimizing your value function. Anyway, I operate by Crocker's Rules.

I apologize - that was, in fact, my intent.