paper-machine comments on Muehlhauser-Goertzel Dialogue, Part 2 - Less Wrong

9 Post author: lukeprog 05 May 2012 12:21AM

Comment author: [deleted] 06 May 2012 06:12:50PM 1 point

You say, "Have you ever seen an ape species evolving into a human species?" You insist on videotapes - on that particular proof.

And that particular proof is one we couldn't possibly be expected to have on hand; it's a form of evidence we couldn't possibly be expected to be able to provide, even given that evolution is true.

-- You're Entitled to Arguments, But Not (That Particular) Proof.

Never mind that formally describing a paperclip maximizer would be dangerous and would increase existential risk.

EDIT: Please consider this a response to this comment as well.

Comment author: private_messaging 06 May 2012 06:31:50PM 0 points

A "dog ate my homework" excuse, in this particular case. Maximizing real-world paperclips when you act upon sensory input is an incredibly tough problem, and it gets a zillion times tougher still if you want the agent to start adding new hardware to itself.

edit:

Meanwhile, designing new hardware, or new weapons, or the like, within a simulation space, without proper AGI, is a solved problem. A real-world paperclip maximizer has to be more inventive than these less general tools running on the same hardware to pose any danger.

Real-world goals seem ontologically basic to humans, and simple to people with little knowledge of the field. In fact, acting on reality based on sensory input is a very tough extra problem, separate from "cross-domain optimization". Even if you had a genie that could solve any mathematically defined problem, it would still be incredibly difficult to get it to maximize real-world paperclips, even though you could use that genie to design anything.
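The gap the comment points at can be sketched as a toy contrast (all names below are illustrative assumptions, not anything from the discussion): a mathematically defined objective can be handed to an optimizer as-is, because evaluating the formula is the whole problem; the "paperclips in the world" objective cannot, because there is no formula to hand over in the first place.

```python
# Toy contrast: a fully specified objective vs. a real-world one.
# All names here are illustrative, not from the original comment.

def formal_objective(x: float) -> float:
    # A mathematically defined problem: maximize -(x - 3)^2.
    # Any optimizer ("genie") can attack this, because the objective
    # is completely specified by the formula itself.
    return -(x - 3.0) ** 2

def crude_maximize(f, lo=-10.0, hi=10.0, steps=100001):
    # Brute-force grid search stands in for the genie: once f is
    # defined, finding its maximum is a routine, solved task.
    xs = (lo + i * (hi - lo) / (steps - 1) for i in range(steps))
    return max(xs, key=f)

best = crude_maximize(formal_objective)
print(round(best, 3))  # the optimum x = 3.0 is recovered mechanically

def paperclips_in_the_world(sensory_input: bytes) -> int:
    # The real-world objective: there is no formula to write here.
    # Turning raw sensor data into "number of paperclips that exist"
    # is itself the unsolved extra problem the comment describes; it
    # cannot be handed to the genie as a defined function.
    raise NotImplementedError("no formal definition available")
```

The point of the sketch is that the hard part is not the optimization step at all, but producing a well-defined objective over sensory input to optimize.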