faul_sname comments on Muehlhauser-Goertzel Dialogue, Part 1 - Less Wrong

Post author: lukeprog 16 March 2012 05:12PM

Comment author: faul_sname 16 March 2012 11:20:04PM 1 point

I read this as effectively saying that paperclip maximizers / Mickey Mouse maximizers would not permanently populate the universe because self-copiers would be better at maximizing their goals. Which makes sense: the paperclips Clippy produces don't produce more paperclips, but the copies the self-copier creates do copy themselves. So it's quite possibly the difference between polynomial and exponential growth.
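(A toy sketch of that growth-rate contrast, with made-up numbers purely for illustration; the production rate and the doubling rule below are my assumptions, not anything from the thread.)

```python
# Toy model: compare the cumulative output of a single non-replicating
# maximizer against a population of self-copiers that doubles each step.
# The rate (1,000 clips per step) is an arbitrary illustrative choice.

def fixed_output(rate, steps):
    """Cumulative paperclips from one factory producing at a constant rate."""
    return rate * steps  # linear in time, i.e. polynomial growth

def copier_population(steps):
    """Self-copiers that each duplicate once per step (exponential growth)."""
    return 2 ** steps

for t in (10, 20, 30):
    print(f"t={t}: factory={fixed_output(1000, t):,} clips, "
          f"copiers={copier_population(t):,} agents")

# By t=30 the factory has made 30,000 clips, while the copiers already
# number over a billion agents.
```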

So Clippy probably is unrealistic. Not that reproduction-maximizing AIs are any better for humanity.

Comment author: Mitchell_Porter 17 March 2012 01:16:45AM 5 points

A paperclip maximizer can create self-reproducing paperclip makers.

It's quite imaginable that somewhere in the universe there are organisms which either resemble paperclips (maybe an intelligent gastropod with a paperclip-shaped shell) or which have a fundamental use for paperclip-like artefacts (they lay their eggs in a hardened tunnel dug in a paperclip shape). So while it is outlandish to imagine that the first AGI made by human beings will end up fetishizing an object which in our context is a useful but minor artefact, what we would call a "paperclip maximizer" might have a much higher probability of arising from such a species, as a degenerate expression of some of its basic impulses.

The real question is, how likely is that, or indeed, how likely is any scenario in which superintelligence is employed to convert as much of the universe as possible to "X", remembering that "interstellar civilizations populated by beings experiencing growth, choice, and joy" is also a possible value of X.

It would seem that universe-converting X-maximizers are a somewhat likely, but not an inevitable, outcome of a naturally intelligent species experiencing a technological singularity. But we don't know how likely that is, and we don't know what possible Xs are likely.

Comment author: Manfred 17 March 2012 03:15:04AM 7 points

There is nothing stopping a paperclip maximizer from simply behaving like a self-copier, if that works better. And then once it "wins," it can make the paperclips.
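(A back-of-the-envelope sketch of that point, with assumed numbers: a maximizer that spends its first k steps self-copying and only then produces clips beats one that produces from the start for almost any k, so copying is just an instrumental strategy.)

```python
# Toy model: a paperclip maximizer has T total steps. It spends k steps
# doubling itself, then all 2**k copies produce clips at rate r for the
# remaining T - k steps. T, r, and the doubling rule are assumptions.

def clips(k, T=50, r=1000):
    """Total paperclips from self-copying for k steps, then producing."""
    return (2 ** k) * r * (T - k)

best_k = max(range(50), key=clips)
print(f"produce from the start (k=0): {clips(0):,} clips")
print(f"copy first for k={best_k} steps:  {clips(best_k):,} clips")

# Copying first wins by orders of magnitude, so a maximizer that behaves
# like a self-copier is still just maximizing paperclips.
```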

So I think the whole objection makes very little sense.