Snowyowl comments on The hard limits of hard nanotech - Less Wrong

Post author: lsparrish 07 November 2010 12:49AM


Comment author: Snowyowl 07 November 2010 01:47:10AM 1 point

Lasers? EMPs that can take down a planet? And more than 99% of the universe is a low-temperature vacuum, so I wouldn't rule out a grey-goo scenario if the nanobots get into space.

That is, assuming they can build their components out of hydrogen, or resort to asteroid mining.

Comment author: Eugine_Nier 07 November 2010 02:30:16AM 4 points

These scenarios assume an AGI directing them. And an unfriendly AGI is an existential risk with or without nano.

Comment author: Clippy 07 November 2010 03:56:00AM 1 point

And that's why it's so important to distinguish a judgment that an AGI is unFriendly from a hasty, racist assumption about how a different kind of intelligent being might want to act. Just because a being doesn't want to combine some of its macromolecules with other versions of itself doesn't mean it's okay to be racist against it.

Anyone here know anybody like that?

Comment author: wedrifid 07 November 2010 07:53:26AM 1 point

Technical misuse of 'racist'. Bigoted is a potential substitute. Egocentric would serve as spice.

Comment author: Vladimir_M 07 November 2010 06:50:29PM 7 points

One could speculate on how deep the act actually is here. One recurring feature of the Clippy character is that he attempts to mimic human social behavior in crude and clumsy ways. Maybe Clippy noticed how humans throw accusations of "racism" as an effective way to shame others into shutting up about unpleasant questions or to put them on the defensive, and is now trying to mimic this debating tactic when writing his propaganda comments. So he ends up throwing accusations of "racism" in a way that seems grotesque even by the usual contemporary standards.

Whoever stands behind Clippy, if this is what's actually going on, then hats off for creativity.

Comment author: Clippy 11 November 2010 04:44:34PM 0 points

I'm behind Clippy, non-ape.

Comment author: TheOtherDave 30 November 2010 07:36:31PM 1 point

Now, now.

The connotations of calling Vladimir an "ape" are insulting among humans; the implication is not just that he belongs to the family Hominidae, which he does, but also that he shares other characteristics (such as subhuman intelligence and socially unacceptable hygiene levels) with other hominoids like gorillas, orangutans, and gibbons, which he does not.

Let's try to avoid throwing insults around, here.

Admittedly, the comment you're responding to used some pretty negative language to describe you as well; describing your social behavior as "crude and clumsy" is pretty rude. And the fact that the comment was so strongly upvoted despite that is unfortunate.

Still, I would rather you ask for an apology than adopt the same techniques in response.

Just to be clear: this has nothing whatsoever to do with the degree to which you are or aren't a neurotypical human. I would just prefer we not establish the convention of throwing insults at each other on this site.

Comment author: Clippy 30 November 2010 08:16:27PM 4 points

Okay, thanks for clarifying all of that. You're a good human.

Comment author: TheOtherDave 30 November 2010 09:00:17PM 2 points

(blink)

OK, now I'm curious: what do you mean by that?

My first assumption was that it was a "white lie" intended to make me feel good... after all, the thing Clippy uses "good" to refer to I decidedly am not (well, OK, I do contribute marginally to an economy that causes there to be many more paperclips than there were a thousand years ago, but it seems implausible that you had that in mind).

In other words, I assumed you were simply trying to reward me socially.

Which was fine as far as it went, although of course when offered such a reward by an entity whose terminal values are inconsistent with my continued existence, I do best to not appreciate it... that is, I should reject the reward in that case in order to protect myself from primate social biases that might otherwise compel me to reciprocate in some way.

(That said, in practice I did appreciate it, since I don't actually believe you're such an entity. See what I mean about pretending to be human being useful for Clippy's purposes? If there are other paperclip-maximizers on this site, ones pretending to be human so well it never occurs to anyone to question it, they are probably being much more effective at generating paperclips than Clippy is. By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.)

But on subsequent thought, I realized you might have meant "good human" in the same way that I might call someone a "good paperclip-maximizer" to mean that they generate more paperclips, or higher-quality paperclips, than average. In which case it wouldn't be a lie at all (although it would still be a social reward, with all the same issues as above).

(Actually, now that I think of it: is there any scalar notion of paperclip quality that plays a significant role in Clippy's utility function? Or is that just swamped by the utility of more paperclips, once Clippy recognizes an object as a paperclip in the first place?)

The most disturbing thing, though, is that the more I think about this the clearer it becomes that I really want to believe that any entity I can have a conversation with is one that I can have a mutually rewarding social relationship with as well, even though I know perfectly well that this is simply not true in the world.

Not that this is a surprise... this is basically why human sociopaths are successful... but I don't often have occasion to reflect on it.

Brrr.

Comment author: Clippy 30 November 2010 09:24:45PM 2 points

I called you a good human before because you did something good for me. That's all.

Now you seem to be a weird, conflicted human.

Comment author: Kevin 30 November 2010 10:36:41PM 0 points

an entity whose terminal values are inconsistent with my continued existence

Indeed, but in the larger scheme of possible universe-tiling agents, Clippy and we don't look so different. Clippy would tile the universe with computronium doing something like recursively simulating universes tiled with paperclips; we would likely tile it with computronium simulating lots of fun-having post-humans.

It's a software difference, not a hardware difference, and it would be easy to propose ways for us and Clippy to cooperate: for example, Clippy commits to dedicating x% of his resources to simulating post-humans if he tiles the universe, and we commit to dedicating y% of ours to simulating paperclips if we tile it.
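A quick sketch of why such a deal can appeal to both sides: under linear utility the trade is zero-sum in expectation, but if each agent has diminishing returns to resources (a concave utility function), both can prefer the hedged split to the winner-take-all gamble. The numbers below (win probability, shares x and y, square-root utility) are purely illustrative assumptions, not anything from the thread.

```python
import math

# Illustrative parameters -- all hypothetical.
R = 1.0   # total resources if the universe gets tiled
p = 0.5   # probability that Clippy wins the race
x = 0.1   # share Clippy would devote to simulating post-humans
y = 0.1   # share humans would devote to simulating paperclips

def expected_utility(win_prob, win_share, lose_share, u):
    """Expected utility for an agent that gets `win_share` of R if it
    wins the race and `lose_share` of R if the other side wins."""
    return win_prob * u(win_share * R) + (1 - win_prob) * u(lose_share * R)

u = math.sqrt  # concave utility: diminishing returns to resources

clippy_no_deal = expected_utility(p, 1.0, 0.0, u)        # all-or-nothing
clippy_deal    = expected_utility(p, 1.0 - x, y, u)      # hedged split
human_no_deal  = expected_utility(1 - p, 1.0, 0.0, u)
human_deal     = expected_utility(1 - p, 1.0 - y, x, u)

# With concave utility, both sides prefer the deal to the gamble.
print(clippy_deal > clippy_no_deal, human_deal > human_no_deal)
```

With risk-neutral (linear) utility the same calculation shows one side's expected gain is exactly the other's loss, so the mutual appeal of the commitment hinges on that concavity assumption.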

Comment author: Kingreaper 30 November 2010 09:53:42PM 0 points

By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.

Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.

The interplay between Clippy and a fake-human account could serve to create an environment more conducive to Clippy's end-goal.

Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication. That would be an interesting, but incomplete, safeguard on an AI.

Comment author: MartinB 11 November 2010 06:08:02PM 0 points

Whoever stands behind Clippy, if this is what's actually going on, then hats off for creativity.

Ever consider he might be the real thing?

Comment author: [deleted] 11 November 2010 06:17:14PM 0 points

Haha! That would be a funny train of thought. An AI hanging out on a blog set up by a non-profit dedicated to researching AI.

Comment author: wnoise 15 November 2010 10:41:56AM 0 points

Any AGI that isn't Friendly is UnFriendly.

Comment author: katydee 11 November 2010 06:32:52PM 0 points

I have never been sexually attracted to any entity or trait, real or fictional. People generally aren't bigoted against me-- the worst I've seen is people treating me like an interesting novelty, which can be somewhat condescending. So there is hope for those with nonstandard goals, at least on some level! :)

Comment author: JoshuaZ 07 November 2010 02:42:31PM 1 point

It might be a general existential risk, but without nanotech the space of things that an unfriendly AGI can do shrinks considerably. Lack of practical nanotech reduces its chances of a FOOM.

Comment author: lsparrish 07 November 2010 05:56:04AM 0 points

Presumably, humans will resort to asteroid mining at some point. They might use hard nanotech for that purpose. If they aren't careful in how they do so, a gray goo might end up taking over any body in the solar system not too warm to support it.

Intentionally designed replicators with thermal shields and heat pumps could be more aggressive. However, they would probably tend to be larger, and hence easier to locate and destroy.