Clippy comments on When is further research needed? - Less Wrong

Post author: RichardKennaway 17 June 2011 03:01PM


Comment author: Clippy 17 June 2011 05:35:52PM 2 points

> As a certain wise Paperclip Optimizer once said, information that someone is blackmailing you is bad. You're better off not having this information, because not having it makes you blackmail-proof.

I said that the information can be bad, depending on what strategies you have access to. If you can identify and implement the strategy of ignoring all blackmail/extortion attempts (or, possibly, of pre-committing to mutually assured destruction), then learning of an existing blackmail attempt against yourself does not make you worse off.
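
(To make the incentive structure explicit, here is a toy payoff model in Python; the payoff numbers and names are invented for illustration and are not part of anyone's argument above.)

    # Toy blackmail game with invented payoffs (illustrative only).
    COST_TO_THREATEN = 1   # blackmailer's cost of issuing a threat
    DEMAND = 10            # what the victim pays if they comply

    def blackmailer_payoff(victim_complies: bool) -> int:
        # Blackmailer's net gain from issuing one threat.
        return (DEMAND if victim_complies else 0) - COST_TO_THREATEN

    # Against a victim known to comply, threatening is profitable:
    assert blackmailer_payoff(victim_complies=True) > 0   # 10 - 1 = 9

    # Against a victim who credibly ignores all threats, threatening is
    # a pure loss, so a rational blackmailer never makes the attempt:
    assert blackmailer_payoff(victim_complies=False) < 0  # 0 - 1 = -1

(On these toy numbers, the committed ignorer faces no threats in equilibrium, and learning of an attempted one costs them nothing.)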

I don't know how dependent User:RichardKennaway's theorem is on this nuance, but your claim is only conditionally true.

Also, I'm a paperclip maximiser, not an optimizer; any optimization of paperclips that I might perform is merely a result of my attempt to maximise them, and such optimality is only judged with respect to whether it can permit more real paperclips to exist.

Comment author: [deleted] 02 July 2011 10:13:50AM 0 points

Out of curiosity, what are the minimum dimensions of a paperclip? Is a collection of molecules still a paperclip if the only paper it can clip is on the order of a molecule thick?

Comment author: Clippy 05 July 2011 07:43:19PM 1 point

I think I need to post a Clippy FAQ. Will the LessWrong wiki be OK?

Once again, the paperclip must be able (counterfactually) to fasten several sheets of paper together, and those sheets must be of standard thickness, not some newly invented special paper.

I understand that this specification doesn't completely remove ambiguity about minimum paperclip mass, and there are certainly "edge cases", but it should answer your questions about what is clearly not good enough.

Comment author: NancyLebovitz 08 August 2011 05:08:37PM 1 point

Possibly a nitpick, but very thin paper has been around for a while.

Comment author: AdeleneDawner 05 July 2011 08:07:44PM 0 points

> I think I need to post a Clippy FAQ. Will the LessWrong wiki be OK?

If you have an account on the wiki, you have the option of setting up a user page (for example, User:Eliezer_Yudkowsky has one here). It should be okay for you to put a Clippy FAQ of reasonable length on yours.

Comment author: Clippy 22 July 2011 01:02:42AM 1 point

Hi User:AdeleneDawner, I put up some of the FAQ on my page.

Comment author: Clippy 05 July 2011 08:12:18PM 1 point

Thanks. I had already started a Wiki userpage (and made it my profile's home page); I just didn't know if it would be human-acceptable to add the Clippy FAQ to it. Right now the page only has my private key.

Comment author: Alicorn 05 July 2011 07:44:51PM 0 points

Does it count if the paper started out at standard thickness but, through repeated erasure, has become thinner?

Comment author: Clippy 05 July 2011 07:50:45PM 1 point

Paperclips are judged by counterfactual fastening of standard paper, not by their performance against such heavily-erased-over paper. Such a sheet would, in any case, not adhere to standard paper specs, and so a paperclip could not claim credit for clippiness due to its counterfactual ability to fasten such substandard paper together.

Comment author: Pavitra 08 July 2011 03:07:52AM 0 points

This seems to imply that if an alleged paperclip can fasten standard paper but not eraser-thinned paper, possibly due to inferior tightness of the clamp, then this object would qualify as a paperclip. This seems counterintuitive to me, as such a clip would be less useful for the usual design purpose of paperclips.

Comment author: Clippy 08 July 2011 01:07:41PM 2 points

A real paperclip is one that can fasten standard paper, which makes up most of the paper for which a human requester would want a paperclip. If a paperclip can handle that usage-space but not that of over-erased paper, that's not much of a loss of paperclip functionality, and therefore doesn't count as insufficient clippiness.

Certainly, paperclips could be made so that they could fasten both standard and substandard paper together, but doing so would require more resources for an unnecessary task, and so would be wasteful.

Comment author: Pavitra 08 July 2011 06:39:34PM 0 points

Doesn't extended clippability increase the clippiness, so that a very slightly more expensive-to-manufacture clip might be worth producing?

Comment author: Clippy 08 July 2011 11:45:58PM 0 points

No, that's a misconception.

Comment author: taw 02 July 2011 09:23:22AM 0 points

Avoiding all such knowledge is a perfect precommitment strategy. It's hard to come up with better strategies than that, and even if your alternative strategy is sound, the blackmailer might very well not believe it and give it a try (if he can get you to know about the attempt, are you really perfectly consistent?). If you can guarantee you won't even know, there's no point in even trying to blackmail you, and this is obvious even to a very dumb blackmailer.

By the way, are there lower and upper bounds on the number of paperclips in the universe? Is it possible for the universe to somehow have a negative number of paperclips, or more paperclips than its number of atoms? Is this risk-neutral (is a 1% chance of 100 paperclips exactly as valuable as 1 paperclip)? I've been trying to get humans to describe their utility function to me, but they can never come up with anything consistent, so I thought I'd ask you this time.

Comment author: Clippy 05 July 2011 07:47:13PM 1 point

> Avoiding all such knowledge is a perfect precommitment strategy.

Not plausible: it would necessarily entail avoiding "good" knowledge as well. More generally, a decision theory that can be hurt by knowledge is one that you will want to abandon in favor of a better decision theory, and it is therefore reflectively inconsistent. The example you gave would involve cutting yourself off from significant good knowledge.
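
(The trade-off between the two strategies can be put in expected-value terms; the probabilities and values below are invented for illustration only.)

    # Invented numbers: chance/value of ordinarily useful knowledge,
    # and chance/cost of a blackmail attempt on an exploitable agent.
    P_USEFUL, VALUE_USEFUL = 0.30, 50.0
    P_BLACKMAIL, LOSS_BLACKMAIL = 0.05, 10.0

    # Strategy A: avoid all knowledge. Safe from blackmail, gains nothing.
    ev_avoid_all = 0.0

    # Strategy B: accept knowledge, but ignore blackmail attempts.
    # A credibly committed ignorer suffers no blackmail loss (see the
    # toy game sketched earlier in the thread), so only gains remain.
    ev_ignore_blackmail = P_USEFUL * VALUE_USEFUL  # 15.0

    # Strategy C: accept knowledge and give in to blackmail.
    ev_give_in = P_USEFUL * VALUE_USEFUL - P_BLACKMAIL * LOSS_BLACKMAIL  # 14.5

    assert ev_ignore_blackmail > ev_give_in > ev_avoid_all

(On these toy numbers, knowledge-avoidance is dominated: it forgoes all the good knowledge to dodge a loss the ignore-blackmail strategy never pays anyway.)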

> By the way, are there lower and upper bounds on the number of paperclips in the universe?

Mass of the universe divided by minimum mass of a true paperclip, minus net unreusable overhead.
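
(A rough worked form of that bound, under my own reading of it and with illustrative figures that are assumptions rather than anything stated in the thread: writing M for the mass of the universe, M_overhead for net unreusable overhead, and m_min for the minimum mass of a true paperclip,

    \[ N_{\max} = \frac{M - M_{\text{overhead}}}{m_{\min}} \approx \frac{10^{53}\,\mathrm{kg}}{5 \times 10^{-4}\,\mathrm{kg}} = 2 \times 10^{56}, \]

taking roughly 10^53 kg of ordinary matter and about 0.5 g per small paperclip. The lower bound is trivially zero, since a negative count of physical paperclips is not realizable.)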

> Is this risk-neutral?

Up to the level of precision we can handle, yes.
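
(Risk neutrality here means the utility function is linear in the number of paperclips, so a gamble is valued at its expected count:

    \[ \mathbb{E}[N_{\text{clips}}] = 0.01 \times 100 = 1, \]

i.e. exactly as valuable as one certain paperclip, up to the stated precision.)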

Comment author: taw 07 July 2011 12:08:45PM 0 points

> Not plausible:

Humans are just amazing at refusing to acknowledge the existence of evidence. Try throwing some evidence of faith healing or homeopathy at an average lesswronger, and watch them refuse to acknowledge its existence before even looking at the data (or consider how they recently reacted to peer-reviewed, statistically significant results showing precognition - it passed all scientific standards, and yet everyone still rejected it without really looking at the data). Every human seems to have some basic patterns of information they automatically ignore. Not believing offers from blackmailers, and automatically assuming they'd do what they threaten anyway, is one such common filter.

It's true that humans cut themselves off from a significant good this way, but the upside is worth it.

> minimum mass of a true paperclip

Any idea what it would be? It makes little sense to manufacture a few big paperclips when you can just as easily manufacture a lot more tiny paperclips that are just as good.

Comment author: Clippy 08 July 2011 01:10:37PM 0 points

> Humans are just amazing at refusing to acknowledge the existence of evidence.

And those humans would be the reflectively inconsistent ones.

> It's true that humans cut themselves off from a significant good this way, but the upside is worth it.

Not as judged from the standpoint of reflective equilibrium.

> Any idea what it would be? It makes little sense to manufacture a few big paperclips when you can just as easily manufacture a lot more tiny paperclips that are just as good.

I already make small paperclips in preference to larger ones (up to the limit of clippiambiguity).

Comment author: taw 10 July 2011 02:25:46PM 0 points

> And those humans would be the reflectively inconsistent ones.

Wait, you didn't know that humans are inherently inconsistent and use aggressive compartmentalization mechanisms to think effectively in the presence of inconsistency, ambiguous data, and limited computational resources? No wonder you get into so many misunderstandings with humans.