In response to Worse Than Random
Comment author: Silas 12 November 2008 04:54:47PM 2 points [-]

@Caledonian and Tiiba: If we knew where the image was, we wouldn't need the dots.

Okay, let's take a step back: the scenario, as Caledonian originally stated it, was that the museum people could make *a patron* better see the image if the *museum people* put random dots on the image. (Pronouns avoided for clarity.) So the problem is framed as whether you can make *someone else* see an image that *you* already know is there, by somehow exploiting randomness. My response is that, if you already know the image is there, you can improve beyond randomness by placing the dots in a way that highlights the hidden image's lines. In any case, *from that position*, Eliezer_Yudkowsky is correct that you can only improve the patron's detection ability for that image by exploiting your non-random knowledge about the image.

Now, if you want to reframe that scenario, you have to adjust the baselines appropriately. (Apples to apples and all.) Let's look at a different version:

I don't know if there are subtle, barely-visible images that will come up in my daily life, but if there are, I want to see them. Can I make myself better off by adding random gray dots to my vision? By scattering physical dots wherever I go?

I can't see how it would help, but feel free to prove me wrong.

In response to Worse Than Random
Comment author: Silas 11 November 2008 09:47:02PM 3 points [-]

@Joshua_Simmons: I got to thinking about that idea as I read today's post, but I think Eliezer_Yudkowsky answered it therein: Yes, it's important to experiment, but why must your selection of what to try out be random? You should be able to do better by exploiting all of your knowledge about the structure of the space, so as to pick better ways to experiment. To the extent that your non-random choices of what to test do worse than random, it is because your understanding of the problem is so poor as to be worse than random.

(And of course, the only time when searching the small space around known-useful points is a good idea, is when you *already* have knowledge of the structure of the space...)

@Caledonian: That's an interesting point. But are you sure the effect you describe (at science museums) isn't merely due to the brain now seeing a new color gradient in the image, rather than randomness as such? Don't you get the same effect from adding an orderly grid of dots? What about from aligning the dots along the lines of the image?

Remember, Eliezer_Yudkowsky's point was not that randomness can never be an improvement, but that it's always possible to improve beyond what randomness would yield.

In response to Lawful Uncertainty
Comment author: Silas 11 November 2008 03:04:15PM 7 points [-]

So, in short: "Randomness is like poison: Yes, it can benefit you, but only if you feed it to people you don't like."

Comment author: Silas 05 November 2008 09:42:33PM 1 point [-]

Will_Pearson: "Is it literally? Are you saying I couldn't send a message to someone that enabled them to print out a list of the first hundred integers without referencing a human's cognitive structure?"

Yes, that's what I'm saying. It's counterintuitive because you so effortlessly reference others' cognitive structures. In communicating, you assume a certain amount of common understanding, which allows you to know whether your message will be understood. In sending such a message, you rely on that information. You would have to think, "Will they understand what this sentence means?", "Can they read this font?", etc.
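For concreteness, the program under discussion is trivial; a minimal sketch (the function name is my own, purely illustrative). The point above is that whether a recipient recognizes the output as "the first hundred integers" is a fact about the recipient's cognition, not about the code:

```python
def first_hundred():
    """Return the first hundred positive integers as strings, one per list entry."""
    return [str(n) for n in range(1, 101)]

# Emitting the list is easy; knowing the message will be *understood*
# as those integers requires assumptions about the reader.
print("\n".join(first_hundred()))
```

The code specifies the bytes produced, nothing more; everything else in the argument concerns what happens after those bytes reach an observer.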

Tim_Tyler: "The whole idea looks like it needs major surgery to me - at least I can't see much of interest in it as it stands. Think you can reformulate it so it makes sense? Be my guest."

Certainly. All you have to do is read it so you can tell me what about it doesn't make sense.

"Anyway, such a criticism cuts against the original claim as well - since that contained 'know' as well as 'don't know'."

Which contests the point how?

Comment author: Silas 05 November 2008 07:12:23PM 0 points [-]

Okay, fair challenge.

I agree about your metal example, but it differs significantly from my discussion of the list-output program for the non-trivial reason I gave: specifically, the output is defined by its impact on people's cognitive structure.

Look at it this way: Tim_Tyler claims that I know everything there is to know about the output of a program that spits out the integers from 1 to 100. But, when I get the output, what makes me agree that I am in fact looking at those integers? Let's say that when printing it out (my argument can be converted to one about monitor output), I see blank pages. Well, then I know something messed up: the printer ran out of ink, was disabled, etc.

Now, here's where it gets tricky: what if instead it only *sorta* messes up: the ink is low and so it's applied unevenly so that only *parts* of the numbers are missing? Well, depending on how *badly* it messes up, I may or may not still recognize the numbers as being the integers 1-100. It depends on whether it retains enough of the critical characteristics of those numbers for me to so recognize them.

To tie it back to my original point, what this all means is that the output is only defined with respect to a certain cognitive system: that determines whether the numbers are in fact recognizable as 9's, etc. If it's not yet clear what the difference is between this and metal's melting point, keep in mind that we can write a program to find a metal's melting point, but we can't write a program that will look at a printout and know if it retains enough of its form that a human recognizes it as any specific letter -- not yet, anyway.

Comment author: Silas 05 November 2008 05:36:23PM 0 points [-]

Further analysis, you say, Tim_Tyler? Could you please redirect effort away from putdowns and into finding what was wrong with the reasoning in my previous comment?

Comment author: Silas 04 November 2008 11:55:53PM -1 points [-]

Very worthwhile points, Tim_Tyler.

First of all, the reason for my spirited defense of MH's statement is that it looked like a good theory because of how concise it was, and how consistent it was with my knowledge of programs. So I upped my prior on it and tended to see apparent failures of it as a sign that I wasn't applying it correctly, and that further analysis could yield a useful insight.

And I think that belief is turning out to be true:

"It seems to specify that the output is what is unknown - not the sensations that output generates in any particular observer."

But the sensations *are* a property of the output. In a trivial sense: it is a fact about the output, that a human will perceive it in a certain way.

And in a deeper sense, the numeral "9" means "that which someone will perceive as a symbol representing the number nine in the standard number system". I'm reminded of Douglas Hofstadter's claim that the definition of individual letters is an AI-complete problem, because you must know a wealth of information about the cognitive system to be able to identify the full set of symbols someone will recognize as, e.g., an "A".

This yields the counterintuitive result that, for certain programs, you *must* reference the human cognitive system (or some concept isomorphic thereto) in listing all the facts about the output. That result must hold for any program whose output will eventually establish mutual information with your brain.

Am I way off the deep end here? :-/

Comment author: Silas 04 November 2008 10:47:21PM 0 points [-]

@Eliezer_Yudkowsky: It wouldn't be an exact sequence repeating, since the program would have to handle contingencies, like cows being uncooperative because of insufficiently stimulating conversation.

Comment author: Silas 04 November 2008 08:45:48PM 0 points [-]

Nick_Tarleton: Actually, Tim_Tyler's claim would still be true there, because you may want to print out that list, even if you knew some exact arrangement of atoms with that property.

However, I think Marcello's Rule is still valid there and survives Tim_Tyler's objection: in that case, what you don't know is "the sensation arising from looking at the numbers 1 through 100 prettily printed". Even if you had seen such a list before, you would probably still want to print it out unless your memory were perfect.

My claim generalizes nicely. For example, even if you ran a program to automate a farm and knew exactly how the farm would work, what you don't know in that case is "the sensation of subsisting for x more days". Although Marcello's Rule starts to sound vacuous at that point.

Hey, make a squirrely objection, get a counterobjection twice as squirrely ;-)

Comment author: Silas 03 November 2008 02:09:28PM 0 points [-]

Quick question: How would you build something smarter, in a general sense, than yourself? I'm not doubting that it's possible, I'm just interested in knowing the specific process one would use.

Keep it brief, please. ;-)
