Comment author: Mass_Driver 17 June 2010 10:27:17PM 1 point [-]

My question is, do you also know that K(E)? K(K(E))?

I have a sensory/gut experience of being a thinking being, or, as you put it, E.

Based on that experience, I develop the abstract belief that I exist, i.e., K(E).

By induction, if K(E) is reliable, then so is K(K(K(K(K(K(K(E))))))). In other words, there is no particular reason to doubt that my self-reflective abstract propositional knowledge is correct, short of doubting the original proposition.

So I like the distinction between E and K(E), but I'm not sure what insights further recursion is supposed to provide.

Comment author: zero_call 21 June 2010 01:53:03AM *  0 points [-]

I just saw this and realized I basically just expanded on this above.

Comment author: cousin_it 17 June 2010 10:19:10PM *  2 points [-]

Hmm. Your comment has brought to my attention an issue I hadn't thought of before.

Are you familiar with Aumann's knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): "I know that E". Note that the operator's output is of the same type as its input - a subset of the all-encompassing universe of discourse - and so it's natural to try iterating the operator, obtaining K(K(E)) and so on.
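For concreteness, Aumann's construction can be sketched with finite sets. The states and information partition below are invented toy values for illustration, not anything taken from Aumann's work:

```python
# Toy model of Aumann's knowledge operator on a finite universe
# of world states.  The agent's information is a partition of the
# states; K(E) is the set of states whose partition cell lies
# entirely inside the event E.

OMEGA = {1, 2, 3, 4}
PARTITION = [{1, 2}, {3, 4}]  # the agent can't tell 1 from 2, or 3 from 4

def cell(state):
    """The partition cell containing a given state."""
    return next(c for c in PARTITION if state in c)

def K(event):
    """'I know that E': states where the agent's cell is inside E."""
    return {s for s in OMEGA if cell(s) <= event}

E = {1, 2, 3}          # an event that holds in states 1, 2, 3
print(K(E))            # {1, 2} - in state 3 the agent can't rule out 4
print(K(K(E)))         # {1, 2} - with partitional information, K(E) = K(K(E))
```

Note that in this partitional toy model K(K(E)) = K(E) falls out automatically (positive introspection), so whether iterating K adds anything depends on whether the agent's information really is partitional.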

Which brings me to my question. Let E be the event "you are a thing that thinks", or "you exist". You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E - smaller subsets of the universe of discourse - so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!

Comment author: zero_call 21 June 2010 01:50:22AM *  0 points [-]

I wasn't familiar with this description of "world states", but it sounds interesting, yes. I take it that positing "I am a thing that thinks" is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, the logic that follows doesn't apply.

I would argue that K(K(E)) is actually a necessary condition for K(E). Because if I don't know that I know proposition A, then I don't know proposition A.

Edit/Revised: I think all you have to do is realize that "K(K(A)) false" implies "K(A) false". At first I had a little proof but now it seems just redundant so I deleted it.

So I guess I disagree: I think the iterations K(K...) are actually weaker statements, which are necessary for K(A) to be achieved. Consequently, I don't see how you can learn anything beyond K(A).

Comment author: cousin_it 17 June 2010 10:48:22PM *  0 points [-]

FWIW, my original comment talked about a realistic version of brain in a vat, not the philosophical idealized model. But now that I thought about it some more, the idealized model is seeming harder and harder to implement.

The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can't measure the round-trip signal delay... Think about this: what if I went to a doctor and asked them to do an MRI scan as I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line, I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations - like brains in vats - can be detected pretty easily.

Comment author: zero_call 18 June 2010 02:23:00AM *  4 points [-]

Um, if you're a brain in a vat, then any "brain" you perceive in the real world, like on a "real world" MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain is actually really your brain. It may be hard to implement the brain in the vat scenario, but when implemented, it's absolutely undetectable.

Comment author: JoshuaZ 17 June 2010 02:20:52PM 0 points [-]

> This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life support system.

When I've read about the brain-in-the-vat as an example before, discussions normally just talk about sensory aspects. People don't mention anything like altering the brain itself. So at minimum, cousin_it has picked up on a hole in how this is frequently described.

> Although I don't claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable.

Considering how much philosophy is complete nonsense, I'd think that LWers would be more careful about using the argument that something in philosophy is widely known to be not resolvable. I agree that if, when people are talking about the brain-in-the-vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.

Comment author: zero_call 17 June 2010 10:34:31PM *  2 points [-]

> People don't mention anything like altering the brain itself.

Altering the brain itself? The brain itself is the only thing there is to alter. The only things that exist in the brain in the vat example are the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of alterations to the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this is working.

Comment author: JoshuaZ 17 June 2010 02:21:39AM 0 points [-]

If you are a brain in a vat, then that should alter sensory perception. It shouldn't alter cognitive processes (say, the ability to add numbers, or to spell, or the like). You could posit a brain in the vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn't purely sensory.

Comment author: zero_call 17 June 2010 05:14:24AM *  0 points [-]

You don't seem to be familiar with this concept.

> You could posit a brain in the vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain,

This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life support system.

Although I don't claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.

Comment author: cousin_it 15 June 2010 10:12:44PM *  3 points [-]

Apologies for posting so much in the June Open Threads. For some reason I'm getting many random ideas lately that don't merit a top-level post, but still lead to interesting discussions. Here's some more.

  1. How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

  2. How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.
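The dreaming check in step 1 can be sketched as follows. This is a toy illustration: the primes here are small enough to factor by hand, whereas a real test would need a number genuinely beyond mental arithmetic, and the trial-division routine stands in for the external computer:

```python
import random

def is_prime(n):
    """Naive primality test by trial division."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def factor(n):
    """Trial division; stands in for the untrusted external computer."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# "Make up" a number whose factorization you don't know in advance.
p, q = random.sample([x for x in range(1000, 5000) if is_prime(x)], 2)
n = p * q

# The computer factors it; you then verify by pen and paper
# (modeled here by multiplying the factors back together).
fs = factor(n)
assert all(is_prime(f) for f in fs)
assert fs[0] * fs[1] == n
```

The verification step (multiplying the factors back) is the part you can do in your head or on paper, which is what makes the test asymmetric: finding the factors is hard, checking them is easy.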

Of course, both those arguments fall apart if the deception equipment is "unusually clever" at deceiving you. In that case both questions are probably hopeless.

Comment author: zero_call 17 June 2010 02:17:34AM 3 points [-]

> How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.

No, there's no way of knowing that you're not being tricked. If your perception changes and your perception of your brain changes, that just means that the vat is tricking the brain to perceive that.

The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.

Comment author: Vladimir_Nesov 31 May 2010 06:52:10PM *  6 points [-]

We do put innocent people in prison. If not putting innocent people in prison was the most important thing, we'd have to live without prisons. The tradeoff is there, but it's easier to be hypocritical about it when it's not made explicit.

Comment author: zero_call 01 June 2010 12:33:55AM -1 points [-]

That's a flagrant misinterpretation. The OP's intention was to say that innocent people don't get put in prison intentionally.

Comment author: zero_call 31 May 2010 07:11:01AM *  0 points [-]

I sometimes get various ideas for inventions, but I'm not sure what to do with them, as they are often unrelated to my work, and I don't really possess the craftsmanship capabilities to make prototypes and market them or investigate them on my own. Does anyone have experience and/or recommendations for going about selling or profiting from these ideas?

Comment author: SilasBarta 20 May 2010 06:20:35PM *  4 points [-]

So, now we have a second uninformative article in your series, in which you're just stating the minimum message length (MML) formalism (as you note at the end), which most people here are already familiar with, and which we already accept as a superior epistemology to traditional science.

And you took a lot more words to say it than you needed to.

Now, if you were just out to present a new, more accessible introduction to MML, that would be great: stuff that helps people understand the foundation of rationality is welcome here. But you're claiming you have a new idea, and yet you've already blown over 5,000 words saying nothing new. Commenters asked you last time to get to the point.

Please do so.

Then we can explain to you that people like Marcus Hutter (who supports the compression benchmark and advised Matt Mahoney) are well aware of this epistemology, and yet still haven't produced a being with the intelligence of a three-year-old, despite having had computable algorithms that implement this for more than three years now. A little more insight is still needed, beyond MML-type induction.

ETA: You know what? Daniel_Burfoot is still getting net positive karma for both articles, despite not advancing any seemingly promising idea. I might as well post my rival research program and corresponding desired model in a top-level article. I probably won't have all the citations to drop, but I'm sure most here will find it more insightful and promising than fluffy, delaying articles like this one. And you'll see something new as well.

Comment author: zero_call 30 May 2010 08:08:44AM *  0 points [-]

This comment just seems really harsh to me... I understand what you're saying but surely the author doesn't have bad intentions here...

Comment author: zero_call 30 May 2010 08:02:37AM 1 point [-]

This seems very well written and I'd like to compliment you in that regard. I find the shaman example amusing and also very fun to read.

For Sophie, if she has a large data set, then her theory should be able to predict a data set for the same experimental configuration, and then the two data sets would be compared. That is the obvious standard and I'm not sure why it's not permitted here. Perhaps you were trying to emphasize Sophie's desire to go on and test her theory on different experimental parameters, etc.

The original shaman example works very well for me; it is rather basic and doesn't make any particularly unsubstantiated claims. In the later examples, however, there needs to be more elaboration on the method by which you go from theory --> data. In the post you say,

> She immediately returns to her office and spends the next several weeks writing Matlab code, converting her theory into a compression algorithm. The resulting compressor is highly successful: it shrinks the corpus of experimental data from an initial size of 8.7e11 bits to an encoded size of 3.3e9 bits.

Without knowing the details of how you go from theory to compressed end product, it's hard to say that this method makes sense. Actually, I would probably be fairly satisfied if you stopped after the second section. But when you introduce the third section, with the competition between colleagues, it implies there is some kind of unknown, nontrivial relation between fitting parameters of the theory, the theory, the compression program, the compression program data size, and the final compressed data.

It all seems pretty vague to make a conclusion like "add the compression program size and the final data size to get the final number".
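As I read it, the scoring rule amounts to a two-part code length. A minimal sketch, where zlib is just a stand-in for a real theory-derived compressor and the program bytes are a placeholder:

```python
import zlib

def mdl_score(program_source: bytes, data: bytes) -> int:
    """Total description length in bits: the size of the 'theory'
    (the compression program itself) plus the size of the data
    encoded under that theory."""
    return 8 * len(program_source) + 8 * len(zlib.compress(data, 9))

# Highly regular data compresses well, so even after paying for the
# program's own size, the total beats storing the raw data.
regular = b"ab" * 10_000
program = b"<source of the compressor would go here>"
print(mdl_score(program, regular) < 8 * len(regular))  # True
```

Including the program size is what prevents cheating by hard-coding the data set into the "theory"; but as the comment above notes, how the theory's fitting parameters get charged to one side or the other is exactly the nontrivial part.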
