Comment author: arundelo 18 January 2010 12:35:41PM *  7 points [-]

This is interesting, but I find it specific enough that I think I'd have trouble applying it to another domain.

Less Wrong really needs pre-tags.

Indent each line by four spaces. * \ [ _

Less Wrong comments use (a not completely bug-free implementation of) Markdown.

Comment author: HalFinney 18 January 2010 07:25:27PM 7 points [-]

A perhaps similar example, sometimes I have solved geometry problems (on tests) by using analytical geometry. Transform the problem into algebra by letting point 1 be (x1,y1), point 2 be (x2,y2), etc, get equations for the lines between the points, calculate their points of intersection, and so on. Sometimes this gives the answer with just mechanical application of algebra, no real insight or pattern recognition needed.
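The mechanical character of this method is easy to see in code. Here is a minimal Python sketch of the "get equations for the lines, calculate their intersection" step, using Cramer's rule on the resulting 2x2 system (the function names are mine, for illustration):

```python
# Turn geometry into algebra: find where the line through p1, p2
# meets the line through p3, p4 by solving two linear equations.

def line_through(p, q):
    """Coefficients (a, b, c) of the line ax + by = c through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersection(p1, p2, p3, p4):
    """Solve the 2x2 system by Cramer's rule; None if the lines are parallel."""
    a1, b1, c1 = line_through(p1, p2)
    a2, b2, c2 = line_through(p3, p4)
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel or coincident lines
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# The diagonals of the unit-2 square cross at the center:
print(intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```

No insight or pattern recognition is involved; the answer falls out of the algebra, which is exactly the point.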

Comment author: HalFinney 15 January 2010 04:14:19AM 5 points [-]

I wouldn't be so quick to discard the idea of the AI persuading us that things are pretty nice the way they are. There are probably strong limits to the persuadability of human beings, so it wouldn't be a disaster. And there is a long tradition of advice regarding the (claimed) wisdom of learning to enjoy life as you find it.

Comment author: Wei_Dai 11 January 2010 10:23:45PM 8 points [-]

I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :-)), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is that it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and that she should help him save the world.

Perhaps the movie also illustrates a danger of majoritarianism: if someone really found a secret that could save the world, it would be tragic if he allowed himself to be convinced otherwise due to majoritarian considerations. Don't most (nearly all?) true beliefs start their existence as minority views?

Comment author: HalFinney 14 January 2010 11:01:48PM *  0 points [-]

I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically we could encourage people to think for themselves, so we can free-ride on the costs they experience.

In response to Consciousness
Comment author: Mitchell_Porter 10 January 2010 01:32:07AM 2 points [-]

This article contains three simple questions which I want to see answered. To organize the discussion, I'm creating a thread for each question, so people with an answer can state it or link to it. If you link, please provide a brief summary of your answer here as well.

First question: Where is color?

I see a red apple. The redness, I grant you, is not a property of the thing that grew on the tree, the object outside my skull. It's the sensation or perception of the apple which is red. However, I do insist that something is red. But if reality is nothing but particles in space, and empty space is not red, and the particles are not red, then what is? What is the red thing; where is the redness?

Comment author: HalFinney 13 January 2010 06:04:37PM 8 points [-]

Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?

Wouldn't we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn't stink?

Would we wonder why the part of the brain for hearing high pitches didn't sound like a high pitch? Why the part which feels a punch in the nose doesn't actually reach out and punch us in the nose when we lean close?

I can't help feeling that this line of questioning is bizarre and unproductive.

Comment author: RolfAndreassen 10 January 2010 08:29:37PM 8 points [-]

Having got 15 net upvotes but no replies, I feel an obligation to be my own devil's advocate: All three of my examples deal with the heart, which is basically a pump with some electric control mechanisms. Cryonics deals with the brain, which works in very different ways. It follows that, unless we can come up with some life-prolonging techniques that work on the brain, my suggested reference class is probably wrong.

That said, we do have surgery for tumours and some treatments to prevent, reduce in severity, and recover from stroke. Again, though, these deal with the mechanical rather than informational aspects of the brain. I do not care to hold up lobotomy as life-prolonging. Does anyone know of procedures for repairing or improving the neural-network part of the brain?

Comment author: HalFinney 12 January 2010 02:14:20AM 7 points [-]

An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.

In response to Consciousness
Comment author: Mitchell_Porter 10 January 2010 01:35:16AM 4 points [-]

Another thread for answers to specific questions.

Second question: Where is computation?

People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist of being in a certain computational state. But a physical state does not correspond inherently to any one computational state... To be in a particular cognitive state is to be in a particular computational state. But if the "computational state" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?

Comment author: HalFinney 11 January 2010 09:52:49PM 6 points [-]

I don't think your question is well represented by the phrase "where is computation".

Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer's hardware.

For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Which is right? Or perhaps we can see it as a program that counts through all the even numbers by interpreting the register bits as being concatenated with a 0. There is a famous argument that we can in fact interpret this counting program as enumerating the states of any arbitrarily complex computation.
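A toy Python sketch makes the ambiguity concrete: the same sequence of physical register states supports all three readings at once (the 4-bit width is an arbitrary choice for illustration, and this is not Chalmers' actual construction):

```python
# One physical process -- a 4-bit register being repeatedly incremented --
# read as three different computations.

WIDTH = 4
states = [n % (1 << WIDTH) for n in range(8)]  # successive register values

# Reading 1: the full register counts upward.
counts = states

# Reading 2: attend only to the lowest bit -- a program that
# repeatedly complements a single bit.
low_bits = [s & 1 for s in states]

# Reading 3: interpret the bits as concatenated with a trailing 0 --
# a program that enumerates the even numbers.
evens = [s << 1 for s in states]

print(counts)    # [0, 1, 2, 3, 4, 5, 6, 7]
print(low_bits)  # [0, 1, 0, 1, 0, 1, 0, 1]
print(evens)     # [0, 2, 4, 6, 8, 10, 12, 14]
```

Nothing in the physics privileges one reading over the others; the interpretation is supplied by the observer.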

Chalmers in the previous link aims to resolve the ambiguity by certain rules; basically some interpretations count and some don't. And maybe there is an unresolved ambiguity in the end. But in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain; the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.

In summary: although one can in theory map any computation onto any physical system, for a system like we believe the brain to be, with its simultaneous complexity and organizational unity, it seems likely that one could come up with a computational program that would capture the brain's behavior, claim to have qualia, and pose the same hard questions about where the color blue lies among the electronic circuits.

In response to Consciousness
Comment author: HalFinney 10 January 2010 06:20:31PM 3 points [-]

Thomas Nagel's classic essay What is it like to be a bat? raises the question of a bat's qualia:

Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.

I also wonder whether Deep Blue could be said to possess chess qualia of a type which are similarly inaccessible to us. When we play chess we are somewhat in the position of the man in Searle's Chinese Room who simulates a Chinese woman. We simulate Deep Blue when we play chess, and our lack of access to any chess qualia no more disproves their existence than the failure of Searle's man to understand Chinese.

Do you think it will ever be possible to say whether chess qualia exist, and what they are like? Will we ever understand what it is like to be a bat?

Comment author: HalFinney 23 December 2009 09:14:14PM 0 points [-]

A bit OT, but it makes me wonder whether the scientific discoveries of the 21st century are likely to appear similarly insane to a scientist of today? Or would some be so bold as to claim that we have crossed a threshold of knowledge and/or immunity to science shock, and there are no surprises lurking out there bad enough to make us suspect insanity?

Comment author: HalFinney 11 December 2009 05:57:57PM 4 points [-]

One question on your objections: how would you characterize the state of two human rationalist wannabes who have failed to reach agreement? Would you say that their disagreement is common knowledge, or instead are they uncertain if they have a disagreement?

ISTM that people usually find themselves rather certain that they are in disagreement and that this is common knowledge. Aumann's theorem seems to forbid this even if we assume that the calculations are intractable.

The rational way to characterize the situation, if in fact intractability is a practical objection, would be that each party says he is unsure of what his opinion should be, because the information is too complex for him to make a decision. If circumstances force him to adopt a belief to act on, maybe it is rational for the two to choose different actions, but they should admit that they do not really have good grounds to assume that their choice is better than the other person's. Hence they really are not certain that they are in disagreement, in accordance with the theorem. Again this is in striking contrast to actual human behavior even among wannabes.

Comment author: Psy-Kosh 11 December 2009 05:10:31PM 0 points [-]

Then agent 1 knows that agent 2 knows one of the members of J that have non-empty intersection with I(w), and similarly for agent 2.

Presumably they have to tell each other which of their own partitions w is in, right? ie, presumably SOME sort of information sharing happens about each other's conclusions.

And, once that happens, seems like intersection I(w) and J(w) would be their resultant common knowledge.

I'm confused still though what the "meet" operation is.

Unless... the idea is something like this: they exchange probabilities. Then agent 1 reasons: "J(w) is a member of J that both intersects I(w) and would assign that particular probability, so I can determine the subset of I(w) that intersects with those, and determine a probability from there." And similarly for agent 2. Then they exchange probabilities again, and go through an equivalent reasoning process to tighten the spaces a bit more... and the theorem ensures that they'd end up converging on the same probabilities? (Each time they state unequal probabilities, they each learn more information and each one then comes up with a set that's a strict subset of the one they were previously considering, but each of their sets always contains the intersection of I(w) and J(w).)
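This exchange-and-refine process can be simulated directly. Below is a Python sketch of the dialogue (this is the Geanakoplos–Polemarchakis style protocol; the particular worlds, partitions, and event are made up for illustration): each announcement publicly rules out the worlds in which the announcer would have said something different, and the posteriors converge.

```python
from fractions import Fraction

def posterior(event, info):
    """P(event | info) under a uniform prior over worlds."""
    info = set(info)
    return Fraction(len(event & info), len(info))

def dialogue(worlds, part1, part2, event, w):
    """Agents alternately announce posteriors until they agree."""
    cell = lambda part, v: next(c for c in part if v in c)
    public = set(worlds)  # worlds consistent with all announcements so far
    history = []
    while True:
        q1 = posterior(event, cell(part1, w) & public)
        # Everyone learns which worlds would have produced announcement q1:
        public = {v for v in public
                  if posterior(event, cell(part1, v) & public) == q1}
        q2 = posterior(event, cell(part2, w) & public)
        public = {v for v in public
                  if posterior(event, cell(part2, v) & public) == q2}
        history.append((q1, q2))
        if q1 == q2:
            return history

# Worlds 1..4 uniform; agent 1 knows {1,2} vs {3,4}, agent 2 knows
# {1,2,3} vs {4}; the event is {1,4}; the true world is 1.
hist = dialogue(range(1, 5), [{1, 2}, {3, 4}], [{1, 2, 3}, {4}], {1, 4}, 1)
print(hist)  # posteriors start at (1/2, 1/3) and converge to (1/2, 1/2)
```

In the example, agent 2's first announcement of 1/3 reveals that the true world is not 4, after which both agents settle on 1/2, just as the strict-subset argument above suggests.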

Comment author: HalFinney 11 December 2009 05:45:08PM 1 point [-]

Try a concrete example: Two dice are thrown, and each agent learns one die's value. In addition, each learns whether the other die is in the range 1-3 vs 4-6. Now what can we say about the sum of the dice?

Suppose player 1 sees a 2 and learns that player 2's die is in 1-3. Then he knows that player 2 knows that player 1's die is in 1-3. It is common knowledge that the sum is in 2-6.

You could graph it by drawing a 6x6 grid and circling the information partition of player 1 in one color, and player 2 in another color. You will find that the meet is a partition of 4 elements, each a 3x3 grid in one of the corners.
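The grid description can be checked mechanically. A Python sketch (the merge-overlapping-blocks computation of the meet is my own construction, for illustration): build each player's information partition over the 36 (die1, die2) worlds, then form the finest common coarsening by repeatedly merging any two blocks that overlap.

```python
from itertools import product

# Worlds are (die1, die2). Player 1 learns die1 and which half die2 is in;
# player 2 learns die2 and which half die1 is in.
half = lambda d: 0 if d <= 3 else 1
worlds = list(product(range(1, 7), repeat=2))

def blocks(key):
    """Partition the worlds by the value of an information function."""
    part = {}
    for w in worlds:
        part.setdefault(key(w), set()).add(w)
    return list(part.values())

p1 = blocks(lambda w: (w[0], half(w[1])))  # player 1's partition
p2 = blocks(lambda w: (half(w[0]), w[1]))  # player 2's partition

# The meet: merge any two blocks that share a world, until stable.
meet = [set(b) for b in p1 + p2]
merged = True
while merged:
    merged = False
    for i in range(len(meet)):
        for j in range(i + 1, len(meet)):
            if meet[i] & meet[j]:
                meet[i] |= meet.pop(j)
                merged = True
                break
        if merged:
            break

print(len(meet), sorted(len(b) for b in meet))  # 4 [9, 9, 9, 9]
```

The result is exactly the four 3x3 corner blocks of the 6x6 grid: the quadrants determined by which half each die is in.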

In general, anything which is common knowledge will limit the meet - that is, the meet partition the world is in will not extend to include world-states which contradict what is common knowledge. If 2 people disagree about global warming, it is probably common knowledge what the current CO2 level is and what the historical record of that level is. They agree on this data and each knows that the other agrees, etc.

The thrust of the theorem though is not what is common knowledge before, but what is common knowledge after. The claim is that it cannot be common knowledge that the two parties disagree.
