
In response to The Quantum Arena
Comment author: Nick_Hay2 20 April 2008 01:57:45AM 0 points

Just in case it's not clear from the above: an arbitrary complex-valued function on the real line has uncountably many degrees of freedom, since you can specify its value at each point independently.

A continuous function, however, has only countably many degrees of freedom: it is uniquely determined by its values on the rational numbers (or any dense set).
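A quick sketch of why density suffices: if f is continuous and (q_n) is any sequence of rationals converging to x, then f(x) = lim f(q_n), so f's values on the countably many rationals pin down its value at every real x.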

In response to Thou Art Godshatter
Comment author: Nick_Hay2 14 November 2007 01:59:02AM 3 points

Eliezer: poetic and informative. I like it.

In response to Cached Thoughts
Comment author: Nick_Hay2 13 October 2007 03:36:38PM 3 points

In response to A Priori
Comment author: Nick_Hay2 10 October 2007 09:44:06PM 2 points

Eliezer: "You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."

Richard: "It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence"."

Eliezer: "If you view it as an argument, yes. The engines yield the same outputs."

Richard: "What does the latter have to do with rationality?"

Pure thought is something your brain does. If you count having determined a conclusion by pure thought as evidence that the conclusion is correct, then you must count the output of your brain (that is, your brain's internal representation of the conclusion) as valid evidence for it. Otherwise you have no reason to trust that your conclusion is correct, because the conclusion just is the output of your brain after reasoning.

If you count your own brain as evidence, and someone else's brain works the same way, computing the same answers as yours, then observing their brain is equivalent to observing your own brain, which is equivalent to observing your own thoughts. You could know abstractly that "Bob, upon contemplating X for 10 minutes, would consider it a priori true iff I would", perhaps from knowledge of how both of your brains compute whether something is a priori true. If you then found out that "Bob thinks X a priori true", you could conclude that X was a priori true without having to think it through yourself: you know your output would be the same ("X is a priori true") without having to compute it.
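A toy sketch of that last step (hypothetical names, and an unrealistically simple stand-in for a brain's computation): model both brains as the same deterministic function, so one run settles both outputs.

    # Toy model: treat both brains as one deterministic function from
    # propositions to verdicts. judge_a_priori is a made-up stand-in for
    # whatever computation the brains actually perform.
    def judge_a_priori(proposition: str) -> bool:
        return proposition == "1 + 1 = 2"

    bobs_verdict = judge_a_priori("1 + 1 = 2")
    # Since my brain computes the same function on the same input,
    # Bob's verdict already tells me mine, with no further thought:
    my_verdict = bobs_verdict
    assert my_verdict == judge_a_priori("1 + 1 = 2")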

Comment author: Nick_Hay2 13 August 2007 11:08:03PM 2 points

One reason is Cox's theorem, which shows that any quantitative measure of plausibility satisfying certain consistency desiderata must obey the axioms of probability theory. Given those axioms, this result, conservation of expected evidence, is a theorem.
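In symbols it follows from the law of total probability: P(H) = P(H|E)*P(E) + P(H|~E)*P(~E), so the prior already equals the expectation of the posterior; subtracting P(H) from both sides gives an expected update of zero.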

What is the "confidence level"? Why is 50% special here?

Comment author: Nick_Hay2 13 August 2007 09:55:16PM 16 points

Perhaps this formulation is nice:

0 = (P(H|E)-P(H))*P(E) + (P(H|~E)-P(H))*P(~E)

The expected change in probability is zero (if you expected a change, you would have already changed).

Since P(E) and P(~E) are both positive, to maintain balance, if P(H|E)-P(H) < 0 then P(H|~E)-P(H) > 0. If P(E) is large then P(~E) is small, so (P(H|~E)-P(H)) must be large in magnitude to counteract (P(H|E)-P(H)) and maintain balance.
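A minimal numeric check of that balance, with made-up numbers (E is expected, so seeing it confirms H only slightly; any coherent assignment works):

    # Hypothetical numbers: E is expected (P(E) = 0.9), so seeing E nudges
    # H up only slightly, while seeing ~E would move it down a lot.
    p_e = 0.9                 # P(E)
    p_not_e = 1 - p_e         # P(~E)
    p_h_given_e = 0.55        # P(H|E)
    p_h_given_not_e = 0.10    # P(H|~E)

    # Law of total probability gives the prior P(H).
    p_h = p_h_given_e * p_e + p_h_given_not_e * p_not_e   # 0.505

    # Expected change: the small positive shift weighted by P(E)
    # cancels the large negative shift weighted by P(~E).
    expected_change = (p_h_given_e - p_h) * p_e + (p_h_given_not_e - p_h) * p_not_e
    print(expected_change)    # 0.0, up to floating-point rounding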

Comment author: Nick_Hay2 25 March 2007 03:47:47AM 0 points

It seems the point of the exercise is to think of non-obvious cognitive strategies, ways of thinking, for improving things. The chronophone translation is both a tool for finding these strategies by induction and a rationality test of whether the strategies are sufficiently unbiased and meta.

But what would I say? The strategy of searching for and correcting biases in thought, failures of rationality, would improve things. But I think I generated that suggestion by thinking of "good ideas to transmit", which isn't meta enough. Perhaps if I discussed various biases I was concerned about, and gave a stream-of-thought analysis of how to counteract a particular bias (say, anthropomorphism), this would be invoking the strategy rather than referencing it, thus passing the filter. Hmmm.