
Comment author: redlizard 02 December 2015 07:08:40PM *  0 points [-]

Consensus tends to be dominated by those who will not shift their purported beliefs in the face of evidence and rational argument.


Comment author: Risto_Saarelma 03 May 2015 06:08:06AM 5 points [-]

Just how bad of an idea is it for someone who knows programming and wants to learn math to try to work through a mathematics textbook with proof exercises, say Rudin's Principles of Mathematical Analysis, by learning a formal proof system like Coq and using that to try to do the proof exercises?

I'm figuring, hey, no need to guess whether whatever I come up with is valid or not. Once I get it right, the proof assistant will confirm it's good. However, I have no idea how much work it'll be to get even much simpler proofs than those expected of the textbook reader right, how much work it'll be to formalize the textbook proofs even if you do know what you're doing, and whether there are areas of mathematics where you need an inordinate amount of extra work to get machine-checkable formal proofs going in the first place.

Comment author: redlizard 05 May 2015 11:34:04PM *  2 points [-]

I have tried exactly this with basic topology, and it took me bloody ages to get anywhere despite considerable experience with Coq. It was a fun and interesting exercise in both the foundations of the topic I was studying and in Coq itself, but it was by no means the most efficient way to learn the subject matter.
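To give a sense of what the exercise involves: even a one-line mathematical fact has to be spelled out as an explicit, checkable object. A minimal sketch of a machine-checked proof, written here in Lean purely as an illustration (the discussion above is about Coq, but the idea is the same):

```lean
-- Commutativity of addition on the naturals, discharged by a library lemma:
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- The same fact proved by induction, with the steps spelled out:
example (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

Textbook proofs in analysis or topology require far more of this kind of bookkeeping than the prose suggests, which is where the "bloody ages" tend to go.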

Comment author: Eliezer_Yudkowsky 02 June 2008 08:30:12PM 5 points [-]

Hm... yes, I can't say your formula is obviously right, but mine is obviously inconsistent. I guess I owe Nick ninety-seven cents.

Comment author: redlizard 01 April 2015 04:29:49PM 0 points [-]

My take on it:

You judge an odds ratio of 15:85 for the money having been yours versus it having been Nick's, which presumably decomposes into a maximum-entropy prior (1:1) multiplied by whatever evidence you have for believing it's not yours (15:85). Similarly, Nick has an 80:20 odds ratio that decomposes into the same 1:1 prior multiplied by 80:20 evidence.

In that case, the combined estimate would be the combination of both pieces of evidence applied to the shared prior, yielding 1:1 * 15:85 * 80:20 = 12:17 odds for the money being yours versus it being Nick's. Thus, you deserve 12/29 of it, and Nick deserves the remaining 17/29.
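The arithmetic above can be sketched in a few lines (a hypothetical helper, just restating the comment's calculation):

```python
from fractions import Fraction

def combine_odds(prior, *evidence):
    """Combine an odds-ratio prior with independent pieces of evidence
    by multiplying the ratios component-wise."""
    mine, theirs = prior
    for e_mine, e_theirs in evidence:
        mine *= e_mine
        theirs *= e_theirs
    return mine, theirs

# Shared 1:1 prior, 15:85 evidence from you, 80:20 evidence from Nick.
mine, theirs = combine_odds((1, 1), (15, 85), (80, 20))

# Reduce 1200:1700 to lowest terms: 12:17.
ratio = Fraction(mine, theirs)

# Your share of the money is 12/29; Nick's is the remaining 17/29.
my_share = Fraction(mine, mine + theirs)
```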

Comment author: dxu 01 March 2015 06:59:36AM *  3 points [-]

I'd expect Voldy to know about it and request clarification

Not necessarily. Parseltongue, if I understand it correctly, forces the speaker to tell the truth as he/she understands it (while bypassing Occlumency). If Harry knows about material implication (which he almost certainly does), he can utilize it in such a manner, but it's unlikely that Voldemort has ever encountered something similar. This isn't your standard clever wordplay that anyone smart can think of, after all; it's formal logic, which is decidedly Muggle.

Comment author: redlizard 03 March 2015 12:09:41AM 1 point [-]

So it's nonstandard clever wordplay. Voldemort will still anticipate a nontrivial probability of Harry managing undetected clever wordplay. Which means it only has a real chance of working when threatening something that Voldemort can't test immediately.

Comment author: bramflakes 28 February 2015 08:54:20PM *  10 points [-]

After 5 minutes of thinking about it, this is the only thing I could come up with:


Bellatrix and Sirius are stars, and also Death Eaters. Voldemort has already torn apart Bellatrix to use the Dark Mark, and Harry can tear apart Sirius with the Partial Transfiguration trick people are talking about. How do we know Sirius is present? Because there is a Death Eater named "Mr Grim" who is stated to have known the Potters.

Hang on, isn't Sirius in Azkaban?

"I'm not serious, I'm not serious, I'm not serious..."

The "he" refers to both Tom Riddles, as they are branches of the same person.

Troubles with this suggestion:

The "HE IS THE END OF THE WORLD" part remains unresolved.

Narratively unsatisfying.

Comment author: redlizard 01 March 2015 05:07:33AM *  2 points [-]

I don't think this is likely, if only because of the unsatisfyingness. However:

And the messages would come out in riddles, and only someone who heard the prophecy in the seer's original voice would hear all the meaning that was in the riddle. There was no possible way that Millicent could just give out a prophecy any time she wanted, about school bullies, and then remember it, and if she had it would've come out as 'the skeleton is the key' and not 'Susan Bones has to be there'. (Ch.77)

Some foreshadowing on the idea of ominous-sounding prophecy terms actually referring to people's names.

Beneath the moonlight glints a tiny fragment of silver, a fraction of a line... (black robes, falling) ...blood spills out in litres, and someone screams a word.

"blood spills out in litres" meshes well with "TEAR APART".

In response to 2014 Survey Results
Comment author: redlizard 04 January 2015 07:01:17AM 16 points [-]

MIRI Mission/MIRI Effectiveness .395 (1331)

This result sets off my halo effect alarm.

Comment author: redlizard 26 October 2014 07:11:24PM 37 points [-]

I took the survey. No scanner available, alas.

Comment author: YVLIAZ 15 October 2014 09:15:34AM 5 points [-]

I would definitely recommend learning the basics of algorithms, feasibility (P vs NP), or even computability (the halting problem, Gödel's incompleteness, etc). They will change your worldview significantly.

CLRS is a good entry point. After that, perhaps Sipser for some more depth.

Comment author: redlizard 18 October 2014 01:26:11PM 0 points [-]

Seconded. P versus NP is the most important piece of the basic math of computer science, and a basic notion of algorithms is a bonus. The related broader theory, which nonetheless still counts as basic math, is algorithmic complexity and the notion of computability.
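One concrete way to see what NP is about: for problems like subset sum, a claimed solution (a "certificate") can be checked quickly, even though no known algorithm finds one quickly. A minimal sketch (the function and names are mine, purely for illustration):

```python
def verify_subset_sum(numbers, target, certificate):
    """Check a claimed solution to subset sum: is `certificate` a
    sub-multiset of `numbers` whose elements sum to `target`?
    Verifying a certificate like this is fast (polynomial time);
    finding one is the part with no known efficient algorithm."""
    pool = list(numbers)
    for x in certificate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(certificate) == target

ok = verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [4, 5])   # valid certificate
bad = verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [3, 5])  # sums to 8, not 9
```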

Comment author: Jurily 26 July 2014 11:39:08AM 4 points [-]

What's the deal with spells and age? If Harry is really so far ahead of his class and can already cast spells nobody else can, why is it just now that he can cast "second-year" spells effortlessly?

Canon or not, this reminds me too much of the public school system of a certain country where kids are verboten to use words "they shouldn't know yet".

Comment author: redlizard 26 July 2014 09:17:16PM 11 points [-]

I've always modeled it as a physiological "mana capacity" aspect akin to muscle mass -- something that grows both naturally as a developing body matures, and as a result of exercise.

Comment author: Lumifer 20 June 2014 03:30:03AM 2 points [-]

the mathematical structure that is a utility function

Can you describe this "mathematical structure" in terms of mathematics? In particular, the argument(s) to this function, what do they look like mathematically?

Comment author: redlizard 20 June 2014 04:37:18AM *  3 points [-]

Certainly, though I should note that there is no original work in the following; I'm just rephrasing standard stuff. I particularly like Eliezer's explanation of it.

Assume that there is a set of things-that-could-happen, "outcomes", say "you win $10" and "you win $100". Assume that you have a preference over those outcomes; say, you prefer winning $100 over winning $10. What's more, assume that you have a preference over probability distributions over outcomes: say, you prefer a 90% chance of winning $100 and a 10% chance of winning $10 over an 80% chance of winning $100 and a 20% chance of winning $10, which in turn you prefer over 70%/30% chances, etc.

A utility function is a function f from outcomes to the real numbers; for an outcome O, f(O) is called the utility of O. A utility function induces a preference ordering in which probability-distribution-over-outcomes A is preferred over B if and only if the sum of the utilities of the outcomes in A, scaled by their respective probabilities, is larger than the same for B.

Now assume that you have a preference ordering over probability distributions over outcomes that is "consistent", that is, one satisfying a collection of axioms that we generally expect reasonable orderings to have, such as transitivity (details here). Then the von Neumann-Morgenstern theorem says that there exists a utility function f such that the induced preference ordering of f equals your preference ordering.

Thus, if some agent has a set of preferences that is consistent -- which, basically, means the preferences scale with probability in the way one would expect -- we know that those preferences must be induced by some utility function. And that is a strong claim, because a priori, preference orderings over probability distributions over outcomes have a great many more degrees of freedom than utility functions do. The fact that a given preference ordering is induced by a utility function disallows a great many possible forms that ordering might have, allowing you to infer particular preferences from other preferences in a way that would not be possible with preference orderings in general. (Compare this LW article for another example of the degrees-of-freedom thing.) This is the mathematical structure I referred to above.
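The induced ordering is easy to state in code. A minimal sketch, using the $10/$100 outcomes and the lotteries from above (the function and utility numbers are illustrative, not canonical):

```python
def expected_utility(utility, lottery):
    """Expected utility of a probability distribution over outcomes.
    `lottery` maps outcome -> probability; `utility` maps outcome -> real."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Any utility assignment with u($100) > u($10) prefers the 90/10 lottery
# over the 80/20 one, matching the preference ordering in the comment.
u = {"win $10": 1.0, "win $100": 10.0}
a = {"win $100": 0.9, "win $10": 0.1}   # 90% / 10%
b = {"win $100": 0.8, "win $10": 0.2}   # 80% / 20%

prefers_a = expected_utility(u, a) > expected_utility(u, b)
```

The degrees-of-freedom point is visible here too: the utility function is just two numbers, yet it fixes the agent's preference between every possible pair of lotteries over these outcomes.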
