Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Sniffnoy 11 February 2016 02:29:05AM *  0 points [-]

Which central campus library? The building indicated on the map is not a library. (FWIW, I think the last few were at the Ann Arbor District Library.)

Comment author: Sniffnoy 22 August 2015 04:16:34AM 0 points [-]

Can't make this one, sorry!

Comment author: Vladimir_Nesov 22 July 2015 11:22:28AM *  13 points [-]

One issue is that if you base your self-esteem on your rationality, that might make it more difficult to notice flaws in your rationality, for the same reason that basing self-esteem on being a Nazi might have made it more difficult for historical Nazis to notice what was wrong with Nazism. Hence the idea of keeping your identity small, not including important things in it, to avoid that particular cause of misperceiving them.

See Cached Selves for more details. There does seem to be an important difference between the usual ideologies and technical subjects, in that ideologies allow much more wiggle room, which might be at the heart of the problem (see Ethnic Tension And Meaningless Arguments). Sidestepping that sort of vagueness by making sure a few key ideas remain clear is also the approach explored in Yudkowsky's How To Actually Change Your Mind; see, for example, The Scales of Justice, the Notebook of Rationality, and Human Evil and Muddled Thinking.

Comment author: Sniffnoy 22 July 2015 09:59:33PM 0 points [-]
Comment author: Sniffnoy 14 July 2015 12:08:14AM *  0 points [-]

I don't think anyone has proposed any self-referential criterion as being the point of Friendly AI? It's just that self-referential criteria like reflective equilibrium are a necessary condition which lots of goal setups don't even meet. (And note that just because you're trying to find a fixpoint doesn't necessarily mean you have to find it by iteration, if that process has problems!)
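The parenthetical point here, that wanting a fixpoint does not commit you to finding it by iterating, can be illustrated numerically (a toy sketch with made-up function names, unrelated to any actual FAI proposal): for a map whose fixpoint repels nearby iterates, naive iteration never settles down, but solving f(x) = x directly as a root-finding problem works fine.

```python
def logistic(x):
    # A map with a fixpoint at x = 5/7 that repels nearby iterates
    # (|f'(5/7)| = 1.5 > 1), so naive iteration never settles on it.
    return 3.5 * x * (1.0 - x)

def fix_by_iteration(f, x0, tol=1e-12, max_iter=10_000):
    # Repeatedly apply f until consecutive values agree to within tol.
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return None  # iteration failed to converge

def fix_by_bisection(f, lo, hi, tol=1e-12):
    # Find the fixpoint directly, as a root of g(x) = f(x) - x,
    # assuming g changes sign exactly once on [lo, hi].
    g = lambda x: f(x) - x
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

print(fix_by_iteration(logistic, 0.5))       # None: the orbit oscillates forever
print(fix_by_bisection(logistic, 0.1, 0.9))  # converges to 5/7
```

Bisection is standing in for any direct method here; the point is only that "fixpoint" names a condition on the answer, not a procedure for finding it.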

Comment author: Sniffnoy 25 June 2015 07:57:22AM 1 point [-]

Interesting article. Minor note on clarity: you might want to spell out the acronym "EMH" where it appears, since on this site it so often stands for "efficient market hypothesis".

Comment author: Sniffnoy 27 May 2015 07:16:23PM 2 points [-]

I'm confused about the "strategy" section; it seems largely redundant with the earlier parts.

Comment author: JonahSinick 27 May 2015 12:06:16AM *  4 points [-]

The distinction I'm drawing is that intelligence is about the capacity to recognize patterns, whereas aesthetic discernment is about being selectively drawn toward the patterns that are important. I believe that intelligence explains a large fraction of the variance in mathematicians' productivity (see my post Innate Mathematical Ability), but I think the percentage of variance it explains is less than 50%.

Comment author: Sniffnoy 27 May 2015 12:24:28AM 0 points [-]

Ah, I see. I forgot about that, thanks!

Comment author: Sniffnoy 26 May 2015 11:43:27PM 1 point [-]

Is this what you were referring to in "Is Scott Alexander bad at math?" when you said that being good at math is largely about "aesthetic discernment" rather than "intelligence"? Because if so, that seems like an unusual notion of "intelligence", using it to mean explicit reasoning only and excluding pattern recognition. It would seem very odd to say "MIT Mystery Hunt doesn't require much intelligence", even if domain knowledge is frequently more important to spotting its patterns.

Or did you mean something else? I realize this is not the same post, but I'm just not clear on how you're separating "aesthetic discernment" from "intelligence" here; the sort of aesthetic discernment needed for mathematics seems like a kind of intelligence.

Comment author: Sniffnoy 19 April 2015 09:19:15PM *  5 points [-]

> The clearest, least mystical, presentation of Goedel's First Incompleteness Theorem is: nonstandard models of first-order arithmetic exist, in which Goedel Sentences are false. The corresponding statement of Goedel's Second Incompleteness Theorem follows: nonstandard models of first-order arithmetic, which are inconsistent, exist. To capture only the consistent standard models of first-order arithmetic, you need to specify the additional axiom "First-order arithmetic is consistent", and so on up the ordinal hierarchy.

This doesn't make sense. A theory is inconsistent if and only if it has no models. I don't know what you mean by an "inconsistent model" here.
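For reference, the fact being appealed to here is a standard corollary of Gödel's completeness theorem (textbook model theory, stated here for convenience):

```latex
\[
  T \text{ is consistent} \iff T \text{ has a model}.
\]
% Hence inconsistency is a property of theories, not of models;
% an inconsistent theory has no models, standard or otherwise.
% The Second Incompleteness Theorem, in its usual form, says that for any
% consistent, recursively axiomatizable theory T extending Peano arithmetic,
\[
  T \not\vdash \mathrm{Con}(T).
\]
```

On this reading, "inconsistent models of arithmetic exist" has no standard interpretation, which is exactly the objection above.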

> Now consider ordinal logic as started in Turing's PhD thesis, which starts with ordinary first-order logic and extends it with axioms saying "First-order logic is consistent", "First-order logic extended with the previous axiom is consistent", all the way up to the limiting countable infinity Omega (and then, I believe but haven't checked, further into the transfinite ordinals).

Actually, it stops at omega+1! Except there's not a unique way of doing omega+1; it depends on how exactly you encoded the omega. (Note: this is not something I have actually taken the time to understand beyond what's written there at all.)
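For context, Turing's progression of theories is usually presented as follows (a from-memory summary; the precise statements should be checked against Turing's 1939 thesis and Feferman's 1962 follow-up):

```latex
\[
  T_0 = \mathrm{PA}, \qquad
  T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad
  T_\lambda = \bigcup_{\alpha < \lambda} T_\alpha \ \text{ for limit } \lambda.
\]
```

Turing's completeness theorem is the sense in which this "stops at omega+1": every true $\Pi^0_1$ sentence is provable in $T_a$ for some ordinal notation $a$ denoting $\omega+1$, but which sentences a given $T_a$ proves depends on the choice of notation, which is the non-uniqueness mentioned above.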

Comment author: Sniffnoy 19 February 2015 09:15:36AM 2 points [-]

Not a substantial comment, but -- would you mind fixing the arXiv link to point to the abstract rather than directly to the PDF? From the abstract one can click through to the PDF, not so the reverse, and from the abstract you can see other versions of the paper, etc. (And you've made getting back to the abstract from the PDF a bit more annoying than usual as you've linked to it at some weird address rather than the usual one.) Thank you!
