Comment author: DanielFilan 10 December 2014 11:09:14AM *  0 points [-]

You're right that (if X then Y) is just fancy notation for (not(X) or Y). However, I think you're mixing up levels of where things are being proved. For the purposes of the rest of this comment, I'll use provable(X) to mean that PA or whatever proves X, and not that we can prove X. Now, suppose provable(P). Then provable(not(not(P))) is derivable in PA. You then claim that not(provable(not(P))) follows in PA, that is to say, that provable(not(Q)) -> not(provable(Q)). However, this is precisely the statement that PA is consistent, which is not provable in PA. Therefore, even though we can go on to prove not(provable(not(P))), PA can't, so that last step doesn't work.
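DanielFilan's identification of provable(not(Q)) -> not(provable(Q)) with consistency can be spelled out in a short sketch (standard provability predicate Prov; corner quotes denote Gödel numbering):

```latex
% Schema claimed: for every sentence Q,
\mathrm{Prov}(\ulcorner \neg Q \urcorner) \rightarrow \neg\,\mathrm{Prov}(\ulcorner Q \urcorner)
% Instantiate Q := (0 = 1). Since PA proves \neg(0 = 1), PA proves
% \mathrm{Prov}(\ulcorner \neg(0 = 1) \urcorner), so the schema yields
% \neg\,\mathrm{Prov}(\ulcorner 0 = 1 \urcorner),
% which is exactly Con(PA), and Con(PA) is unprovable in PA by
% Goedel's second incompleteness theorem (assuming PA is consistent).
```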

Comment author: Ebthgidr 16 December 2014 12:29:47PM 0 points [-]

Wait. Not(provable(consistency)) is provable in PA? Then run that through the above.

Comment author: Ebthgidr 10 December 2014 11:17:23AM 0 points [-]

Ok, thanks for clearing that up.

Comment author: dxu 10 December 2014 04:13:14AM *  1 point [-]

As a fellow 16-year-old (there really seem to be a lot of us popping up around here recently), I concur. With that said, rationality skills are difficult for anyone to learn, because the human brain did not evolve to be rational, but rather to succeed socially. I would add that a good deal of rationality potential is ingrained in those who find themselves attracted to LW at a young age, particularly since surveys have shown that LW users tend to have a higher incidence of Asperger syndrome, whose symptoms include social awkwardness. This suggests to me that rational thinking comes more easily to people with certain personality types, which are arguably partly genetic. As a single data point, I suppose I'll add that I myself was diagnosed with Asperger's when I was younger, although with how trigger-happy American doctors are with their diagnoses these days, that's not really saying much.

Comment author: Ebthgidr 10 December 2014 10:41:44AM 1 point [-]

That's an interesting correlation, but I'm curious about the causal link: does a certain type of neural architecture cause both a predisposition to rationality and Asperger's, or does the social awkwardness added on top of that architecture create the predisposition? In other words, I'm curious how much being social affects rationality. I shall need to look into this more closely.

Comment author: DanielFilan 10 December 2014 03:35:53AM 2 points [-]

I'm not sure how you're getting from not provable(X) to provable(provable(X) -> X), and I think you might be mixing meta levels. If you could prove not provable(X), then I think you could prove (provable(X) ->X), which then gives you provable(X). Perhaps the solution is that you can never prove not provable(X)? I'm not sure about this though.

Comment author: Ebthgidr 10 December 2014 10:37:36AM 0 points [-]

I forget the formal name for the theorem, but isn't (if X then Y) iff (not-X or Y) provable in PA? Because I was pretty sure that's a fundamental theorem of first-order logic. Your solution is the one that looked best, but it still feels wrong. Here's why: say P is provable. Then not-P is provably false. Then not(provable(not-P)) is provable. Not being able to prove not(provable(X)) would mean nothing is provable.
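The theorem in question is usually called material implication. As a quick sanity check at the propositional level (plain Python truth tables; a sketch that says nothing about PA's internals, only about the connective itself), the two forms agree on every assignment:

```python
from itertools import product

# Truth table for the material conditional "if X then Y", written out
# explicitly rather than derived, so the comparison below is not circular.
COND = {
    (True, True): True,
    (True, False): False,
    (False, True): True,
    (False, False): True,
}

# Check that (if X then Y) and (not-X or Y) agree on all four assignments.
for x, y in product([False, True], repeat=2):
    assert COND[(x, y)] == ((not x) or y)

print("'(if X then Y)' and '(not-X or Y)' agree on all assignments")
```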

Comment author: Ebthgidr 10 December 2014 03:04:02AM 3 points [-]

A question about Lob's theorem: assume not(provable(X)). Then, by the rules of if-then statements, (if provable(X) then X) is provable. But then, by Lob's theorem, provable(X), which is a contradiction. What am I missing here?
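A hedged sketch of where the argument likely breaks (this matches DanielFilan's point about mixing meta levels): Lob's theorem requires its hypothesis to be provable inside PA, not merely true.

```latex
% Lob's theorem, schematically:
\text{if } PA \vdash \bigl(\mathrm{Prov}(\ulcorner X \urcorner)
    \rightarrow X\bigr), \text{ then } PA \vdash X.
% The argument starts from the TRUTH of not(provable(X)), which makes the
% material conditional Prov(X) -> X true. But Lob's theorem only applies
% when PA itself proves that conditional, and the truth of
% \neg\,\mathrm{Prov}(\ulcorner X \urcorner) does not give
% PA \vdash \neg\,\mathrm{Prov}(\ulcorner X \urcorner).
```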

Comment author: Ebthgidr 10 December 2014 02:34:30AM 0 points [-]

I finished up to the first major plot twist/divergence in the rationalfic (well, sort of; I'll just call it an attempted rationalfic) I've been working on for 3 months or so, and it's now in the top 15 most followed fics in the fandom (Danganronpa). Link: light in despair's darkness

Comment author: ilzolende 10 December 2014 02:07:12AM 4 points [-]

Welcome! I'm also 16. Welcome to the group of people who answer "no" to the "were you alive 20 years ago" question on a technicality. It's really great to know about risk assessment errors and whatnot when we're still teenagers, just because the bugs in our brains are even more dangerous when ignored than normal.

Comment author: Ebthgidr 10 December 2014 02:29:28AM 3 points [-]

Not only that--the greater degree of neuroplasticity that I think 16-year-olds still have (if I'm wrong about this, someone please correct me) makes it a good deal easier to learn skills and ingrain rationality techniques.

Comment author: Gondolinian 09 December 2014 10:14:06PM 1 point [-]

Welcome, Leor! I'm also a 16-year-old new member.

Comment author: Ebthgidr 10 December 2014 01:11:34AM 2 points [-]

Nice to meet you--it's rather reassuring to see another member my age.

Comment author: Ebthgidr 09 December 2014 08:16:33PM 11 points [-]

Hello. I'm Leor Fishman, and I also go by 'avret' on both reddit and ffn. I am currently 16. The path I took to get here isn't as...dramatic as some of the others I've seen, but I may as well record it: for as long as I can remember, I've been logically minded, preferring to base hypotheses on evidence rather than to rest them on blind faith. However, for the majority of my life that instinct was unguided, and more often than not it led to rationalizations rather than belief-updating. A few years back, I discovered MoR during a StumbleUpon binge. I took to it like a fish to water, finishing up to the update point in a matter of days before hungrily rereading to catch whatever plot points I could glean from hints and asides in earlier chapters. Even so, I still read it almost purely for story enjoyment, noting the rationality techniques as interesting asides when I noticed them.
About a year later, I followed the link on the MoR website to LW and began reading the sequences. They were...well, transformative doesn't quite fit. Perhaps 'massively map-modifying' is a better term. How to Actually Change Your Mind specifically gave me the techniques I needed to update quite a few beliefs, and still does. Both Reductionism and the QM sequence, while not quite as revolutionary for me as HtACYM, explained what I had previously understood of science in a way that just...well, 'fit' seems to be the only word that works, though it doesn't fully carry the connotation I'm trying to express. Now I'm endeavoring to learn what I can: rereading the sequences, trying to internalize the techniques I'll need and make them reflexive, and attempting to apply them as often as possible. I've come pretty far--looking back at things I said and thought before makes that clear. On the other hand, I've still got one heck of a ways to go. Tsuyoku Naritai

Comment author: Ebthgidr 01 January 2014 06:20:08PM 0 points [-]

Is this a possible explanation of, or corollary to, the sunk-cost fallacy from economics?
