In response to The best 15 words
Comment author: Nisan 03 October 2013 09:09:46PM *  28 points

Gödel, Escher, Bach by Douglas Hofstadter (or works of Quine):

"is false when preceded by its quotation" is false when preceded by its quotation.

In response to comment by Nisan on The best 15 words
Comment author: mfb 06 October 2013 02:26:05PM 0 points

Hmm, the whole statement is ' "is false when preceded by its quotation" is false when preceded by its quotation.', and it is not preceded by its quotation.
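The sentence is a natural-language quine: it reconstructs its own text by applying a phrase to that phrase's own quotation. A minimal Python sketch of the same trick (a template formatted with its own representation) looks like this:

```python
# Quine template: %r is replaced by repr(s), %% by a literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

With the comment line stripped, running this prints its own two-line source. The sentence works the same way, except that instead of printing the reconstructed text, it asserts its falsehood, producing the paradox.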

Comment author: [deleted] 20 September 2013 09:19:20AM 1 point

One of the barriers I run into when I delve into physics is that I take a very rationalist approach to math: I hate terminology and want as little of it as possible in my reasoning. Physics puts up rather high barriers in that respect, because academic physicists don't really like mathematical rigour and don't precisely specify, say, the abstract-algebraic axioms of the structures they are using. But once I reach the point of being able to specify what structure lies behind a physical theory, I can usually intuit it readily.

Physics is domain knowledge compared to mathematical reasoning ability.

Comment author: mfb 21 September 2013 12:21:09PM *  1 point

If mathematical details matter, they should be specified (or be clear anyway - e.g. you don't define "real numbers" in a physics paper). Physics does require some domain knowledge, but knowledge alone is completely useless - you need the same general reasoning ability as in mathematics to do anything (in both experimental and theoretical physics).

In fact, many physics problems get solved by reducing them to mathematical problems (that is the physics part) and then solving those mathematical problems (still considered "solving the physical problem", but purely mathematics).

Comment author: Stuart_Armstrong 04 September 2013 08:32:15PM 4 points

A counterpoint to the "developing better reasoning skills" point above: it's known that transfer of learning from one domain to another is often very low.

In my anecdotal experience, math is the most transferable of all skills I've learnt.

Comment author: mfb 06 September 2013 06:33:00PM 2 points

Add physics to that.

Comment author: Armok_GoB 24 August 2013 07:44:18PM 2 points

This poll is meaningless without also collecting the probability that a randomly chosen biological human, or whatever else you are comparing to, has it.

Comment author: mfb 24 August 2013 08:17:38PM 1 point

I guess we can answer question 2 under the condition that the majority of humans fall under the definition of conscious, and that we don't require 24/7 consciousness from the brain emulation.

Comment author: mfb 24 August 2013 07:53:08PM 12 points

I cannot imagine how moving sodium and potassium ions could lead to consciousness if moving electrons cannot.

In addition, I think consciousness is a gradual process. There is no single point in the development of a human where it suddenly gets conscious, and in the same way, there was no conscious child of two non-conscious parents.

Comment author: mfb 28 May 2013 11:05:07PM 3 points

"There are a million reasons to learn a foreign language, but it'd be a very costly way to improve rationality."

It is a "free" side effect if you belong to the 95% of the world population whose native language is not English.

Comment author: mfb 23 March 2013 01:48:25PM 0 points

So much room for improvements in healthcare even without new stuff :).

Comment author: Omegaile 24 January 2013 07:42:04AM 6 points

Let's abstract this a bit:

There are two unfair coins. One has P(heads)=1/3 and the other P(heads)=2/3. I take one of them, flip it twice, and it comes up heads twice. Now I believe that the coin I chose is the one with P(heads)=2/3; in fact, the probability of that is 4/5. I also believe that flipping again will produce heads, mostly because I think I chose the 2/3-heads coin (p=8/15). I also admit the possibility of getting heads while being wrong about the chosen coin, but this is much less likely (p=1/15). So I bet on heads. I flip again and it comes up heads. I was right. But it turns out that the coin was the other one, the one with P(heads)=1/3 (which I discovered after a few hundred flips). Would you say I was right for the wrong reasons? I was certainly surprised to find out I had the wrong coin. Does the Gettier problem apply here?
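The numbers in the coin example can be checked directly with exact arithmetic (a quick sketch using Python's fractions module; the coin names are just labels):

```python
from fractions import Fraction as F

# Two coins, equal prior; we observe two heads.
p_heads = {"coin_2_3": F(2, 3), "coin_1_3": F(1, 3)}
prior = {c: F(1, 2) for c in p_heads}

# Bayes' rule: posterior is proportional to prior times likelihood of HH.
joint = {c: prior[c] * p_heads[c] ** 2 for c in p_heads}
total = sum(joint.values())
posterior = {c: joint[c] / total for c in joint}
print(posterior["coin_2_3"])  # 4/5

# Probability of a third head, split by which coin was actually chosen.
print(posterior["coin_2_3"] * p_heads["coin_2_3"])  # 8/15 (right coin)
print(posterior["coin_1_3"] * p_heads["coin_1_3"])  # 1/15 (wrong coin)
```

Together the two cases give P(third head) = 9/15 = 3/5, so betting on heads is correct even though one of the two ways of being right rests on a false belief about the coin.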

Let's go back to the original problem to see that this abstraction is similar. Smith believes "the person who will get the job has ten coins in his pocket", mostly because he thinks Jones will get it and has ten coins. But if he is reasonable, he will also admit the possibility that he himself gets the job and also has ten coins, though with lower probability.

My point here is: at what probability does the Gettier problem arise? Would it arise if P(heads) in the coin problem were different?

Comment author: mfb 25 January 2013 06:09:08PM 0 points

I think it arises at the point where you did not even consider the alternative. This is a very subjective thing, of course.

If the probability of the actual outcome was really negligible (given a perfect evaluation by the prediction-maker), it should not influence the evaluation of predictions in a significant way. If the probability was significant, the prediction-maker probably considered it; if not, count the prediction as false.

Comment author: RobbBB 16 January 2013 10:58:35PM *  11 points

There are a variety of reasons interpreters might think that a prediction didn't come true, while Kurzweil boldly claims that it did:

  1. Kurzweil didn't express himself clearly, so interpreters misunderstood what the prediction really was. Miscommunication adds random noise, and most randomly generated predictions will turn out false, so this will skew the results against Kurzweil.

  2. Kurzweil's prediction was vague. So charitable interpreters will think they're basically true, while less charitable interpreters will think they're basically false. And we can expect random LessWrongers to be less charitable toward Kurzweil than Kurzweil is toward Kurzweil.

  3. Interpreters tend to be factually mistaken about current events, in a specific direction: They are ignorant of the nature, existence, or prevalence of the latest innovations in technology and culture.

  4. Kurzweil tends to be factually mistaken about current events, in a specific direction: He thinks a variety of technologies are more advanced, and more widespread, than they really are.

  5. There are systemic differences in the evaluation scales used by Kurzweil and by others. For instance, Kurzweil and Armstrong individuate 'predictions' differently, lumping and splitting at different points in the source text. There may also be systemic disagreements about how (temporally and technologically) precise an interpretation must be to count as 'correct,' and about whether grammatical forms like 'X is Y' most closely mean 'X is always Y', 'X is usually Y', 'X is commonly Y', 'X is sometimes (occasionally) Y', or 'X is Y at least once'. This ties into vagueness, but may bias the results due to linguistic variation rather than just as a result of generic degree of interpretive charity.

I'm particularly curious about testing 3, since the strongest criticism Kurzweil could make of our methodology for assessing his accuracy is that our reviewers simply got the facts wrong. We can calibrate our assumptions about the accuracy and up-to-dateness of LessWrongers regarding technology generally. Or more specifically we can expose them to Kurzweil's arguments and see how much their assessment of his predictive success changes after hearing why he thinks he got a certain prediction 'correct'.

Comment author: mfb 22 January 2013 07:11:25PM *  0 points

I think (5.) can make a significant difference (together with 1 and 2 - I would not expect so much trouble from 3 and 4). Imagine a series of 4 statements where the last three basically require the first one. If all 4 are correct, it is easy to check each statement individually, giving 4 correct predictions. But if the first one is wrong - and the others have to be wrong as a consequence - Kurzweil might count the whole block as one wrong prediction.

For predictions judged by multiple volunteers, it might be interesting to check how much their judgements deviate from each other. This gives some insight into how important (1.) to (3.) are. satt looked at that, but I don't know what conclusion we can draw from it.

Comment author: mfb 06 January 2013 02:25:05PM 1 point

This might sound weird, but do we have any evidence that our time follows only the standard numbers (or a continuous version of them)? Is it even possible to get such evidence? Maybe our Turing machine (looking for contradictions in PA) stops at -2* - "we" cannot see it, as "we" live on the standard numbers only, experiencing only effects of previous standard numbers.
