Comment author: Gondolinian 03 June 2015 02:22:49PM *  1 point [-]

What Is Mathematics? was the only one I was able to find from a local library. I've put a request in for it and I should be getting it soon. Thanks for the recommendation; if it helps me to not hate math then I might be able to do something actually useful for existential risk reduction.

Comment author: Vladimir_Nesov 03 June 2015 07:45:20PM 1 point [-]

These are available on Library Genesis.

Also, "What is Mathematics?" is more serious than the other two. "The Shape of Space" is probably the easiest and most fun, and "The Enjoyment of Math" is a collection of almost completely independent small pieces that don't assume any background, but some of them are a bit involved for something that doesn't assume any background.

Comment author: JonahSinick 01 June 2015 02:51:06AM 1 point [-]

So it may well be sufficient to provide a bit of motivation to keep them learning, with evidence about truth of the eventual conclusions irrelevant throughout.

Are you saying that it might be best to provide no evidence, and instead just give references?

Comment author: Vladimir_Nesov 01 June 2015 03:12:43AM *  4 points [-]

It's often possible to sidestep the difficulty of communicating evidence by focusing on explaining relevant concepts, which usually doesn't require evidence (or references), except as clarifying further reading. Evidence may be useful as motivation, when it's easier to communicate in the outline than the concepts, but not otherwise. And after the concepts are clear, evidence may become easier to communicate.

(Imagine trying to convince a denizen of Ancient Greece that there is a supermassive black hole at the center of our galaxy. You won't get to presenting actual astronomical observations for quite some time, and might start with the entirely theoretical geometry and mechanics. Even the mechanics doesn't have to be motivated by experimental verification, as it's interesting as mathematics on its own. And mentioning black holes may be ill-advised at that stage.)

Comment author: Vladimir_Nesov 01 June 2015 02:40:44AM *  10 points [-]

A lot of communication is about explaining what you mean, not about proving that something is true. In many cases, you don't need to provide any evidence at all, as it's already available to or trivially obtainable by your audience; the bottleneck is knowing what to look for. So it may well be sufficient to give a bit of motivation to keep them learning (such as the beauty of the concepts, or of their presentation). The evidence about the truth of the eventual conclusions, or a clear idea of what they are, could remain irrelevant throughout.

Comment author: [deleted] 24 May 2015 05:28:57PM 0 points [-]

I'll think about how this can be phrased differently such that it might sway you. Given that you are not Valentine, is there a difference of opinion between his posts above and your views?

That part you pulled out and quoted is essentially what I was writing about in the OP. There is a philosophy-over-hard-subjects which is pursued here, in the sequences, at FHI, and is exemplified in the conclusions drawn by Bostrom in Superintelligence, and Yudkowsky in the later sequences. Sometimes it works, e.g. the argument in the sequences about the compatibility of determinism and free will works because it essentially shows how non-determinism and free will are incompatible--it exposes a cached thought that free-will == non-deterministic choice which was never grounded in the first place. But over new subjects where you are not confused in the first place -- e.g. the nature and risk of superintelligence -- people seem to be using thought experiments alone to reach ungrounded conclusions, and not following up with empirical studies.

That is dangerous. If you allow yourself to reason from thought experiments alone, I can get you to believe almost anything. I can't get you to believe the sky is green--unless you've never seen the sky--but anything you yourself don't have available experimental evidence for or against, I can sway you in either way. E.g. that consciousness is in information being computed and not the computational process itself. That an AI takeoff would be hard, not soft, and basically uncontrollable. That boxing techniques are foredoomed to failure regardless of circumstances. That intelligence and values are orthogonal under all circumstances. That cryonics is an open-and-shut case. On these sorts of questions we need more, not less, experimentation.

When you hear a clever thought experiment that seems to demonstrate the truth of something you previously thought to have low probability, then (1) check if your priors here are inconsistent with each other; then (2) check if there is empirical data here that you have not fully updated on. If neither of those approaches resolves the issue, then (3) notice you are confused, and seek an experimental result to resolve the confusion. If you are truly unable to find an experimental test you can perform now, then (4) operate as if you do not know which of the possible theories is true.

You do not say "that thought experiment seemed convincing, so until I know otherwise I'll update in favor of it." That is the sort of thinking that led the ancients to believe that "All things come to rest eventually, so the natural state is a lack of motion. Planets continue in clockwork motion, so they must be a separate magisterium from earthly objects." You may think we as rationalists are above that mistake, but history has shown otherwise. Hindsight bias makes the Greeks seem a lot stupider than they actually were.

Take a concrete example: the physical origin of consciousness. We can rule out the naïve my-atoms-constitute-my-consciousness view from biological arguments. However I have been unable to find or construct for myself an experiment which would definitively rule out either the information-identity or computational-process theories, both of which are supported by available empirical evidence.

How is this relevant? Some are arguing for brain preservation instead of cryonics. But brain preservation only achieves personal longevity if the information-identity theory is correct, since it destroys the computational process. Cryonics, on the other hand, preserves the computational substrate itself, which achieves both information- and computational-preservation. So unless there is a much larger difference in success likelihood than appears to be the case, my money (and my life) is on cryonics. Not because I think the computational-process theory is correct (although I do have other weak evidence that makes it more likely), but because I can't rule it out as a possibility. I must therefore consider the case where destructive brain preservation gets popularized at the cost of fewer cryopreservations, and it turns out that personal longevity is only achieved with the preservation of computational processes. So I do not support the Brain Preservation Foundation.

To be clear, I think that arguing for destructive brain preservation at this point in time is a morally unconscionable thing to do, even though (exactly because!) we don't know the nature of consciousness and personal identity, and there is an alternative which is likely to work no matter how that problem is resolved.

Comment author: Vladimir_Nesov 24 May 2015 11:51:21PM *  3 points [-]

My point is that the very statements you are making, that we are all making all the time, are also very theory-loaded, "not followed up with empirical studies". This includes the statements about the need to follow things up with empirical studies. You can't escape the need for experimentally unverified theoretical judgement, and it does seem to work, even though I can't give you a well-designed experimental verification of that. Some well-designed studies even prove that ghosts exist.

The degree to which discussion of familiar topics is closer to observations than discussion of more theoretical topics is unclear, and the distinction should be cashed out as uncertainty on a case-by-case basis. Some very theoretical things are crystal clear math, more certain than the measurement of the charge of an electron.

That is dangerous.

Being wrong is dangerous. Not taking theoretical arguments into account can result in error. This statement probably wouldn't be much affected by further experimental verification. What specifically should be concluded depends on the problem, not on a vague outside measure of the problem like the degree to which it's removed from empirical study.

[...] anything you yourself don't have available experimental evidence for or against, I can sway you in either way. E.g. that consciousness is in information being computed and not the computational process itself.

Before considering the truth of a statement, we should first establish its meaning, which describes the conditions for judging its truth. For a vague idea, there are many alternative formulations of its meaning, and it may be unclear which one is interesting, but that's separate from the issue of thinking about any specific formulation clearly.

Comment author: RichardKennaway 24 May 2015 11:30:33AM 2 points [-]

Er, what?

Comment author: Vladimir_Nesov 24 May 2015 11:36:36AM *  5 points [-]

My guess is that this is a comment by the same user who posted the article. Within a few minutes of the comment appearing (when I saw it), the article's karma was at -1, and the usernames used to post both the article and the comment were deleted. Perhaps the user didn't like the downvoting of their article and reacted by deleting their account.

Comment author: [deleted] 24 May 2015 04:05:35AM *  0 points [-]

Perhaps you're using a Frequentist definition of "likelihood" whereas I'm using a Bayesian one?

There's a difference? Probability is probability.

So, if you mean to suggest that figuring out which hypothesis is worthy of testing does not involve altering our subjective likelihood that said hypothesis will turn out to be true, then I quite strongly disagree.

But if you mean that clever arguments can't change what's true even by a little bit, then of course I agree with you.

If you go about selecting a hypothesis by evaluating a space of hypotheses to see how they rate against your model of the world (whether you think they are true) and against each other (how much you stand to learn by testing them), you are essentially coming to reflective equilibrium regarding these hypotheses and your current beliefs. What I'm saying is that this shouldn't change your actual beliefs -- it will flush out some stale caching, or at best identify an inconsistent belief, including empirical data that you haven't fully updated on. But it does not, by itself, constitute evidence.

So a clever argument might reveal an inconsistency in your priors, which in turn might make you want seek out new evidence. But the argument itself is insufficient for drawing conclusions. Even if the hypothesis is itself hard to test.

Comment author: Vladimir_Nesov 24 May 2015 10:36:34AM *  4 points [-]

[...] this shouldn't change your actual beliefs [...] it does not, by itself, constitute evidence [...] the argument itself is insufficient for drawing conclusions. Even if the hypothesis is itself hard to test.

Is that a conclusion or a hypothesis? I don't believe there is a fundamental distinction between "actual beliefs", "conclusions" and "hypotheses". What should it take to change my beliefs about this?

Comment author: Vladimir_Nesov 23 May 2015 05:41:53PM 0 points [-]

That's not a quote from the article you cite. Most of the text in what you include as a quote is instead from here, but the last statement is not there (there is a similar statement instead that doesn't include a link).

Comment author: DanielLC 19 May 2015 10:44:09PM 3 points [-]

Please get rid of the formatting. Copy it into notepad and then back or something.

Comment author: Vladimir_Nesov 22 May 2015 12:10:49PM 1 point [-]

Fixed.

Comment author: Vladimir_Nesov 20 May 2015 12:32:32AM *  6 points [-]

I knew that people thought I had bad social skills, but they weren't able to explain the situation to me in a way that I could understand, because they were totally misinterpreting me, on account of not knowing what was going on in my mind.

A useful analogy is bug reports made by non-programmer users. They are sometimes right that there is a problem, but most attempts on their part to formulate what it is, or $DEITY forbid what is causing it, are so confused that saying nothing would be an improvement. You have to reproduce and debug the problem yourself. Another example is writing: "the reader is always right" in the sense that an unexpected negative reaction is a flaw in your model of the reader's perception of your work, even if the reader is wrong about the reasons for their reaction.

Each clue about an error is a poorly or misleadingly stated bug report, and there is usually nobody qualified to investigate the issue if you don't do it yourself, of your own initiative: formulating hypotheses, running tests, observing responses.

Comment author: Vladimir_Nesov 16 May 2015 11:25:50AM *  0 points [-]

Did you intend to post five meetup articles for May-September at the same time? That's what happened.
