Comment author: Vladimir_Nesov 01 June 2015 02:40:44AM *  10 points [-]

A lot of communication is about explaining what you mean, not about proving that something is true. In many cases you don't need to provide any evidence at all, as it's already available to, or trivially obtainable by, your audience; the bottleneck is knowing what to look for. So it may well be sufficient to give a bit of motivation to keep them learning (such as the beauty of the concepts, or of their presentation). The evidence about the truth of the eventual conclusions, or a clear idea of what they are, could remain irrelevant throughout.

Comment author: [deleted] 24 May 2015 05:28:57PM 0 points [-]

I'll think about how this can be phrased differently such that it might sway you. Given that you are not Valentine, is there a difference of opinion between his posts above and your views?

That part you pulled out and quoted is essentially what I was writing about in the OP. There is a philosophy-over-hard-subjects which is pursued here, in the sequences, and at FHI, and which is exemplified in the conclusions drawn by Bostrom in Superintelligence and by Yudkowsky in the later sequences. Sometimes it works: e.g. the argument in the sequences about the compatibility of determinism and free will works because it essentially shows how non-determinism and free will are incompatible -- it exposes a cached thought that free will == non-deterministic choice, which was never grounded in the first place. But on new subjects where you are not confused in the first place -- e.g. the nature and risk of superintelligence -- people seem to be using thought experiments alone to reach ungrounded conclusions, and not following up with empirical studies.

That is dangerous. If you allow yourself to reason from thought experiments alone, I can get you to believe almost anything. I can't get you to believe the sky is green (unless you've never seen the sky), but on anything for which you don't have experimental evidence available for or against, I can sway you in either direction. E.g. that consciousness is in the information being computed and not the computational process itself. That an AI takeoff would be hard, not soft, and basically uncontrollable. That boxing techniques are foredoomed to failure regardless of circumstances. That intelligence and values are orthogonal under all circumstances. That cryonics is an open-and-shut case. On these sorts of questions we need more, not less, experimentation.

When you hear a clever thought experiment that seems to demonstrate the truth of something you previously thought to have low probability, then (1) check if your priors here are inconsistent with each other; then (2) check if there is empirical data here that you have not fully updated on. If neither of those approaches resolves the issue, then (3) notice you are confused, and seek an experimental result to resolve the confusion. If you are truly unable to find an experimental test you can perform now, then (4) operate as if you do not know which of the possible theories is true.

You do not say "that thought experiment seemed convincing, so until I know otherwise I'll update in favor of it." That is the sort of thinking which led the ancients to believe that "All things come to rest eventually, so the natural state is a lack of motion. Planets continue in clockwork motion, so they must be a separate magisterium from earthly objects." You may think we as rationalists are above that mistake, but history has shown otherwise. Hindsight bias makes the Greeks seem a lot stupider than they actually were.

Take a concrete example: the physical origin of consciousness. We can rule out the naïve my-atoms-constitute-my-consciousness view from biological arguments. However I have been unable to find or construct for myself an experiment which would definitively rule out either the information-identity or computational-process theories, both of which are supported by available empirical evidence.

How is this relevant? Some are arguing for brain preservation instead of cryonics. But this only achieves personal longevity if the information-identity theory is correct, as it is destructive of the computational process. Cryonics, on the other hand, achieves personal longevity by preserving the computational substrate itself, which achieves both information- and computational-preservation. So unless there is a much larger difference in success likelihood than appears to be the case, my money (and my life) is on cryonics. Not because I think that the computational-process theory is correct (although I do have other weak evidence that makes it more likely), but because I can't rule it out as a possibility, so I must consider the case where destructive brain preservation gets popularized at the cost of fewer cryopreservations, and it turns out that personal longevity is achieved only with the preservation of computational processes. So I do not support the Brain Preservation Foundation.

To be clear, I think that arguing for destructive brain preservation at this point in time is a morally unconscionable thing to do, even though (exactly because!) we don't know the nature of consciousness and personal identity, and there is an alternative which is likely to work no matter how that problem is resolved.

Comment author: Vladimir_Nesov 24 May 2015 11:51:21PM *  3 points [-]

My point is that the very statements you are making, that we are all making all the time, are also very theory-loaded, "not followed up with empirical studies". This includes the statements about the need to follow things up with empirical studies. You can't escape the need for experimentally unverified theoretical judgement, and it does seem to work, even though I can't give you a well-designed experimental verification of that. Some well-designed studies even "prove" that ghosts exist.

The degree to which discussion of familiar topics is closer to observations than discussion of more theoretical topics is unclear, and the distinction should be cashed out as uncertainty on a case-by-case basis. Some very theoretical things are crystal clear math, more certain than the measurement of the charge of an electron.

That is dangerous.

Being wrong is dangerous. Not taking theoretical arguments into account can result in error. This statement probably wouldn't be much affected by further experimental verification. What specifically should be concluded depends on the problem, not on a vague outside measure of the problem like the degree to which it's removed from empirical study.

[...] anything you yourself don't have available experimental evidence for or against, I can sway you in either way. E.g. that consciousness is in information being computed and not the computational process itself.

Before considering the truth of a statement, we should first establish its meaning, which describes the conditions for judging its truth. For a vague idea, there are many alternative formulations of its meaning, and it may be unclear which one is interesting, but that's separate from the issue of thinking about any specific formulation clearly.

Comment author: [deleted] 24 May 2015 04:05:35AM *  0 points [-]

Perhaps you're using a Frequentist definition of "likelihood" whereas I'm using a Bayesian one?

There's a difference? Probability is probability.

So, if you mean to suggest that figuring out which hypothesis is worthy of testing does not involve altering our subjective likelihood that said hypothesis will turn out to be true, then I quite strongly disagree.

But if you mean that clever arguments can't change what's true even by a little bit, then of course I agree with you.

If you go about selecting a hypothesis by evaluating a space of hypotheses to see how they rate against your model of the world (whether you think they are true) and against each other (how much you stand to learn by testing them), you are essentially coming to reflective equilibrium regarding these hypotheses and your current beliefs. What I'm saying is that this shouldn't change your actual beliefs -- it will flush out some stale caching, or at best identify an inconsistent belief, including empirical data that you haven't fully updated on. But it does not, by itself, constitute evidence.

So a clever argument might reveal an inconsistency in your priors, which in turn might make you want seek out new evidence. But the argument itself is insufficient for drawing conclusions. Even if the hypothesis is itself hard to test.
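The distinction being drawn can be sketched numerically (a hypothetical illustration, not anything from the thread): under Bayes' rule, the posterior only moves away from the prior when some observation is more likely under one hypothesis than the other. An argument that supplies no such discriminating observation leaves the posterior where it started.

```python
def posterior(prior_h1, likelihood_h1, likelihood_h2):
    """Posterior P(H1 | obs) for two exhaustive hypotheses H1 and H2,
    given the prior P(H1) and the likelihoods P(obs | H1), P(obs | H2)."""
    prior_h2 = 1.0 - prior_h1
    numerator = prior_h1 * likelihood_h1
    return numerator / (numerator + prior_h2 * likelihood_h2)

# Genuine evidence: the observation is twice as likely under H1,
# so the posterior shifts from 0.5 toward H1 (roughly 0.667).
print(posterior(0.5, 0.8, 0.4))

# A clever but non-discriminating argument: both hypotheses "predict"
# it equally well, so the posterior equals the prior.
print(posterior(0.5, 0.8, 0.8))  # 0.5
```

This is just Bayes' theorem for a two-hypothesis partition; the numbers are made up for illustration.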

Comment author: Vladimir_Nesov 24 May 2015 10:36:34AM *  4 points [-]

[...] this shouldn't change your actual beliefs [...] it does not, by itself, constitute evidence [...] the argument itself is insufficient for drawing conclusions. Even if the hypothesis is itself hard to test.

Is that a conclusion or a hypothesis? I don't believe there is a fundamental distinction between "actual beliefs", "conclusions" and "hypotheses". What should it take to change my beliefs about this?

Comment author: Vladimir_Nesov 23 May 2015 05:41:53PM 0 points [-]

That's not a quote from the article you cite. Most of the text in what you include as a quote is instead from here, but the last statement is not there (there is a similar statement instead that doesn't include a link).

Comment author: DanielLC 19 May 2015 10:44:09PM 3 points [-]

Please get rid of the formatting. Copy it into notepad and then back or something.

Comment author: Vladimir_Nesov 22 May 2015 12:10:49PM 1 point [-]

Fixed.

Comment author: Vladimir_Nesov 20 May 2015 12:32:32AM *  6 points [-]

I knew that people thought I had bad social skills, but they weren't able to explain the situation to me in a way that I could understand, because they were totally misinterpreting me, on account of not knowing what was going on in my mind.

A useful analogy is bug reports made by non-programmer users. They are sometimes right that there is a problem, but most attempts on their part to formulate what it is, or, $DEITY forbid, what its cause is, are so confused that saying nothing would be an improvement. You have to reproduce and debug the problem yourself. Another example is writing: "the reader is always right" in the sense that an unexpected negative reaction is a flaw in your model of the reader's perception of your work, even if the reader is wrong about the reasons for their reaction.

Each clue about an error is a poorly or misleadingly stated bug report, and there is usually nobody qualified to investigate the issue if you don't do it yourself, of your own initiative: formulating hypotheses, running tests, observing responses.

Comment author: Vladimir_Nesov 16 May 2015 11:25:50AM *  0 points [-]

Did you intend to post five meetup articles for May-September at the same time? That's what happened.

Comment author: [deleted] 11 May 2015 07:57:00AM *  0 points [-]

https://en.wikipedia.org/wiki/Variable_(mathematics)#Genesisandevolutionofthe_concept

Also, being very frugal with token length seems to have been a thing into the 1960s; see Unix, e.g. "ls -l" instead of the far more human-eye-friendly "list -long". I don't exactly understand why, but apparently this wasn't really a priority until about, say, 1995, when more and more programmers said fsck Perl with its unreadably frugal letter soup and used stuff like Python, where things are expressed in actual words.

I guess there are good reasons behind it. I still don't have to like it.

In response to comment by [deleted] on Is Scott Alexander bad at math?
Comment author: Vladimir_Nesov 14 May 2015 10:59:13AM *  3 points [-]

To get the link
https://en.wikipedia.org/wiki/Variable_(mathematics)#Genesis_and_evolution_of_the_concept
use the following code in your comment:

[https://en.wikipedia.org/wiki/Variable\_(mathematics)#Genesis\_and\_evolution\_of\_the\_concept](https://en.wikipedia.org/wiki/Variable_(mathematics\)#Genesis_and_evolution_of_the_concept)

See Comment formatting/Escaping special symbols on the wiki for more details (I've backslash-escaped underscores _ in the text part of the link to avoid their turning surrounding texts into italics, and the closing round bracket in the URL part of the link to avoid its interpretation as the end of the URL).

Comment author: JonahSinick 06 May 2015 10:40:20PM 1 point [-]

I'm knowingly breaking social norms. I reject the social norms that are in place as maladaptive, in the same way that Martin Luther King rejected social norms around segregation as maladaptive.

And no, I'm not going to apologize for analogizing myself to Martin Luther King on account of it coming across as a status grab: even if I'm totally inconsequential, I still identify with him strongly, and whatever other people think, it's not a status grab.

Comment author: Vladimir_Nesov 06 May 2015 11:15:26PM 6 points [-]

I reject the social norms that are in place as maladaptive

Do you expect the social norms to accept your arguments, and should they, given the evidence (i.e. what is the role of addressing them in this context, expressing disapproval of certain responses)? That's the frustration of hard-to-communicate facts: you can (1) give up, (2) turn to the dark side and cut through your audience's epistemology with a machete, insisting that they accept the conclusion based on insufficient evidence and appeals to on-reflection irrelevant things, or (3) put in so much work that the result isn't worth the trouble.

(I personally dislike the machete more than the breaking of social norms, but that might be unusual.)

Comment author: V_V 06 May 2015 10:12:19PM *  1 point [-]

The relevant difference is in isolation and formulation of side effects, which encourages formulation of more pieces of code whose behavior can be understood precisely in most situations. The toolset of functional programming is usually better for writing higher order code that keeps the sources of side effects abstract, so that they are put back separately, without affecting the rest of the code. As a result, a lot of code can have well-defined behavior that's not disrupted by context in which it's used.

Yes, that's how it was intended to be and how they spin it, but in practice the abstraction is leaky, and it leaks in bad, difficult-to-predict ways; therefore, as I said, you end up with things like having to test for memory leaks, something that is usually not an issue in "imperative" languages like Java, C# or Python.

I like the functional paradigm inside a good multi-paradigm language: passing around closures as first-class objects is much cleaner and more concise than fiddling with subclasses and virtual methods, but forcing immutability and lazy evaluation as the main principles of the language doesn't seem to be a good design choice. It forces you to jump through hoops to implement common functionality like interaction, logging or configuration, and in return it doesn't deliver the higher modularity and intelligibility that were promised.
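The closures-vs-subclasses contrast can be made concrete with a small sketch (the names here are illustrative, not from the thread): the same parameterized behavior expressed first as a first-class function, then via a subclass overriding a virtual method.

```python
# Closure style: the behavior is just an argument.
def apply_twice(f, x):
    """Apply a one-argument function f to x twice."""
    return f(f(x))

print(apply_twice(lambda n: n + 3, 10))  # 16

# Subclass style: the same behavior requires a class hierarchy
# and a method override.
class Transform:
    def step(self, x):  # the "virtual method" to override
        raise NotImplementedError

class AddThree(Transform):
    def step(self, x):
        return x + 3

def apply_twice_oo(t, x):
    return t.step(t.step(x))

print(apply_twice_oo(AddThree(), 10))  # 16
```

Both compute the same thing; the closure version simply skips the boilerplate of declaring a class per behavior.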

Anyway, we are going OT.

Comment author: Vladimir_Nesov 06 May 2015 10:49:36PM 0 points [-]

Agreed. Abstractions are still leaky, and where some pathologies in abstraction (i.e. in a human-understandable precise formulation) can be made much less of an issue by using functional tools and types, others tend to surface that are only rarely a problem for more concrete code. In practice the tradeoff is not one-sided, and its structure is useful for making decisions in particular cases.
