
Comment author: IlyaShpitser 09 November 2017 10:04:58PM *  2 points [-]

"explanation", as far as the concept can be modelled mathematically, is fitness to data and low complexity

Nope. To explain, i.e. to describe "why" something happened, is to talk about causes and effects. At least that's the way people use that word in practice.

Prediction and explanation are very very different.

Comment author: MrMind 10 November 2017 11:32:40AM *  0 points [-]

To explain, i.e. to describe "why" something happened, is to talk about causes and effects.

I would still say that cause and effect form a subset of the kinds of models used in statistics. A case in point is Bayesian networks, which can accommodate both probabilistic and causal relations.
I'm aware that Judea Pearl and probably others reverse the picture, and think that C&E are the real relations, which are only approximated in our minds as probabilistic relations. On that, I would say that quantum mechanics seems to point out that there is something fundamentally undetermined about our relation with cause and effect. Also, causal relations are very useful in physics, but one may want to use other models where physics is not especially relevant.
From what one may call an "instrumentalist" point of view, time is a dimension so universal that any model can compress information by incorporating it, but it is not necessarily so: general relativity shows us that you can compress a lot of information by not explicitly talking about time, and thus by sidestepping clean causal relations (what is a cause in one reference frame is an effect in another).
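To make the distinction concrete, here is a minimal sketch (all numbers made up) of how a single Bayesian network supports both kinds of relation: conditioning on an observation and intervening on a variable, in the spirit of Pearl's do-calculus, give different answers.

```python
# A minimal sketch (made-up numbers): a three-node Bayesian network
# Z -> X, Z -> Y, X -> Y, where Z confounds X and Y.
p_z = {0: 0.5, 1: 0.5}                      # P(Z = z)
p_x_given_z = {0: 0.2, 1: 0.8}              # P(X = 1 | Z = z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,   # P(Y = 1 | X = x, Z = z)
                (1, 0): 0.4, (1, 1): 0.9}

# Probabilistic relation: P(Y=1 | X=1). Observing X=1 also shifts
# our beliefs about the confounder Z.
p_x1 = sum(p_x_given_z[z] * p_z[z] for z in (0, 1))
p_y_obs = sum(p_y_given_xz[(1, z)] * p_x_given_z[z] * p_z[z]
              for z in (0, 1)) / p_x1

# Causal relation: P(Y=1 | do(X=1)). Setting X=1 by fiat leaves Z at
# its prior distribution (Pearl's back-door adjustment).
p_y_do = sum(p_y_given_xz[(1, z)] * p_z[z] for z in (0, 1))

print(p_y_obs, p_y_do)  # ~0.80 vs 0.65: the two relations differ
```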

Prediction and explanation are very very different.

I'm not aware of a theory or a model that uses vastly different entities to explain and to predict. The typical case of a physical law posits an ontology governed by a stable relation, thus using the very same pieces to explain the past and predict the future. Besides, such a model would be very difficult to tune: any set of data can be partitioned any way you like between training and test, and it seems odd that a model should be so dependent on the experimenter's intent.

Comment author: MrMind 09 November 2017 01:02:29PM *  0 points [-]

By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories [...]

Phew, I thought for a moment he was about to refute the actual Bayesian philosophy of science...

Snark aside, as others have noticed, point 1 is highly problematic. From a broader perspective, if Bayesian probability has to inform the practice of science, then a scientist should be wary of the concept of truth. Once a model has reached probability 1, it becomes an unwieldy object: it cannot be swayed by further, contrary evidence, and if we ever encounter an impossible piece of data (impossible for that model), the whole system breaks down. It is then considered good practice to always hedge models with a small probability for 'unknown unknowns', even with our most certain beliefs. After all, humans are finite and the universe is much, much bigger.
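A minimal sketch of why probability 1 is unwieldy (the numbers are hypothetical): a hypothesis assigned probability exactly 1 cannot be moved by any Bayes update, whereas a hedged prior can, and data that is impossible under the model breaks the update altogether.

```python
# A minimal sketch: a prior of exactly 1 cannot be swayed by evidence.
def bayes_update(prior, lik_if_true, lik_if_false):
    """Posterior probability of a binary hypothesis after one datum."""
    numer = prior * lik_if_true
    return numer / (numer + (1 - prior) * lik_if_false)

# The datum is 100x more likely if the hypothesis is false...
print(bayes_update(1.0, 0.001, 0.1))    # ...yet the posterior stays 1.0
print(bayes_update(0.999, 0.001, 0.1))  # a hedged prior drops to ~0.91

# And data that is impossible under a probability-1 model breaks the
# machinery entirely: bayes_update(1.0, 0.0, 0.1) -> ZeroDivisionError
```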

On the other hand, I don't think it's fair to say that the objective of science is either to "just explain" or "just predict". Both views are unified and expanded by the Bayesian perspective: "explanation", as far as the concept can be modelled mathematically, is fitness to data and low complexity. Predictive power, in turn, is fitness to future data, which can only be checked once that data has been acquired. What is one man's prediction can be another man's explanation.
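A small synthetic sketch of the difference (assuming numpy; the data and models are made up): a high-complexity model can fit past data better while doing far worse on future data, which is exactly why low complexity has to enter the mathematical notion of explanation.

```python
# A minimal sketch (synthetic data, assuming numpy): a complex model can
# "explain" past data better while predicting future data far worse.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2 * x + rng.normal(0, 0.3, x.size)   # the truth: a simple linear law
past, future = slice(0, 20), slice(20, 40)

for degree in (1, 9):
    coeffs = np.polyfit(x[past], y[past], degree)
    mse_past = np.mean((np.polyval(coeffs, x[past]) - y[past]) ** 2)
    mse_future = np.mean((np.polyval(coeffs, x[future]) - y[future]) ** 2)
    print(degree, mse_past, mse_future)
# The degree-9 polynomial fits the past better (lower mse_past) but its
# predictions of the future are far worse -- hence the low-complexity
# requirement in the mathematical notion of explanation.
```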

Comment author: MrMind 09 November 2017 11:29:34AM 1 point [-]

In my understanding, there’s no one who speaks for LW, as its representative, and is responsible for addressing questions and criticisms.

Exactly. That is by design. See the title of the site? It doesn't say "MoreRight". Here even Yudkowsky, the Founding Father, was frequently disagreed with.
This is the School-less school.

Comment author: Erfeyah 06 October 2017 07:44:18PM *  0 points [-]

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same.

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis? If that is the case, I would refer you to this article’s section Misunderstandings of the Thesis. If I have misunderstood, I would be grateful if you could offer some more details on your point.

Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

We can demonstrate the erroneous logic of this statement by saying something like: “Indeed, not even language is based on symbolic manipulation: at the deepest level, it's all sound waves pushing air particles back and forth”.

As Searle points out, the meaning of zeros, ones, logic gates, etc. is observer-relative, in the same way money (not the paper, the meaning) is observer-relative and thus ontologically subjective. The electrons are indeed ontologically objective, but that is not true of the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).

Comment author: MrMind 09 October 2017 12:16:14PM *  0 points [-]

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis?

No, otherwise we would have the certainty that the brain is Turing-equivalent, and I wouldn't have prefaced it with "Either the brain is capable of doing things that would require infinite resources for a computer to perform". We do not have proof that everything not calculable by a Turing machine requires infinite resources, otherwise Church-Turing would be a theorem and not a thesis, but we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether it's infinite time, infinite energy, or infinite precision). Plus, recently we had this theorem: any function on the naturals is computable by some machine in some non-standard time.
So either the brain can compute things that a computer would take infinite resources to do, or the brain is at most as powerful as a Turing machine.

As for the electron thing, there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

Comment author: Erfeyah 05 October 2017 07:09:39PM *  0 points [-]

Hmm.. I do not think that is what I mean, no. I lean towards agreeing with Searle's conclusion but I am examining my thought process for errors.

Searle's argument is not that consciousness is not created in the brain. It is that it is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and thinks). He does not deny that we might discover the architecture of the brain in the future. All he does is demonstrate through analogy how syntactic operations work.

In the Chinese gym rebuttal the issue is not really addressed. Searle does not deny that the brain is a system, with subcomponents, through whose structure consciousness emerges. That is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation.

Since the neuroscience does not support the digital information processing view, where is the certainty of the community coming from? Am I missing something fundamental here?

Comment author: MrMind 06 October 2017 10:21:49AM 0 points [-]

It is that it is not based on syntactic symbol manipulation in the way a computer is, and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates, etc.), as the AI community thought (and thinks).

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same. Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

Comment author: CellBioGuy 04 October 2017 09:36:26PM *  2 points [-]

Latest results on KIC 8462852 / Boyajian's Star:

After comparing data from Spitzer and Swift (an infrared and an ultraviolet telescope, respectively), it appears that, whatever the heck the three-dimensional distribution of the material causing the brightness dips may be, the long-term secular dimming of the star is being caused by dust. Over the course of a year of observations the star dimmed less in the infrared than in the ultraviolet, with the light extinction depending on wavelength in a way that screams dust of a size larger than primordial interstellar dust (and thus likely in the star system rather than somewhere between us), but still dust.

Still a weird situation. There cannot be a very large amount of dust in total since there is no infrared excess, so we must be seeing small amounts of it pass directly between the star and us.

The dipping is also semiperiodic, to the point that a complex of dips beginning in May was predicted months in advance.

Comment author: MrMind 06 October 2017 10:11:48AM 0 points [-]

That's interesting... is the dust size still consistent with artificial objects?

Comment author: abcdef 29 September 2017 12:41:07PM 0 points [-]

Sorry, I don't follow. What do you mean by starting assumptions and models, and by saying that I should have more than one for each entity?

Comment author: MrMind 29 September 2017 03:34:51PM 0 points [-]

Well, to calculate P(T|S) = p you need a model of how a student 'works', in such a way that the test result T happens for students of kind S with probability p. Or you can calculate P(S|T), thereby having a model of how a test 'works' by producing students of kind S with probability p.
If you have only one of those, that is the only thing you can calculate.

If, on the other hand, you have one or more complementary models (complementary here means that they exclude each other and form a complete set), then you can calculate the probabilities P(T1|S1), P(T1|S2), P(T2|S1) and P(T2|S2). With these numbers, via Bayes, you have both P(T|S) and P(S|T), so it's up to you to decide whether you're analyzing students or tests.
Usually one is more natural than the other, but it's up to you, since they're models anyway.
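Here's a minimal worked sketch of that (the numbers are made up): two complementary student models and two test outcomes, with Bayes turning P(T|S) into P(S|T).

```python
# A minimal worked sketch (made-up numbers). Two complementary student
# models S1 = "prepared", S2 = "unprepared"; test outcomes T1 = "pass",
# T2 = "fail". Bayes turns the model P(T|S) into P(S|T).
p_s = {"S1": 0.6, "S2": 0.4}                     # prior over student models
p_t_given_s = {("T1", "S1"): 0.9, ("T2", "S1"): 0.1,
               ("T1", "S2"): 0.3, ("T2", "S2"): 0.7}

def p_s_given_t(s, t):
    numer = p_t_given_s[(t, s)] * p_s[s]
    denom = sum(p_t_given_s[(t, s2)] * p_s[s2] for s2 in p_s)
    return numer / denom

print(p_s_given_t("S1", "T1"))  # P(prepared | pass) ~= 0.82
```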

Comment author: abcdef 28 September 2017 12:21:46PM *  0 points [-]

I'm not a statistician, but I happen to have some intuitions and sometimes work out formulas or find them on the web.

I have a bunch of students who took a test each day. Each day's test had a threshold score out of, say, 100 points. Scores under the threshold are considered insufficient.

I don't know which of the two is true:

  1. I can either use the tests to evaluate the students, or the students to evaluate the tests.

  2. I can evaluate the students using the tests and the tests using the students at the same time.

Option 2 seems counterintuitive at first sight, especially if one wants to be epistemically sound. It seems more intuitive at second sight, though. I think it might be analogous to how you can evaluate a circular flow of feedback by using linear algebra (cf. the LW 2.0 discussions).

Some other context: in my evaluation model I would rather not only consider whether the scores were sufficient, but also how far above or below the threshold they were, possibly after suitably transforming them. Also, I want the weights of the scores to decay exponentially over time. And I would rather use a Bayesian approach.

Is this reasonable, and where can I find instructions on how to do so?
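A minimal sketch of the exponential-decay part of this (the data is hypothetical, and the score transform and the Bayesian layer are left out): weight each day's margin over that day's threshold by a decay factor per day of age.

```python
# A minimal sketch (hypothetical data): exponentially decayed average of
# each day's margin over that day's threshold, recent days weighted most.
scores = [55, 70, 48, 80]        # one score per day, oldest first
thresholds = [60, 60, 50, 70]    # that day's sufficiency threshold
decay = 0.8                      # weight multiplier per day of age

margins = [s - t for s, t in zip(scores, thresholds)]
weights = [decay ** age for age in range(len(margins) - 1, -1, -1)]
avg = sum(w * m for w, m in zip(weights, margins)) / sum(weights)
print(avg)  # > 0 means on balance sufficient, recent days dominating
```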

Comment author: MrMind 29 September 2017 09:35:06AM 0 points [-]

From a Bayesian perspective, you calculate P(S|T) and P(T|S) at the same time, so it doesn't really matter. What does matter, and greatly, are your starting assumptions and models: if you have only one for each entity, you won't be able to calculate how much some datum counts as evidence for or against your model.

Comment author: MrMind 25 September 2017 10:30:05AM 0 points [-]

What is the proper channel for reporting a bug?

Comment author: Lumifer 18 September 2017 04:00:11PM 1 point [-]

The IgNobels for 2017 are out.

I think LW should re-focus on more important issues under discussion in peer-reviewed science, e.g. "Never Smile at a Crocodile: Betting on Electronic Gaming Machines is Intensified by Reptile-Induced Arousal" (link)

Comment author: MrMind 19 September 2017 07:54:18AM 0 points [-]

Wonderful as always!
