Comment author: eternal_neophyte 27 June 2015 03:32:30PM *  0 points [-]

If the brain were rewired to find lemons sweet, would sweetness then be an objective quality of lemons?

Comment author: gurugeorge 30 June 2015 12:05:47PM 0 points [-]

Yes, for that person. Remember, we're not talking about an intrinsic or inherent quality, but an objective quality. Test it however many times you like: the lemon will be sweet to that person - i.e. it's an objective quality of the lemon for that person.

Or to put it another way, the lemon is consistently "giving off" the same set of causal effects, which produce "tart" in one person and "sweet" in another.

The initial oddness arises precisely because we think "sweetness" must itself be an intrinsic quality of something, because there are several hundred years of bad philosophy telling us there are qualia, which are intrinsically private, intrinsically subjective, etc.

Comment author: gurugeorge 27 June 2015 01:37:39PM *  0 points [-]

Sweetness isn't an intrinsic property of the thing, but it is a relational property of the thing - i.e. the thing's sweetness comes into existence when we (with our particular characteristics) interact with it. And objectively so.

It's not right to mix up "intrinsic" or "inherent" with "objective". They're different things. A property doesn't have to be intrinsic in order to be objective.

So sweetness isn't a property of the mental model either.

It's an objective quality (of a thing) that arises only in its interaction with us. An analogy would be how we're parents to our children, colleagues to our co-workers, lovers to our lovers. We are not parents to our lovers, or intrinsically or inherently parents, but that doesn't mean our parenthood towards our children is solely a property of our children's perception, or that we're not really parents because we're not parents to our lovers.

And I think Dennett would say something like this too; he's very much against "qualia" (at least to a large degree, he does allow some use of the concept, just not the full-on traditional use).

When we imagine, visualize or dream things, it's like the activation of our half of the interaction on its own. The other half that would normally make up a veridical perception isn't there, just our half.

Comment author: Curt_Welch 23 June 2015 01:41:33AM 9 points [-]

If the brain were naturally a universal learner, then surely we wouldn't have to learn universal learning (e.g. we wouldn't have to learn to overcome cognitive biases, Bayesian reasoning wouldn't be a recent discovery, etc.)? The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.

You are conflating the ideas of universal learning and rational thinking. They are not the same thing.

I'm a strong believer in the idea that human intelligence emerges from a strong general-purpose reinforcement learning algorithm. If that's true, then it's very consistent with our problems of cognitive bias.

If the RL idea is correct, then thinking is best understood as a learned behavior, just as the words we speak with our lips are learned behaviors, and just as how we move our arms and legs is a learned behavior. Under the principle that we are an RL machine, what we learn is ANY behavior which helps us to maximize our reward signal.

We don't learn rational behavior; we learn whatever behavior the learning system has computed is needed to produce the most rewards. And in this case, our prime rewards are just those things which give us pleasure and which reduce pain.

If we live in an environment that gives us rewards when we say "I believe God is real, the Bible is the book of God, and the Earth is 10,000 years old," then we will say those words. We will do ANYTHING that works to maximize rewards in our environment. We will not only say them, we will believe them in our core. If we are conditioned by our environment to believe these things, that is what we will believe.

If we live in an environment that trains us to look at the data and draw conclusions based on what the data tells us (follow the behavior of a rational scientist), then we will act that way instead.

A universal learner can learn to act in any way it needs to in order to maximize rewards.
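To make that concrete, here's a minimal sketch of the point in code: a toy epsilon-greedy learner (the environments, actions, and reward values are invented purely for illustration; this is not a model of the brain) that acquires whichever "belief-expressing" behavior its environment happens to reward.

```python
import random

# Toy sketch: a reward-maximizing learner acquires whatever behavior
# its environment rewards. Environments, actions, and reward values
# are invented for illustration; this is an epsilon-greedy bandit,
# nothing brain-like.

ACTIONS = ["assert_dogma", "check_the_data"]

def reward(action, environment):
    # A "dogmatic" environment rewards the dogmatic answer; a
    # "scientific" one rewards checking the data.
    if environment == "dogmatic":
        return 1.0 if action == "assert_dogma" else 0.0
    return 1.0 if action == "check_the_data" else 0.0

def train(environment, steps=1000, epsilon=0.1, lr=0.1):
    value = {a: 0.0 for a in ACTIONS}  # learned value estimates
    for _ in range(steps):
        if random.random() < epsilon:        # occasionally explore
            a = random.choice(ACTIONS)
        else:                                # otherwise exploit
            a = max(ACTIONS, key=value.get)
        value[a] += lr * (reward(a, environment) - value[a])
    return max(ACTIONS, key=value.get)

print(train("dogmatic"))    # -> assert_dogma
print(train("scientific"))  # -> check_the_data
```

The identical learning rule ends up voicing opposite "beliefs" in the two environments; nothing in the learner itself favors the rational one.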

That's what our cognitive bias is -- our brain's drive to act as our past experience has trained us to, not to act rationally.

To learn to act rationally, we must be carefully trained to act rationally -- which is why the ideas of Less Wrong are needed to overcome our bias.

Also keep in mind that the purpose of the human brain is to control our actions -- and for controlling actions, speed is critical. Our brain is best understood not as a "thinking machine" but rather as a reaction machine -- a machine that must choose a course of action in a very short time frame (like 0.1 seconds) -- so that when needed, we can quickly react to an external danger that is trying to kill us, from a bear attacking us to a gust of wind that almost pushed us over the edge of a cliff.

So what the brain needs to learn, as a universal learner, is an internal "program" of quick heuristics for responding instantly to any environmental stimulus. We learn (universally) how to react, not how to "think rationally".

A process like thinking rationally is a large set of learned micro-reactions -- one that takes a long time to assemble and perfect. To be a good rational thinker, we have to overcome all the learned reactions that have helped us gain rewards in the past but which have been shown not to be the actions of a rational thinker. We have to help train each other to spot false behaviors, and train each other to have only rational behaviors -- when we try to engage in rational behavior, that is.

Most of our life, we don't need rational behavior -- we need accurate reward-maximizing behavior. But when we choose to engage in a rational thought and analysis process, we want to do our best to be rational, and not let our learned reactions (cognitive biases) trick us into believing we are being rational when in fact we are just reward-seeking.

So our universal learning could be a reward-maximizing process, and if it is, then that explains why we have strong cognitive biases; cognitive bias is not an argument against universal learning. This is because our reward function is not wired to make us maximize rationality -- it's wired to make us act in whatever way is needed to maximize pleasure and minimize pain. Only if we immerse ourselves in an environment that rewards us for rational thinking behaviors do those behaviors emerge in us.

Comment author: gurugeorge 23 June 2015 02:13:53PM *  0 points [-]

Hmm, but isn't this conflating "learning" in the sense of "learning about the world/nature" with "learning" in the sense of "learning behaviours"? We know the brain can do the latter, it's whether it can do the former that we're interested in, surely?

IOW, it looks like you're saying precisely that the brain is not a ULM (in the sense of a machine that learns about nature), it is rather a machine that approximates a ULM by cobbling together a bunch of evolved and learned behaviours.

It's adept at learning (in the sense of learning reactive behaviours that satisfice conditions) but only proximally adept at learning about the world.

Comment author: jacob_cannell 21 June 2015 08:38:43PM 3 points [-]

Ah ok your gerrymandering analogy now makes sense.

That was my sketchy understanding of how it works from evol psych and things like Dennett's books, Pinker, etc.

I think that's a good summary of the evolved modularity hypothesis. It turns out that we can actually look into the brain and test that hypothesis. Those tests were done, and lo and behold, the brain doesn't work that way. The universal learning hypothesis emerged as the new theory to explain the new neuroscience data from the last decade or so.

So basically this is what the article is all about. You said earlier you skimmed it, so perhaps I need a better abstract or summary at the top, as oge suggested.

Furthermore, I thought the rationale of this explanation was that it's hard to see how a universal learning machine can get off the ground evolutionarily (it's going to be energetically expensive, not fast enough, etc.) whereas task-specific gadgets are easier to evolve ("need to know" principle),

This is a pretty good-sounding rationale. It's also probably wrong. It turns out a small ULM is relatively easy to specify, and is also completely compatible with innate task-specific gadgetry. In other words, the universal learning machinery has very few drawbacks. All vertebrates have a similar core architecture based on the basal ganglia. In large-brained mammals, the general-purpose coprocessors (neocortex, cerebellum) are just expanded more than other structures.

In particular, it looks like the brainstem has a bunch of old innate circuitry that the cortex and BG learn how to control (the BG does not just control the cortex), but I didn't have time to get into the brainstem within the scope of this article.

Comment author: gurugeorge 21 June 2015 10:37:23PM 0 points [-]

Great stuff, thanks! I'll dig into the article more.

Comment author: jacob_cannell 21 June 2015 04:37:05PM 3 points [-]

I thought the point of the modularity hypothesis is that the brain only approximates a universal learning machine and has to be gerrymandered and trained to do so?

I'm not sure what you mean by gerrymandered. I summarized the modularity hypothesis at the beginning to differentiate it from the ULM hypothesis. There are a huge range of views in this space, so I reduced them to exemplars of two important viewpoint clusters.

The specific key difference is the extent to which complex mental algorithms are learned vs innate.

If the brain were naturally a universal learner, then surely we wouldn't have to learn universal learning (e.g. we wouldn't have to learn to overcome cognitive biases, Bayesian reasoning wouldn't be a recent discovery, etc.)?

You certainly don't need to learn how to overcome cognitive biases in order to learn (this should be obvious). Knowledge of the brain's limitations could be useful, but probably only in the context of a high-level understanding of how the brain works.

In regard to Bayesian reasoning, the brain has a huge number of parallel systems and computations going on at once, many of which are implementing efficient approximate Bayesian inference.

Verbal Bayesian reasoning is just a subset of verbal mathematical reasoning -- mapping sentences to equations, solving, and mapping back to sentences. It's a specific complex ability that uses a number of brain regions. It's something you need to learn, for the same reasons you need to learn multiplication. The brain does tons of analog multiplications every second, but that doesn't mean you have an automatic innate ability to do verbal math -- you don't have an automatic innate ability to do much of anything.
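To make the contrast concrete, this is the kind of explicit calculation "verbal Bayesian reasoning" refers to (the numbers here are invented for illustration). The brain's parallel systems approximate this sort of update implicitly; carrying it out symbolically has to be learned, like long multiplication.

```python
# Explicit Bayes update: P(H|E) = P(E|H) * P(H) / P(E).
# The numbers are invented for illustration.
prior = 0.01           # P(H): base rate of the hypothesis
likelihood = 0.90      # P(E|H): chance of the evidence if H is true
false_positive = 0.05  # P(E|~H): chance of the evidence if H is false

p_evidence = likelihood * prior + false_positive * (1 - prior)  # P(E)
posterior = likelihood * prior / p_evidence                     # P(H|E)
print(round(posterior, 3))  # 0.154 - strong evidence, yet H is still unlikely
```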

The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.

One of the main points I make in the article is that universal learning machines are a very general thing that -- in simplest form -- can be specified in a small number of bits, just like a Turing machine. So it's a sort of obvious design for evolution to find.
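As a toy illustration of that compactness claim (a brute-force sketch, not a claim about the brain's actual algorithm), here's how few lines it takes to write a learner that is fully general over a small hypothesis space:

```python
from itertools import product

# Toy "universal" learner: brute-force search over a complete
# hypothesis space. The space is all 16 boolean functions of two
# inputs, encoded as truth tables; the generality comes from
# enumerating every hypothesis, not from task-specific machinery.
# Purely illustrative, nothing brain-like.

inputs = list(product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)

def consistent(table, data):
    return all(table[inputs.index(x)] == y for x, y in data)

# Partial observations of XOR: the learner never saw input (1,1).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1)]

survivors = [t for t in product([0, 1], repeat=4) if consistent(t, data)]
print(survivors)  # [(0, 1, 1, 0), (0, 1, 1, 1)] - only the unseen case differs
```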

Comment author: gurugeorge 21 June 2015 08:11:04PM *  1 point [-]

I'm not sure what you mean by gerrymandered.

What I meant is that you have sub-systems dedicated to (and originally evolved to perform) specific concrete tasks, and shifting coalitions of them (or rather shifting coalitions of their abstract core algorithms) are leveraged to work together to approximate a universal learning machine.

IOW any given specific subsystem (e.g. "recognizing a red spot in a patch of green") has some abstract algorithm at its core which is then drawn upon at need by an organizing principle which utilizes it (plus other algorithms drawn from other task-specific brain gadgets) for more universal learning tasks.

That was my sketchy understanding of how it works from evol psych and things like Dennett's books, Pinker, etc.

Furthermore, I thought the rationale of this explanation was that it's hard to see how a universal learning machine can get off the ground evolutionarily (it's going to be energetically expensive, not fast enough, etc.) whereas task-specific gadgets are easier to evolve ("need to know" principle), and it's easier to later get an approximation of a universal machine off the ground on the back of shifting coalitions of them.

Comment author: gurugeorge 21 June 2015 03:14:49PM *  2 points [-]

That's a lot to absorb, so I've only skimmed it; please forgive me if responses to the following are already implicit in what you've said.

I thought the point of the modularity hypothesis is that the brain only approximates a universal learning machine and has to be gerrymandered and trained to do so?

If the brain were naturally a universal learner, then surely we wouldn't have to learn universal learning (e.g. we wouldn't have to learn to overcome cognitive biases, Bayesian reasoning wouldn't be a recent discovery, etc.)? The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.

Comment author: gurugeorge 18 June 2015 07:46:38PM 0 points [-]

I think there's always been something misleading about the connection between knowledge and belief. In the sense that you're updating a model of the world, yes, "belief" is an ok way of describing what you're updating. But in the sense of "belief" as trust, that's misleading. Whether one trusts one's model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.

IOW, knowledge is not a modification of a psychological state, it's the actual, objective status of an "aperiodic crystal" (sequences of marks, sounds, etc) as filtered via public habits of use ("interpretation" in more of the mathematical sense) to be representational. IOW there are 3 components, the sequence of scratches, the way the sequence of scratches is used (usually involving interaction with the world, implicitly predicting the world will react a certain way conditional upon certain actions), and the way the world is. None of those involve belief.

So don't worry about belief. Take things lightly. Except on relatively rare mission-critical occasions, you don't need to know, and as Feynman, with typical wisdom, pointed out, it's ok not to know.

I'm familiar with that lurching from believing one thing is the greatest thing since sliced bread to believing the next is, but at some point you start to see that emotional roller-coaster as unnecessary.

So it's not gullibility, but lability (labileness?) that's the key. Like the old Zen master story "Is that so?":-

"The Zen master Hakuin was praised by his neighbours as one living a pure life. A beautiful Japanese girl whose parents owned a food store lived near him. Suddenly, without any warning, her parents discovered she was with child. This made her parents angry. She would not confess who the man was, but after much harassment at last named Hakuin. In great anger the parent went to the master. "Is that so?" was all he would say.

"After the child was born it was brought to Hakuin. By this time he had lost his reputation, which did not trouble him, but he took very good care of the child. He obtained milk from his neighbours and everything else he needed. A year later the girl-mother could stand it no longer. She told her parents the truth - the real father of the child was a young man who worked in the fishmarket. The mother and father of the girl at once went to Hakuin to ask forgiveness, to apologize at length, and to get the child back. Hakuin was willing. In yielding the child, all he said was: "Is that so?"

Comment author: gurugeorge 07 June 2015 12:09:56AM 1 point [-]

I remember reading a book many years ago which talked about the "hormonal bath" in the body being actually part of cognition, such that thinking of the brain/CNS as the functional unit is wrong (it's necessary but not sufficient).

This ties in with the philosophical position of Externalism (I'm very much into the Process Externalism of Riccardo Manzotti). The "thinking unit" is really the whole body - and ultimately the whole world (not quite in the Panpsychist sense, but rather in the sense that any individual instance of cognition is the peak of a pyramid whose roots go all the way through the whole).

I'm as intrigued and hopeful about the possibility of uploading, etc., as the next nerd, but this sort of stuff has always led me to be cautious about the prospects of it.

There may also be a lot more to be discovered about the brain and body, in the area of some connection between the fascia and the immune system (cf. the anecdotal connection between things like yoga and "internal" martial arts and health).

Comment author: Lumifer 29 May 2015 08:43:40PM 3 points [-]

Isn't suicide always an option?

Not if you're an upload.

Comment author: gurugeorge 30 May 2015 09:38:57PM 0 points [-]

Oh, true for the "uploaded prisoner" scenario; I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.

But even for the "uploaded prisoner", given sufficient time it would be possible - there's no absolute impermeability to information anywhere, is there? And where there's information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw the wires :) )

But that reminds me of the problem of trying to isolate an AI once built.

Comment author: gurugeorge 29 May 2015 08:25:23PM *  0 points [-]

Isn't suicide always an option? When it comes to imagining immortality, I'm like Han Solo, but limits are conceivable and boredom might become insurmountable.

The real question is whether intelligence has a ceiling at all - if not, then even millions of years wouldn't be a problem.

Charlie Brooker's Black Mirror TV show played with the punishment idea - a mind uploaded to a cube experiencing subjectively hundreds of years in a virtual kitchen with a virtual garden, as punishment for a murder (the murder was committed in the kitchen). In real time, the cube is just casually left on overnight by the "gaoler" for amusement. Hellish scenario.

(In another episode- or it might be the same one? - a version of the same kind of "punishment" - except just a featureless white space for a few years - is also used to "tame" a copy of a person's mind that's trained to be a boring virtual assistant for the person.)
