Comment author: JoshuaZ 22 November 2012 01:03:36AM 0 points [-]

So what do you expect to be the signs of arbitrary general reasoning? Humans run out of memory eventually. If a dolphin or a chimp can do arbitrary reasoning but lacks the capacity to hold long chains of it in mind, what would you expect to see? I'm still not sure what actual testable distinction would occur in these cases, although insofar as I can think of anything that might arguably be evidence, it looks like dolphins pass, as you can see in the article already linked to in this thread.

Comment author: DuncanS 25 November 2012 10:00:15PM 2 points [-]

Let's think about the computer that you're using to look at this website. It's able to do general purpose logic, which is in some ways quite a trivial thing to learn. It's really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.

As I'm sure you know, there's a distinction between computing systems which, given enough memory, can simulate any other computing system, and computing systems which can't. Critical to the former is the ability to form a stored program of some description, read it back, and execute it. Computers that can do this can emulate any other computer (albeit in a speed-challenged way in some cases).
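To make the stored-program idea concrete, here's a toy machine (a hypothetical sketch invented for illustration, not any real architecture): it keeps its program in ordinary memory, reads instructions back, and executes them in a loop. Memory, conditional jumps, and that read-back loop are all it takes, in principle, to emulate anything.

```python
# A toy stored-program machine. The program sits in memory, and the
# machine's whole job is to read it back and execute it, one
# instruction at a time. Instruction set (invented for this sketch):
#   ("set", reg, value)   store a constant in a register
#   ("add", dst, a, b)    dst = a + b
#   ("jnz", reg, addr)    jump to addr if reg is nonzero
#   ("halt",)             stop and return the registers
def run(program, regs=None):
    regs = dict(regs or {})
    pc = 0  # program counter: where in its own memory the machine reads next
    while True:
        op = program[pc]
        if op[0] == "set":
            regs[op[1]] = op[2]; pc += 1
        elif op[0] == "add":
            regs[op[1]] = regs[op[2]] + regs[op[3]]; pc += 1
        elif op[0] == "jnz":  # conditional jump: this is what makes loops possible
            pc = op[2] if regs[op[1]] != 0 else pc + 1
        elif op[0] == "halt":
            return regs

# Multiply 6 * 7 by repeated addition. The "hardware" has no multiply
# instruction; the stored program builds one out of add and jnz.
prog = [
    ("set", "acc", 0), ("set", "n", 6), ("set", "neg1", -1),
    ("add", "acc", "acc", "seven"), ("add", "n", "n", "neg1"),
    ("jnz", "n", 3), ("halt",),
]
final = run(prog, {"seven": 7})  # final["acc"] == 42
```

Nothing here is specialised for multiplication, but because the machine can read its own stored program back and loop, it can be taught to do it - or anything else, memory permitting.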

Chimps and dolphins are undoubtedly smart, but for some reason they aren't crossing the threshold to generality. Their minds can represent many things, but not (apparently) the full gamut of what we can do. You won't find any chimps or dolphins discussing philosophy or computer science. My point actually is that humans went from making only relatively simple stone tools to discussing philosophy in an evolutionary eye-blink - there isn't THAT much of a difference between the two states.

My observation is that when we think, we introspect. We think about our thinking. This allows thought to connect to thought, and form patterns. If you can do THAT, then you are able to form the matrix of thought that leads to being able to think about the kinds of things we discuss here.

This only can happen if you have a sufficiently strong introspective sense. If you haven't got that, your thoughts remain dominated by the concrete world driven by your other senses.

Can I turn this on its head? A chimp has WAY more processing power than any supercomputer ever built, including the Watson machine that trounced various humans at Jeopardy. The puzzle is why they can't think about philosophy, not why we can. Our much-vaunted generality is pretty borderline at times - humans are truly BAD at being rational, and incredibly slow at reasoning. Why is such a powerful piece of hardware as us so utterly incompetent at something so simple?

The reason, I believe, is that our brains are largely evolved to do something else. Our purpose is to sense the world, and rapidly come up with some appropriate response. We are vastly parallel machines which do pattern recognition and ultra-fast response, based on inherently slow switches. Introspection appears largely irrelevant to this. We probably evolved it only as a means of predicting what other humans and creatures would do, and only incidentally did it turn into a means of thinking about thinking.

What is the actual testable distinction? Hard to say, but once you gain the ability to reason independently from the senses, the ability to think about numbers - big numbers - is not that far away.

Something like the ability to grasp that there is no largest number is probably the threshold - the logic is simple, but it requires you to think of a number separately from the real world. It's hard to know how to show whether dolphins know this or not, I appreciate that. I think it's essentially proven that dolphins are smart enough to understand the logical relationships between the pieces of this proof, as the relationships are simple, and they can grasp things of that complexity when they are driven by the external world. But perhaps they can't see their internal world well enough to pull 'number' as an idea out from 'two' and 'three' (ideas that dolphins are surely able to get), and then finish the puzzle.

Perhaps it's not chains that are the issue, but the ability to abstract clear of the outside world and carry on going.

Comment author: JoshuaZ 22 November 2012 12:33:26AM 0 points [-]

The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought.

If dolphins or chimps did or did not have chains of thought, how would we be able to tell the difference?

Comment author: DuncanS 22 November 2012 12:56:33AM *  -2 points [-]

Because of what you can do with a train of thought.

"That mammoth is very dangerous, but would be tasty if I killed it."

"I could kill it if I had the right weapon"

"What kind of weapon would work?"

As against.... "That mammoth is very dangerous - run!"

Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.

If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.

Comment author: DuncanS 22 November 2012 12:25:18AM *  3 points [-]

What is the essential difference between human and animal intelligence? I don't actually think it's just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that's the fastest approach, and speed is important for many things. Human brains are still mostly once-through.

But we humans have one extra trick, which is to do with self-awareness. We can to an extent sense the output of our brains, and that output then becomes new input. This in turn leads to new output which can become input again. This apparently simple capability - forming a loop - is all that's needed to form a Turing-complete machine out of the specialized animal brain.
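The once-through versus loop distinction can be caricatured in a few lines (a deliberately crude sketch, not a model of real cognition): both cases have the same fixed "one step of thinking" function, but only the looped version feeds its own output back in as new input.

```python
def step(n):
    # one fixed unit of "thought": make a single move toward the answer
    return n - 1 if n > 0 else n

def once_through(n):
    # a once-through brain: stimulus in, one response out, done
    return step(n)

def with_feedback(n):
    # sensing your own output: keep re-submitting the result as new
    # input until the thought stabilises
    while step(n) != n:
        n = step(n)
    return n
```

`once_through(5)` gets one step of the way; `with_feedback(5)` runs the same machinery to completion. Same component - the only added ingredient is the loop.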

Without such a loop, an animal may know many things, but it will not know that it knows them. Because it isn't able to explicitly sense what it was just thinking about, it can't start off a new thought based on the contents of the previous one.

The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought. And that small difference makes all the difference in the world.

Comment author: DuncanS 22 November 2012 12:06:06AM 3 points [-]

Evolution, as an algorithm, is very much better as an optimizer of an existing design than it is as a creator of a new design. Optimizing the size of the brain of a creature is, for evolution, an easy problem. Making a better, more efficient brain is a much harder problem, and happens slowly, comparatively speaking.

The optimization problem is essentially a kind of budgeting problem. If I have a fixed budget of calories per day, I can spend it on X kilos of muscle or Y grams of brain tissue. Both cost me the same number of calories, and each brings its own advantages. Since evolution is good at this kind of problem, we can expect that it will correctly find the tradeoff point - the point where the marginal gain from additional expenditure on ANY organ in the body is exactly the same.

Putting it differently, a cow design could trade larger brain for smaller muscles, or larger muscles for smaller brain. The actual cow is found at the point where those tradeoffs are pretty much balanced.
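That balance point can be illustrated numerically (the benefit curves below are invented for the sketch; nothing about real cows is being computed). Spending a fixed calorie budget greedily, one unit at a time, on whichever organ currently offers the biggest marginal gain ends up where the marginal gains are nearly equal - the tradeoff point described above.

```python
import math

def allocate(budget, benefit_fns, step=1.0):
    """Greedily spend `budget` across organs with diminishing returns."""
    spend = {organ: 0.0 for organ in benefit_fns}
    def marginal(organ):
        f = benefit_fns[organ]
        return f(spend[organ] + step) - f(spend[organ])
    for _ in range(int(budget / step)):
        best = max(spend, key=marginal)  # best marginal calorie right now
        spend[best] += step
    return spend

# Assumed (made-up) diminishing-returns benefit curves.
benefit = {
    "brain":  lambda x: 10 * math.log1p(x),
    "muscle": lambda x: 14 * math.log1p(x),
}
split = allocate(1000, benefit)
# At the end, d(benefit)/d(calorie) is almost identical for both organs.
```

Because both curves are concave, the greedy rule finds the optimum, and the stopping condition is exactly the equal-marginal-gain point: spending one more calorie on either organ would buy (almost) the same benefit.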

A whale has a large brain, but it's quite small in comparison to the whale as a whole. If a whale were to double the size of its brain, it wouldn't make a huge dent in the overall calorie budget. However, evolution's balancing of the whale body suggests that it wouldn't be worth it: making the whale's brain that much bigger wouldn't make the whale sufficiently better to pay for the extra cost.

Where this argument leads is to turn the conventional wisdom on its head. People say that modern animals are smarter because their brains are bigger. However, the argument that evolution balances the sizes of body structures efficiently and quickly leads to the opposite conclusion: modern brains are bigger because they are better. Because evolution has managed to create better brains, it has become more worthwhile to make them bigger - when brains are better, adding more brain gives a bigger benefit, so the tradeoff point moves towards larger brain sizes.

Dinosaur brains were very much smaller, on the whole, than the brains of similar animals today. We can infer from this argument that this was because their brains were less effective, which in turn lowered any advantage that might have been gained from making them larger. Consequently, dinosaurs must have been even more stupid than the small size of their brains suggests.

Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger. Speculative, perhaps. But on the larger scale, looking at the sweeping increase in brain sizes across the whole of the geological record, the qualitative improvement in brains has to be seen in the gradual increase in size.

Comment author: DuncanS 24 October 2012 10:33:17PM 0 points [-]

I think the interesting question is why we care for our future selves at all.

As kids, we tend not to. It's almost standard that when a child has a holiday, with a bit of homework to do during it, they will decide at the beginning of the break not to do the work. The reason is that they care about their current selves, and not about their future self. Of course in due time the future becomes the present, and that same child has to spend the entire end of their holiday working furiously on everything that's been left to the last minute. At that point, they wish that their past self had chosen an alternative plan. This is still not really wisdom, as they don't much care about their past self either - they care about their present self, who now has to do the homework.

Summarising - if your utility function changes over time, then you will, as you mentioned, have conflict between your current and future self. This prevents your plans for the future from being stable - a plan that maximises utility when considered at one point no longer maximises it when considered again later. You cannot plan properly - and this undermines the very point of planning. (You may plan to diet tomorrow, but when tomorrow comes, dieting no longer seems the right answer....)

I think this is why the long view becomes the rational view - if you weight future benefits equally to your present ones, assuming (as you should) that your reward function is stable, then a plan you make now will still be valid in the future.

In fact the mathematical form that works is any kind of exponential - it's OK for the past to be more important than the future, or the future more important than the past, as long as the weighting is an exponential function of time. Then as you pass through time, the absolute sizes of the allocated rewards change, but their relative sizes remain the same, and planning is stable. In practice an exponential rise pushes all the importance of reward far out into the indefinite future, and is useless for planning. An exponential decay pushes all the important rewards into your past; since you can't actually change the past, this is almost workable. But its effect is that you plan to maximise your immediate reward at the expense of the future, and since, when you reach that future, you don't actually think it was worthwhile for your past self to enjoy those benefits at your present self's expense, this doesn't produce coherent plans either.
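The stability claim is easy to check numerically (the reward sizes and rates below are made up for illustration): with exponential discounting the choice between a small-soon reward and a large-late reward never flips as time passes, while with a non-exponential discount (here the standard hyperbolic form) it does.

```python
def exp_discount(value, delay, rate=0.9):
    # exponential weighting: each day of delay multiplies value by `rate`
    return value * rate ** delay

def hyp_discount(value, delay, k=1.0):
    # hyperbolic weighting: a common non-exponential alternative
    return value / (1 + k * delay)

small, t_small = 50, 2   # small reward arriving on day 2
large, t_large = 90, 4   # larger reward arriving on day 4

def prefers_large(discount, now):
    return discount(large, t_large - now) > discount(small, t_small - now)

# Evaluate the same choice from day 0, 1 and 2.
exp_choices = [prefers_large(exp_discount, now) for now in range(3)]
hyp_choices = [prefers_large(hyp_discount, now) for now in range(3)]
# exp_choices is [True, True, True]: the plan made on day 0 survives.
# hyp_choices is [True, False, False]: the plan flips as day 2 nears.
```

The reason is visible in the algebra: the ratio of the two exponentially discounted values is rate^(t_large - t_small) times the ratio of the raw values, which doesn't depend on `now` at all, so the ordering can never reverse.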

That leaves the flat case. But this is a learned fact, not an instinctive one.

Comment author: chaosmosis 18 October 2012 09:05:48AM 2 points [-]

More generally, Eliezer takes an explicit and very detail oriented approach to some aspects of some topics, but takes a very informal and relaxed approach to other topics. It seems to me that he pays attention to detail more when he agrees with the arguments he is outlining. This has the effect of making the arguments he opposes seem artificially weaker.

This isn't directly relevant to the chocolate cake issue, but your comment reminded me of this.

Comment author: DuncanS 24 October 2012 09:38:08PM 1 point [-]

I agree - I think this is because Eliezer's intent is to explain what he believes to be right, rather than to worry too much about the arguments of those he doesn't agree with. An approach I entirely agree with - my experience is that debate is remarkably ineffective as a means of reaching new knowledge, whilst teaching the particular viewpoint you hold is normally much more enlightening to the listener, whether they agree with the viewpoint or not.

Comment author: Eliezer_Yudkowsky 10 October 2012 05:56:43AM 5 points [-]

Koan 3:

Does the idea that everything is made of causes and effects meaningfully constrain experience? Can you coherently say how reality might look, if our universe did not have the kind of structure that appears in a causal model?

Comment author: DuncanS 16 October 2012 10:59:22PM *  1 point [-]

I think it is a mistake to tie the question of what reality is to the particulars of the physics of our actual universe. These questions are about what it is to have an external reality, and the answers to them should be the same whether the question is asked by us in our current universe, or by some other hapless inhabitants of a universe bearing a distinct resemblance to Minecraft.

I can imagine types of existence which don't include cause and effect - geometrical patterns are an example: there are relationships, but they are purely spatial rather than causal. I can imagine living in a universe where part of its structure consisted of such purely spatial relationships, and not of cause and effect.

Comment author: DuncanS 16 October 2012 10:33:50PM *  0 points [-]

It's meaningful and false, rather than meaningless, to say that on March 22nd, 2003, the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake. This statement's truth or falsity has no consequences we'll ever be able to test experimentally. Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd 2003, but they weren't.

I actually think this is a confusing statement. From a thermodynamic perspective, it's not impossible that the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake on that day. It's very, very, extremely unlikely, but not actually completely impossible.

The extreme unlikelihood (roughly equal to me temporarily becoming a chocolate cake myself) is such that we are justified, in terms of the approximation that is plain English, in saying that it is impossible that such a thing occurred, and that it is just wrong to claim that it happened. But this is using the usual rule of thumb that absolute truth and falsity isn't something we can actually have, so we happily settle for saying something is true or false when we're merely extremely sure rather than in possession of absolute proof.

It's quite OK in that context to claim that it's false to say the chocolate cake appeared, as the claimant has no good reason to make the claim, and saying the claim is false points out the lack of that reason. The bit I don't agree with is your final sentence.

Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd 2003, but they weren't.

Here's where it gets confusing. If you are speaking in colloquial English, it's true to say that it's impossible that a chocolate cake could appear in the middle of the Sun, and therefore it didn't happen. If you're speaking more scientifically, it's instead true to say that it's possible that the atoms in the Sun's core could spontaneously form a chocolate cake, but the likelihood is of the order of 10^10^23 (or something like that) against, which clearly is sufficiently close to impossible for us to say informally that it didn't happen. As the sentence stands, you end up making a claim of knowledge which you don't have - that it was possible that a certain state of affairs could occur in the Sun, but that you know somehow that it didn't.

Comment author: William_Quixote 16 October 2012 04:24:00PM 4 points [-]

There will always be multiple centers of power

is not a well-supported statement. Two main trends argue against it:

1. With technological gaps, the number of centers is the number of agents with the tech.

In lots of colonies there were many centers of power; then colonists with guns landed, and there was one center. Post-WW2 the US had the bomb and strategic superiority; the Soviets matched the tech, and there were two power centers. When Britain ruled the seas there was one major naval power. In many developing nations during the Cold War there were two power centers, corresponding to US- and Soviet-supplied arms.

It's not crazy to think of AI, or whatever technology it can invent, as another instance of tech superiority reducing the number of power centers.

Comment author: DuncanS 16 October 2012 07:32:47PM 1 point [-]

Tech also seems quite vulnerable to monocultures. Think of file formats, for example. In the early days there are often several formats, but after a while most of them go extinct and the survivors end up being universally used. Image display formats, for example, fall largely into two categories - formats that every computer knows how to display, and formats that hardly anybody uses at all. (Image editing formats are different, I know.) How many word processors have you used recently that can't support .doc format?

The most likely scenario is that there will be only one center of intelligence, and that although the intelligence isn't really there yet, the center is. You're using it now.

Comment author: staticIP 13 October 2012 12:27:13PM 0 points [-]

In this case my judgement of the probabilities is that we are completely justified in ignoring the threat.

Do you consider my Pascal's mugging to be less likely than the general examples of the genre, or do you think that for all Pascal's muggings "we are completely justified in ignoring the threat"?

Comment author: DuncanS 13 October 2012 09:52:36PM 0 points [-]

It surely depends on one's estimate of the numbers. It seems worthwhile doing something about possible asteroid impacts, for example.
