We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things.

Implication being that we're wasting our time?

Hope not, since debating politics is also a way to learn about and understand politics. National or international politics are the equivalent of, say, the weather: something we experience and can't affect, but which we surely want to understand.

When your knowledge is incomplete - meaning that the world will seem to you to have an element of randomness - randomizing your actions doesn't solve the problem

Ants don't agree. Take away their food. They'll go into random-search mode.
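
That fallback is easy to sketch. A toy of mine, assuming a grid world and hypothetical has_food_signal / follow_gradient helpers, not any real ant model:

```python
import random

def forage(position, has_food_signal, follow_gradient, steps=1000):
    """Toy ant: exploit the food/pheromone gradient while it exists;
    otherwise fall back to an undirected random walk."""
    for _ in range(steps):
        if has_food_signal(position):
            position = follow_gradient(position)  # knowledge available: exploit it
        else:
            # Knowledge incomplete: random moves still cover the space,
            # which beats standing still or repeating a stale policy.
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            position = (position[0] + dx, position[1] + dy)
    return position
```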

As far as that experiment is concerned, it seems that AnneC hits the point: How was it framed? Were the subjects led to believe that they were searching for a pattern? Or were they told the pattern? Wild guess: the former.

@James: If we want a robot that can navigate mazes, we could put some known pathfinding/search algorithms into it. Or we could put a neural network in it and run it through thousands of trials with slowly increasing levels of difficulty.
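
For concreteness, the "known algorithm" route might look like this minimal breadth-first-search sketch over a grid maze (my example, not anything from the post):

```python
from collections import deque

def bfs_path(maze, start, goal):
    """Breadth-first search over a grid maze (0 = open, 1 = wall).
    Returns the shortest path from start to goal, or None."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([start])
    came_from = {start: None}  # maps each visited cell to its predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk predecessors back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(bfs_path(maze, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```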

That evokes some loopy thinking. To wit:

It has always seemed that AI programs, striving for intelligence, can have their intelligence measured by how easy it is to get them to do something. E.g., it's easier to simply run that neural net through a bunch of trials than to painstakingly engineer an algorithm for a particular search problem.

So, does that mean that the definition of "intelligence" is: "the ease with which I can get the intelligent being to do my bidding, multiplied by the effect of its actions"?

Or is that a definition of "intelligence we want"? And the definition of "intelligence" is: "the ability to create 'intelligence we want' and avoid 'intelligence we don't want'"?
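
Half-seriously formalized (my notation, nothing from the thread), with "ease of tasking" taken as the inverse of the effort required:

\[
\mathrm{intelligence}(A) \;\propto\; \frac{|\text{effect of } A\text{'s actions}|}{\text{effort required to direct } A}
\]

An agent that is trivial to task and produces large effects scores high; one that takes painstaking engineering for a modest effect scores low.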

Nice calculations!

But don't these calculations establish a lower bound, rather than an upper bound, on how complex or adaptive genetic evolution is?
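
For scale, a worked toy version of that style of bound (my placeholder numbers, assuming purely for illustration the oft-cited limit of about one bit of adaptive information fixed per generation; nothing here is from the post):

```python
# Illustrative only: placeholder figures, not the post's numbers.
bits_per_generation = 1           # assumed speed limit on selection
generations = 2 * 10**8           # assumed generation count
total_bits = bits_per_generation * generations
print(f"~{total_bits / 8 / 1e6:.0f} MB of accumulated adaptive information")
# A figure like this bounds only the mechanism being modeled; anything the
# model leaves out could add to it, hence a lower bound, not an upper one.
```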

It would seem that using the same approach on a nervous system would lead one to calculate the adaptiveness of a dendrite, or less. Uh, what part of nervous-system operation seems as comfortably "understood" as AGTC operations? Whatever part that is would, in a fair comparison, be what gets compared to the mechanism these calculations describe. Yes?

Anyway, isn't it premature to assert, "Natural selection, though not simple, is simpler than a human brain", given the current understanding of either?

And, please, let's not go too far along the road of "Look how smart we are! Evolution didn't produce diddly, while in only four hundred years we have produced millions of My Little Pony dolls." Evolution produced cow pies, which we are still struggling with, after all. :)

Speculation about what nervous systems and genetic evolution have in common sure seems like fertile ground, though. It would be interesting to know, for instance, what's both necessary and sufficient to describe both.

Is caching the best mental model of how these jillions of "100 Hz processors" operate?

An alternative: lossy compression, with expression as the decompression step. Rather like, for instance, how DNA information is expressed during an individual's life. (And, one cannot help but suspect, at a much larger scale than that of individual lives.)

A reason to prefer "lossy compression" over "caching": "caching" suggests that the information is stored without loss, and one then goes looking for where the uncompressed bits could be kept.

But, I'll admit I've failed to put together the pieces of a general intelligence machine using a lossy compression model. So maybe it's a bogus model, too.
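
Still, the contrast is easy to make concrete (a toy sketch of mine, not anyone's model of cortex): a cache keeps the stored bits exactly, while a lossy store discards detail at write time, so there are no uncompressed bits to go hunting for.

```python
def cache_store(memory, key, value):
    """Caching model: the bits are kept exactly; retrieval is lossless."""
    memory[key] = value

def lossy_store(memory, key, value, levels=16):
    """Lossy-compression model: only a coarse summary survives storage,
    so retrieval reconstructs an approximation, never the original."""
    memory[key] = round(value * levels) / levels  # quantize to `levels` bins

memory_a, memory_b = {}, {}
cache_store(memory_a, "x", 0.123456)
lossy_store(memory_b, "x", 0.123456)
print(memory_a["x"])  # 0.123456 -- exact bits preserved
print(memory_b["x"])  # 0.125    -- detail discarded at write time
```

The point of the toy: under the compression model the loss happens at write time, which dissolves the question of where the full-resolution bits went.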

I never went to school. Bill Bullard seems to assume that without the indoctrinating influence of school, we'd be prissy self-effacing socialists. He's wrong, because I'm an individualist and I think his first two points are garbage.