Comment author: Will_Pearson 13 December 2008 03:29:00PM 0 points

Morality does not compress; it's not something you can learn just by looking at the (nonhuman) environment or by doing logic; if you want to get all the details correct, you have to look at human brains.

Why? Why can't you rewrite this as "complexity and morality"?

You may talk about the difference between mathematical and moral insights, which is real, but then mathematical insights aren't sufficient for intelligence either. Maths doesn't tell you whether a snake is venomous and will kill you or not...

In response to You Only Live Twice
Comment author: Will_Pearson 12 December 2008 10:28:34PM 1 point

The number of people living today because their ancestors invested their money in themselves, their status, and their children: all of us.

The number of people living today because they or someone else invested their money in cryonics or some other scheme to live forever: 0.

Not saying that things won't change in the future, but there is a tremendously strong bias to spend your resources on ambulatory people and new people, because that has been what has worked previously.

Women might have stronger instincts in this respect as they have been more strongly selected for the ability to care for their children (unlike men).

If you want to change this state of affairs swiftly, at least, you have to tap into our common psyche as successful replicators and have it pass the "useful for fitness" test. This could be as easy as making it fashionable or a symbol of high status: get Obama to sign up publicly and I think you would see a lot more interest.

High status has been something sought after because it gets you better mates and more of them (perhaps illicitly).

Comment author: Will_Pearson 11 December 2008 10:42:22PM -1 points

Will, your example, good or bad, is universal over singletons, nonsingletons, any way of doing things anywhere.

My point was not that non-singletons can see it coming. But if one non-singleton tries self-modification in a certain way and it doesn't work out, other non-singletons can learn from the mistake (or, in the worst case, evolutionarily: the descendants of people curious in a certain way get outcompeted by those who instinctively didn't try the dangerous activity). Less so with the physics experiments, depending on the dispersal of the non-singletons and the range of the physical destruction.

Comment author: Will_Pearson 11 December 2008 10:08:13PM 2 points

There are some types of knowledge that seem hard to come by, especially for singletons: knowing what destroys you. As all knowledge is just an imperfect map, there are some things you need to know a priori to avoid. The archetypal example is the in-built fear of snakes in humans and other primates. If we hadn't had this while it mattered, we would have experimented with snakes the same way we experiment with stones, twigs, etc., and generally gotten ourselves killed. In a social system you can see what destroys other things like you, but the knowledge of what can kill you is still hard won.

If you don't have this type of knowledge you may step into an unsafe region, and it doesn't matter how much processing power you have or how correctly you use your previous data. Examples that might threaten singletons:

1) Physics experiments: the model says you should be okay, but you don't trust your model under these circumstances, which is the reason to do the experiment.

2) Self-change: your model says the change will be better, but the model is wrong. It disables the system into a state it can't recover from, i.e. not an obvious error but something that renders it ineffectual.

3) Physical self-change: large-scale unexpected effects from feedback loops at different levels of analysis, e.g. things like the swinging/vibrating bridge problem, but deadly.

Comment author: Will_Pearson 09 December 2008 04:18:43PM 4 points

No diminishing returns on complexity in the region of the transition to human intelligence: "We're so similar to chimps in brain design, and yet so much more powerful; the upward slope must be really steep."

Or *there is no curve* and it is a random landscape with software being very important...

Scalability of hardware: "Humans have only four times the brain volume of chimps - now imagine an AI suddenly acquiring a thousand times as much power."

Bottlenose dolphins have twice the brain volume of other dolphins (comparable to our own brain volume), yet aren't massively more powerful than them. Asian elephants have five times the weight...

Comment author: Will_Pearson 09 December 2008 01:00:49AM 0 points

I personally find the comparison between spike frequency and clock speed unconvincing. It glosses over all sorts of questions, such as whether the system can keep all the working memory it needs in the 2MB or whatever of processor cache it has. Neurons have the advantage of having local memory: no need for the round trip off-chip.

We also have no real idea how neurons work; there has been recent work on the role of DNA methylation in memory, for instance. Perhaps it would be better to view neural firing as communication between mini-computers, rather than as processing in itself.
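To make the scale of the comparison concrete, here is a back-of-envelope sketch; the neuron count, synapse count, and firing rate below are rough assumed figures for illustration, not measurements:

```python
# Rough aggregate "event" throughput of a brain vs. a single serial core.
# All figures are illustrative assumptions.
NEURONS = 1e11              # assumed neuron count
SYNAPSES_PER_NEURON = 1e4   # assumed average synapses per neuron
SPIKE_RATE_HZ = 10          # assumed mean firing rate

brain_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * SPIKE_RATE_HZ
cpu_ops_per_sec = 1e9       # assumed ~1 GHz core, one op per cycle

print(brain_events_per_sec)                    # 1e+16
print(brain_events_per_sec / cpu_ops_per_sec)  # 10000000.0, i.e. ~1e7
```

Even granting these numbers, a raw throughput ratio says nothing about whether the working set fits in cache, which is exactly the part such comparisons gloss over.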

I'm also unimpressed with large numbers: 10^15 operations per second is not enough to process the positions of the atoms in 1 gram of hydrogen; at one op per atom, a single pass over its ~6×10^23 atoms would take roughly 19 years. So much for planning to atomically rearrange our world into its optimal form. Sure, it is far more than we can consciously do, and quite possibly a lot more than we can do unconsciously as well. But it is not mind-bogglingly huge compared to the real world.
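A quick check of that arithmetic, assuming Avogadro's number of atoms in one gram of hydrogen (molar mass ~1 g/mol) and one operation per atom:

```python
# How long does a 1e15 ops/sec machine take to touch every atom
# in one gram of hydrogen once, at one operation per atom?
AVOGADRO = 6.022e23        # atoms per mole; ~1 g of hydrogen is ~1 mole
OPS_PER_SEC = 1e15
SECONDS_PER_YEAR = 3.156e7

seconds = AVOGADRO / OPS_PER_SEC   # ~6.0e8 seconds
years = seconds / SECONDS_PER_YEAR
print(years)                       # ~19 years for a single pass
```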

Comment author: Will_Pearson 07 December 2008 08:56:18PM -1 points

The universe doesn't have to be kind and make all problems amenable to insight....

There are only a certain number of short programs, and once a program gets above a certain length it is almost certainly hard to compress (I can't remember the reference for this, so it may be wrong; can anyone help?). We can of course reorder things, but then we have to make things that are currently simple complex.
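The claim matches the standard counting argument from Kolmogorov complexity (possibly the half-remembered reference): there are only 2^n − 1 binary descriptions shorter than n bits, so most n-bit strings cannot be compressed at all. A minimal sketch:

```python
# Counting argument: there are 2**n strings of length n, but only
# 2**n - 1 binary descriptions of length < n, so compression cannot
# work for every string -- and shaving off c bits can work for fewer
# than a 2**-c fraction of them.
def short_descriptions(n):
    """Number of binary strings strictly shorter than n bits."""
    return sum(2 ** k for k in range(n))  # = 2**n - 1

n = 20
print(short_descriptions(n), 2 ** n)  # 1048575 1048576
```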

That said, I do think insight will play some small part in the development of AI, but that there may well be a hell of a lot of parameter tweaking whose workings we don't understand and whose settings we can't explain.

Comment author: Will_Pearson 06 December 2008 11:00:49AM 1 point

You are right about smaller being faster and local being more capable of reacting. But Eliezer's arguments are predicated on there being a type of AI that can change itself without deviating from its purpose. So an AI that splits itself in two may diverge in capability, but should share the same purpose.

Whether such an AI is possible or would be effective in the world is another matter.

Comment author: Will_Pearson 05 December 2008 11:54:56PM 0 points

We also suppose that the technology feeding Moore's Law has not yet hit physical limits. And that, as human brains are already highly parallel, we can speed them up even if Moore's Law is manifesting in increased parallelism instead of faster serial speeds - we suppose the uploads aren't yet being run on a fully parallelized machine, and so their actual serial speed goes up with Moore's Law. Etcetera.

In its canonical form, Moore's Law says nothing about speed; it is a statement about transistor counts. You should probably define exactly which variant you are using.
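For reference, the canonical form is a claim about component counts doubling over a fixed period; a toy model, with the commonly quoted two-year doubling period assumed:

```python
# Toy model of canonical Moore's Law: transistor counts, not clock
# speeds, double every fixed period (commonly quoted as two years).
def transistor_count(initial, years, doubling_period=2.0):
    return initial * 2 ** (years / doubling_period)

print(transistor_count(1000, 10))  # 32000.0 -- counts grow; speed is unaddressed
```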

Comment author: Will_Pearson 05 December 2008 07:02:49AM 0 points

Consider the following: chimpanzees make tools. The first hominid tools were simple chipped stone from 2.5 million years ago. Nothing changed for a million years. Then Homo erectus came along with Acheulian tech; nothing happened for another million years. Then around two hundred thousand years ago H. sapiens appeared and tool use really diversified. The brains had been swelling from 3 million years ago.

If brains were getting more generally intelligent as they increased in size over that period, it doesn't show. They may just have been getting better at wooing women and looking attractive to men.

This info has been cribbed from The Red Queen, page 313 of the hardback edition.

I would say this shows a discontinuous improvement in intelligence, where intelligence is defined as the ability to *generally* hit a small target in a search space about the world, rather than the ability to get into another hominid's pants.
