Comment author: Tim_Tyler 07 January 2009 04:59:14PM 1 point

General Optimizer, you seem like a good prospect for responding to this question: "in the interests of transparency, would anyone else like to share what they think their utility function is?"

In response to Changing Emotions
Comment author: Tim_Tyler 05 January 2009 06:52:10PM 1 point

Intelligent machines will not really be built "from scratch" because augmentation of human intelligence by machines makes use of all the same technology as is present in straight machine intelligence projects, plus a human brain. Those projects have the advantage of being competitive with humans out of the box - and they interact synergistically with traditional machine intelligence projects. For details see my intelligence augmentation video/essay.

The thing that doesn't make much sense is building directly on the human brain's wetware with more of the same. Such projects are typically banned at the moment - and face all kinds of technical problems anyway.

In response to Growing Up is Hard
Comment author: Tim_Tyler 04 January 2009 07:25:29PM 0 points

"It seems like a general property of an intelligent system that it can't know everything about how it would react to everything. That falls out of the halting theorem (and for that matter Gödel's first incompleteness theorem) fairly directly."

Er, no, it doesn't. The halting theorem limits what a single procedure can decide about arbitrary programs; it does not directly say anything about what a particular system can know about its own reactions.

In response to Growing Up is Hard
Comment author: Tim_Tyler 04 January 2009 02:27:16PM 0 points

Robin, it sounds as though you are thinking about the changes that could be made after brain digitalisation.

That seems like a pretty different topic to me. Once you have things in a digital medium, it is indeed much easier to make changes - even though you are still dealing with a nightmarish mess of hacked-together spaghetti code.

In response to Growing Up is Hard
Comment author: Tim_Tyler 04 January 2009 11:39:27AM 1 point

"This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn't the brain produce more acetylcholine already?"

There's considerable scope for the answer to this question being: "because of resource costs". Resource costs for nutrients today are radically different from those in the environment of our ancestors.

"We are not designed for our parts to be upgraded. Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are."

That's true - but things are not quite as bad as that makes it sound. Evolution favours properties like modularity and evolvability. Those pressures contribute to the modularity of our internal organs - and that helps explain why things like kidney transplants work. Evolution didn't plan for organ transplant operations - but it did arrange things in a modular fashion. Modularity has other benefits - and ease of upgrading and replacement is a side effect.

People probably broke in the ancestral environment too. Organisms are simply fragile, and most fail to survive and reproduce.

Another good popular book on the evolution of intelligence is "The Runaway Brain". I liked it, anyway. I also have time for Sue Blackmore's exposition on the topic, in "The Meme Machine".

"Hm... to get from a chimpanzee to a human... you enlarge the frontal cortex... so if we enlarge it even further..." The road to +Human is not that simple.

Well, we could do that. Caesarean sections, nutrients, drugs, brain growth factor gene therapy, synthetic skulls, brains-in-vats - and so on.

It would probably only add a year or so onto the human expiration date, but it might be worth doing anyway - since the longer humans remain competitive, the better the chances of a smooth transition. The main problem I see is the "yuck" factor - people don't like looking closely at that path.

Comment author: Tim_Tyler 03 January 2009 04:55:44PM 1 point

My guess is that it's a representation of my position on sexual selection and cultural evolution. I may still be banned from discussing this subject - and anyway, it seems off-topic on this thread, so I won't go into details.

If this hypothesis about the comment is correct, the main link that I can see would be: things that Eliezer and Tim disagree about.

Comment author: Tim_Tyler 02 January 2009 11:46:57PM 0 points

Well, that is so vague as to hardly be worth the trouble of responding to - but I will say that I do hope you were not thinking of referring me here.

However, I should perhaps add that I overstated my case. I did not literally mean "any sufficiently-powerful optimisation process" - only that such things are natural tendencies, which tend to be produced unless you actively wire terms into the utility function to prevent their manifestation.
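To make that distinction concrete, here is a minimal sketch (my own illustration, in Python; all the names and numbers are invented for the example, not any real agent design) of how an instrumental tendency - here, resource acquisition - pays off under a plain utility function unless a term is explicitly wired in against it:

    # A toy sketch of the claim above: instrumental tendencies (here,
    # hoarding resources) score well by default, and suppressing them
    # takes an explicit penalty term in the utility function.

    def base_utility(outcome: dict) -> float:
        # Task reward alone; held resources help the task, so a pure
        # optimiser of this term tends to acquire them without limit.
        return outcome["task_score"] + 0.1 * outcome["resources_held"]

    def patched_utility(outcome: dict, penalty: float = 1.0) -> float:
        # Same reward, with a term wired in so that acquiring more
        # than is needed becomes costly rather than merely unrewarded.
        excess = max(0.0, outcome["resources_held"] - outcome["resources_needed"])
        return base_utility(outcome) - penalty * excess

    outcome = {"task_score": 10.0, "resources_held": 50.0, "resources_needed": 5.0}
    print(base_utility(outcome))     # 15.0 - hoarding pays by default
    print(patched_utility(outcome))  # -30.0 - hoarding is now penalised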

In response to Dunbar's Function
Comment author: Tim_Tyler 31 December 2008 11:01:35AM 3 points

One of the primary principles of evolutionary psychology is that "Our modern skulls house a stone age mind"

Our minds are made by (essentially) stone-age genes, but they import up-to-date memes - and are a product of influences from both sources.

So: our minds are actually pretty radically different from stone-age minds - because they have downloaded and are running a very different set of brain-software routines. This influence of memes explains why modern society is so different from the societies present in the stone age.

Comment author: Tim_Tyler 30 December 2008 06:39:53PM 1 point

"And that, to this end, we would like to know what is or isn't a person - or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands."

So: define such a function - as is done by the world's legal systems. Of course, in a post-human era, it probably won't "carve nature at the joints" much better than the "how many hairs make a beard" function manages to.
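To make the quoted contract concrete, here is a minimal sketch (my own illustration, in Python; looks_like_pure_arithmetic is a hypothetical stand-in for whatever narrow class of computations could actually be certified):

    # A sketch of the one-sided guarantee described above: the predicate
    # may return 0 ("definite nonperson") only when it is certain, and
    # must return 1 for every actual person. A 1 therefore means "not
    # ruled out", never "confirmed person".

    def looks_like_pure_arithmetic(computation) -> bool:
        # Hypothetical placeholder for a provably person-free class of
        # computations, e.g. fixed-size numeric calculations.
        return getattr(computation, "kind", None) == "arithmetic"

    def nonperson_predicate(computation) -> int:
        if looks_like_pure_arithmetic(computation):
            return 0  # certified nonperson
        return 1      # cannot rule personhood out - refuse to certify

The useful property is that all the errors fall on the conservative side: anything the predicate cannot certify simply gets no certificate, including every actual person.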

Comment author: Tim_Tyler 29 December 2008 09:07:22PM 1 point

"And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions."

A possible problem here is that your demanding entry requirements may well allow others with lower standards to create a superintelligence before you do.

So: since you seem to think that would be pretty bad, and since you say you are a consequentialist - and believe in the greater good - you should probably act to stop them, e.g. by stepping up your own efforts to get there first, bringing the target nearer to you.
