timtyler comments on Advice for AI makers - Less Wrong

7 Post author: Stuart_Armstrong 14 January 2010 11:32AM


Comment author: timtyler 17 January 2010 01:45:51AM -2 points

Yay, that really helped!

Roko and I don't see eye to eye on this issue. From my POV, we have had 50 years of unsuccessful attempts. That is not exactly "getting it right the first time".

Google was not the first search engine, Microsoft was not the first OS maker - and Diffie and Hellman didn't invent public-key crypto.

Being first does not necessarily make players uncatchable - and there's a selection process at work in the meantime that weeds out certain classes of failures.

From my perspective, this is mainly an SIAI confusion. Because their funding is all oriented around the prospect of saving the world from imminent danger, executing their mission apparently involves exaggerating the associated risks - which has the effect of stimulating funding from those they convince that DOOM is imminent, and that the SIAI can help avert it.

Humans will most likely get the machines they want - because people will build them to sell them - and because people won't buy bad machines.

Comment deleted 17 January 2010 02:01:17AM
Comment author: timtyler 17 January 2010 10:20:00AM 2 points

The other thing to say is that there's an important sense in which most modern creatures don't value anything - except for their genetic heritage - which all living things necessarily value.

Contrast with a gold-atom maximiser. That values collections of pure gold atoms. It cares about something besides the survival of its genes (which obviously it also cares about - no genes, no gold). It strives to leave something of value behind.

Most modern organisms don't leave anything behind - except for things that are inherited - genes and memes. Nothing that they expect to last for long, anyway. They keep dissipating energy gradients until everything is obliterated in high-entropy soup.

Those values are not very difficult to preserve - they are the default state.

If ecosystems cared about creating some sort of low-entropy state somewhere, then that property would take some effort to preserve (since it is vulnerable to invasion by creatures who use that low-entropy state as fuel). However, with the current situation, there aren't really any values to preserve - except for those of the replicators concerned.

The idea has been called variously: goal system zero, god's utility function, Shiva's values.

Even the individual replicators aren't really valued in themselves - except by themselves. There's a parliament of genes, and any gene is expendable, on a majority vote. Genes are only potentially immortal. Over time, the representation of the original genes drops. Modern refactoring techniques will mean it will drop faster. There is not really a floor to the process - eventually, all may go.

Comment author: timtyler 17 January 2010 09:56:32AM 1 point

I figure a fair amount of modern heritable information (such as morals) will not be lost. Civilization seems to be getting better at keeping and passing on records. You pretty much have to hypothesize a breakdown of civilization for much of genuine value to be lost - an unprecedented and unlikely phenomenon.

However, I expect increasing amounts of it to be preserved mostly in history books and museums as time passes. Over time, that will probably include most DNA-based creatures - including humans.

Evolution is rather like a rope. Just as no strand in a rope goes from one end to the other, most genes don't tend to do that either. That doesn't mean the rope is weak, or that future creatures are not - partly - our descendants.

Comment deleted 17 January 2010 11:45:06AM
Comment author: timtyler 17 January 2010 01:34:23PM -1 points

Museums have some paperclips in them. You have to imagine future museums as dynamic things that recreate and help to visualise the past - as well as preserving artefacts.

Comment author: orthonormal 17 January 2010 06:38:35PM 2 points

If you were an intelligence that cared only about the number of paperclips in the universe, you would not build a museum to the past, because you could make more paperclips with the resources needed to create such a museum.

This is not some clever, convoluted argument. This is the same as saying that if you make your computer execute

10 GOTO 20

20 GOTO 10

then it won't at any point realize the program is "stupid" and stop looping. You could even give the computer another program which is capable of proving that the first one is an infinite loop, but it won't care, because its goal is to execute the first program.
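The analogy above can be made concrete with a short sketch (in Python rather than BASIC, with hypothetical names): an analyser can prove the two-line program loops forever, but an executor whose only goal is "run the program" never consults that proof.

```python
# Sketch of the loop analogy: proving a program loops forever
# is separate from caring that it does.

def program_step(state):
    # The two-line program: line 10 jumps to 20, line 20 jumps to 10.
    return {10: 20, 20: 10}[state]

def proves_infinite_loop(step, start, max_states=100):
    """A separate analyser: detects a revisited state, i.e. a cycle."""
    seen = set()
    state = start
    while state not in seen:
        seen.add(state)
        state = step(state)
        if len(seen) > max_states:
            return False  # gave up; no cycle found within the bound
    return True  # a state repeated, so the program provably never halts

# The analyser "knows" the program is an infinite loop...
print(proves_infinite_loop(program_step, 10))
# ...but an executor whose goal is simply to run program_step would
# keep stepping 10 -> 20 -> 10 -> ... regardless. Knowing is not caring.
```

The point of the sketch is that the proof changes nothing about the executor's behaviour unless its goal already refers to that proof.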

Comment author: timtyler 17 January 2010 07:10:55PM -2 points

That's a different question - and one which is poorly specified:

If insufficient look-ahead is used, such an agent won't bother to remember its history - preferring instead the gratification of instant paperclips.

On the other hand, if you set the look-ahead further out, it will. That's because most intelligent agents are motivated to remember the past - since only by remembering the past can they predict the future.

Understanding the history of their own evolution may well help them to understand the possible forms of aliens - which might well help them avoid being obliterated by alien races (along with all the paper clips they have made so far). Important stuff - and well worth building a few museums over.

Remembering the past is thus actually a proximate goal for a wide range of agents. If you want to argue that paperclip-loving agents won't build museums, you need to be much more specific about which paperclip-loving agents you are talking about - because some of them will.

Once you understand this you should be able to see what nonsense the "value is fragile" post is.

Comment author: orthonormal 17 January 2010 07:51:29PM 0 points

At this point, I'm only saying this to ensure you don't take any new LWers with you in your perennial folly, but your post has anthropomorphic optimism written all over it.

Comment author: timtyler 17 January 2010 08:47:07PM 2 points

This has nothing to do with anthropomorphism or optimism - it is a common drive for intelligent agents to make records of their pasts - so that they can predict the consequences of their actions in the future.

Once information is lost, it is gone for good. If information might be valuable in the future, a wide range of agents will want to preserve it - to help them attain their future goals. These points do not seem particularly complicated.

I hope at least that you now realise that your "loop" analogy was wrong. You can't just argue that paperclipping agents will not have preserving the past in museums as a proximate goal - since their ultimate goal involves making paperclips. There is a clear mechanism by which preserving their past in museums might help them attain that goal in the long term.

A wide class of paperclipping agents who are not suffering from temporal myopia should attempt to conquer the universe before wasting precious time and resources with making any paperclips. Once the universe is securely in their hands - then they can get on with making paperclips. Otherwise they run a considerable risk of aliens - who have not been so distracted with useless trivia - eating them, and their paperclips. They will realise that they are in an alien race - and so they will run.

Comment author: Bo102010 18 January 2010 12:12:14AM 3 points

Did you make some huge transgression that I missed that is causing people to get together and downvote your comments?

Edit: My question has now been answered.

Comment author: Technologos 17 January 2010 09:16:23PM 1 point

"an unprecedented and unlikely phenomenon"

Possible precedents: the Library of Alexandria and the Dark Ages.

Comment author: timtyler 17 January 2010 09:27:33PM 1 point

Reaching, though: the Dark Ages were confined to Western Europe - and something like the Library of Alexandria couldn't happen these days; there are too many libraries.