timtyler comments on Advice for AI makers - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I figure a fair amount of modern heritable information (such as morals) will not be lost. Civilization seems to be getting better at keeping and passing on records. You pretty much have to hypothesize a breakdown of civilization for much of genuine value to be lost - an unprecedented and unlikely phenomenon.
However, I expect increasing amounts of it to be preserved mostly in history books and museums as time passes. Over time, that will probably include most DNA-based creatures - including humans.
Evolution is rather like a rope. Just as no strand in a rope goes from one end to the other, most genes don't tend to do that either. That doesn't mean the rope is weak, or that future creatures are not - partly - our descendants.
Museums have some paperclips in them. You have to imagine future museums as dynamic things that recreate and help to visualise the past - as well as preserving artefacts.
If you were an intelligence that cared only about the number of paperclips in the universe, you would not build a museum to the past, because you could make more paperclips with the resources needed to create such a museum.
This is not some clever, convoluted argument. This is the same as saying that if you make your computer execute
10: GOTO 20
20: GOTO 10
then it won't at any point realize the program is "stupid" and stop looping. You could even give the computer another program which is capable of proving that the first one is an infinite loop, but it won't care, because its goal is to execute the first program.
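The point above can be made concrete with a small sketch (in Python rather than BASIC; the dict encoding of the two-line program is my own illustration, not from the original comment). An interpreter executing the program just loops forever; a separate analyzer can prove the loop by detecting a revisited line - but proving the program is "stupid" and caring about that are entirely different things:

```python
# The two-line GOTO program, encoded as line -> GOTO target.
program = {10: 20, 20: 10}

def loops_forever(program, start):
    """Prove non-termination by detecting a revisited line number."""
    seen = set()
    line = start
    while line in program:      # jumping to a missing line would halt
        if line in seen:
            return True         # revisited a line: provably an infinite loop
        seen.add(line)
        line = program[line]
    return False

print(loops_forever(program, 10))  # True - the analyzer proves the loop
# An interpreter running `program` directly would simply never return.
# The proof changes nothing about the interpreter's behaviour.
```

The analyzer is sound here only because the program's state is just the current line number, so a repeated line really does imply an infinite loop.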
That's a different question - and one which is poorly specified:
If insufficient look-ahead is used, such an agent won't bother to remember its history - preferring instead the gratification of instant paperclips.
On the other hand, if you set the look-ahead further out, it will. That's because most intelligent agents are motivated to remember the past - since only by remembering the past can they predict the future.
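The look-ahead point can be illustrated with a toy discounted-utility calculation (the numbers and payoffs are entirely hypothetical - in particular, the assumption that a "museum" pays off later is just the claim under discussion, encoded as a reward):

```python
def discounted_value(rewards, gamma):
    """Standard discounted utility: sum of rewards[t] * gamma**t."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

# Option A: convert resources into paperclips immediately.
clips_now = [100, 0, 0, 0]
# Option B: build a museum now; assume (hypothetically) it pays off
# later, e.g. by helping the agent survive an alien encounter.
museum_then_clips = [0, 0, 0, 500]

for gamma in (0.1, 0.9):
    a = discounted_value(clips_now, gamma)
    b = discounted_value(museum_then_clips, gamma)
    print(gamma, "museum wins" if b > a else "clips-now wins")
# With gamma = 0.1 (myopic), clips-now wins; with gamma = 0.9
# (far-sighted), the museum wins.
```

Nothing here argues that the later payoff exists - only that, if it does, whether a paperclip-maximiser builds the museum is decided by its look-ahead, not by its ultimate goal.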
Understanding the history of their own evolution may well help them to understand the possible forms of aliens - which might well help them avoid being obliterated by alien races (along with all the paper clips they have made so far). Important stuff - and well worth building a few museums over.
Remembering the past is thus actually a proximate goal for a wide range of agents. If you want to argue paperclip-loving agents won't build museums, you need to be much more specific about which paperclip-loving agents you are talking about - because some of them will.
Once you understand this you should be able to see what nonsense the "value is fragile" post is.
At this point, I'm only saying this to ensure you don't take any new LWers with you in your perennial folly, but your post has anthropomorphic optimism written all over it.
This has nothing to do with anthropomorphism or optimism - it is a common drive for intelligent agents to make records of their pasts - so that they can predict the consequences of their actions in the future.
Once information is lost, it is gone for good. If information might be valuable in the future, a wide range of agents will want to preserve it - to help them attain their future goals. These points do not seem particularly complicated.
I hope at least that you now realise that your "loop" analogy was wrong. You can't just argue that paperclipping agents will not have preserving the past in museums as a proximate goal - since their ultimate goal involves making paperclips. There is a clear mechanism by which preserving their past in museums might help them attain that goal in the long term.
A wide class of paperclipping agents who are not suffering from temporal myopia should attempt to conquer the universe before wasting precious time and resources on making any paperclips. Once the universe is securely in their hands - then they can get on with making paperclips. Otherwise they run a considerable risk of aliens - who have not been so distracted with useless trivia - eating them, and their paperclips. They will realise that they are in an alien race - and so they will run.
Did you make some huge transgression that I missed that is causing people to get together and downvote your comments?
Edit: My question has now been answered.
I haven't downvoted, but I assume it's because he's conflating 'sees the value in storing some kinds of information' with 'will build museums'. Museums don't seem to be particularly efficient forms of data-storage, to me.
Future "museums" may not look exactly like current ones - and sure - some information will be preserved in "libraries" - which may not look exactly like current ones either - and in other ways.
Not really, just lots of little ones involving the misuse of almost valid ideas. They get distracting.
You got voted down because you were rational. You went over some people's heads.
These are popularity points, not rationality points.
That's pretty vague. Care to point to something specific?
Your use of "get together" brings to mind some sort of Less Wrong cabal who gathered to make a decision. This is of course the opposite of the truth, which is that each downvote is the result of someone reading the thread and deciding to downvote the comment. They're not necessarily uncorrelated, but "get together" is completely the wrong way to think about how these downvotes occur.
Actually, that's what I was meaning to evoke. I read his recent comments, and while I didn't agree with all of them, didn't find them to be in bad faith. I found it odd that so many of them would be at -3, and wondered if I missed something.
Possible precedents: the Library of Alexandria and the Dark Ages.
Reaching, though: the Dark Ages were confined to Western Europe - and something like the Library of Alexandria couldn't happen these days - there are too many libraries.