Wasted life
It's just occurred to me that, given all the cheerful risk stuff I work with, one of the most optimistic things people could say to me would be:
"You've wasted your life. Nothing of what you've done is relevant or useful."
That would make me very happy. Of course, that only works if it's credible.
My previous job consisted of a lot of disaster contingency planning. Every three months I'd write a document that would be needed in the event of the deaths of me and a significant number of my work colleagues, only for it to be destroyed three months later and replaced with another one.
None of those documents were ever opened while I was at that company, (or else this is an incredibly spooky comment), but none of them were wasteful either. The knowledge that they wouldn't be needed was only available in retrospect, and the potential cost of losing the core technical staff was enormous.
But surely you'd be offended if someone suggested that, even if the subject you study is important, your personal contributions to it are not relevant or useful?
There's the less cheerful possibility: the risks are real but your work is totally irrelevant to their reduction.
That would indeed be less cheerful. (but still useful to know)
Imagine pondering the creation of an artificial life-force from combinations of mechanical parts; that sounds incredibly dangerous, and like a worthwhile area of study. One could spend a lot of time thinking in terms of life-force - how do we ensure that the life-force goo won't eat everything in its path? Should we stop research into steam locomotives to avoid such a scenario?
Would you want to know if you are thinking in terms of an irrelevant abstraction? We humans have the capability of abstract thought; we love abstract thinking; some concepts are just abstraction porn, though - only useful for tickling our 'grand insights feel good' thing.
If people had reasoned that way in the 18th century, they would have correctly predicted the risks of nanotech and maybe biotech. So I guess you should conclude that unfriendly AI risk is real, though far in the future... Anyway, how do you tell which concepts are "abstraction porn" and which aren't?
How useful would that have been, though? I don't think you can have a single useful insight about making safe nanotech or biotech from thinking in terms of an abstract 'life force'. edit: also, one can end up predicting a lot of invalid stuff this way, like zombies...
Concepts that are not built bottom up are usually abstraction porn.
This monolithic "intelligence" concept - where you can have grand-feeling insights without having to be concerned with any hard details like algorithmic complexity, existing problem-solving algorithms, or the different aspects of intelligence such as problem solving, world modelling, and sensory processing, and without considering that the intelligence has to work in a decentralized manner due to speed-of-light lag (and its parts have to implement some sort of efficient protocol for cooperation... mankind got such a protocol; we call it morality) - is as suspicious as a concept can be. Ditto 'utility' as per LW (not to be confused with utility as in a mathematical function inside some current software).
edit: Actually, do you guys even use any concept built from the bottom up to think about AI?
How useful would it be to know that the AI will use, say, A* search, as opposed to meta-reasoning about what it is likely to be searching for? We know both from computer science and from our own minds that effective heuristics exist to approximately solve most problems. The precise bottom-up knowledge you refer to is akin to knowing that the travelling salesman problem cannot be solved in polynomial time (assuming P ≠ NP); the meta-knowledge "good polynomial-time heuristics exist for most problems" is much more useful for predicting the future of AI.
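To make the heuristics point concrete, here is a minimal sketch (all names are mine, not from this thread) of the classic nearest-neighbour heuristic for the travelling salesman problem: it runs in O(n²) time and typically lands within a modest factor of the optimum, even though the exact problem is NP-hard.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbour_tour(points):
    """Greedy O(n^2) heuristic: always visit the closest unvisited city next."""
    unvisited = set(range(1, len(points)))
    order = [0]  # start at city 0
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(50)]
greedy = nearest_neighbour_tour(cities)
arbitrary = list(range(50))  # visiting cities in index order is essentially random
print(tour_length(cities, greedy) < tour_length(cities, arbitrary))  # True
```

The heuristic gives no guarantee of optimality, which is exactly the meta-point: useful predictions come from knowing such heuristics exist, not from knowing which one a future system will run.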
The issue is not merely that you don't have ground up definitions which respect the time constraints. The issue is that you don't seem to have any ground-up definitions at all, i.e. not even for something like AIXI. The goals themselves lack any bottom up definitions.
Worst of all, you build stuff from dubious concepts like that monolithic "intelligence".
Say, we want to make better microchips. We engineers have to build them from the bottom up, so we make some partially implemented intelligence to achieve that goal: omitting the definition of what exactly the 'best microchip' is, omitting the real-world goals, focusing the search (heuristics are about where you search!), and instead making it design smaller logic gates, then route up the chip, and perhaps figure out the manufacturing. All doable with the same methods, all to the point - strongly superhuman performance on subhuman hardware.
You build Oracle AI out of that monolithic "intelligence" concept, and tell it - I want a better microchip. This monolithic intelligence figures out how to take over the world to do so. You think, how do we prevent this monolithic intelligence concept from thinking about taking over the world? That looks like an incredibly difficult problem.
Or the orthogonality thesis. You think - are the goals of that monolithic intelligence arbitrary?
Meanwhile, if you try to build bottom up, or at least from concepts with known bottom-up definitions, something like "number of paperclips in the universe" is clearly more difficult than f(x, n), where x is the output on the screen at step n and f is 1 if the operator responds with the reward button, 0 otherwise (note that the state where the computer is unplugged has to be explicitly modelled, and it's not trivial to build bottom-up the concept of a mathematical function 'disappearing'; it may actually be impossible).
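The contrast can be made explicit: the f(x, n) above is definable in a few lines, whereas "number of paperclips in the universe" is not. A toy sketch (the operator predicate and all names here are hypothetical stand-ins, not from any real system):

```python
# Bottom-up reward as described in the comment: f is 1 if the operator
# presses the reward button in response to output x at step n, else 0.
# "operator_approves" is a hypothetical stand-in for the real operator.

def f(x, n, operator_approves):
    """Reward function: 1 if the operator rewards output x at step n."""
    return 1 if operator_approves(x, n) else 0

# A toy operator who rewards any output that mentions microchips:
approves = lambda x, n: "microchip" in x

print(f("a smaller logic gate for the microchip", 0, approves))  # 1
print(f("a plan to take over the world", 1, approves))           # 0
```

Nothing analogous can be written down for a goal defined over physical paperclips, which is the asymmetry the comment is pointing at.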
The reason for that is that the AIs that are worrying are those with human-like levels of ability. And humans have shown skill in becoming intelligent in many different domains, and the ability to build machine intelligence in the domains we have little skill in. So whatever its design, an AGI (artificial general intelligence) will have a broad palette of abilities, and probably the ability to acquire others - hence the details of its design are less important than meta considerations. This is not the case for non-AGI AIs.
I think the problem is "how to convince philosophers that high intelligence will not automatically imply certain goals" - i.e. that they are being incorrectly meta.
Moore's law is a better way of predicting the future than knowing the exact details of current research into microprocessors. Since we don't have any idea how the first AGI will be built (assuming it can be built), why bother focusing on the current details when we're pretty certain they won't be relevant?
The AIs that are worrying have to beat the (potentially much simpler) partial AIs, which can have autistic-savant-like levels of ability and beyond in the fields most relevant to being actually powerful. You can't focus just on the human-level AGIs when you consider the risks. The AGIs have to be able to technologically outperform contemporary human civilization to a significant extent. Which would not happen if both AGIs and humans are substantially bottlenecked on running essentially the same highly optimized (possibly self-optimized) non-general-purpose algorithms to solve domain-specific problems.
The meta considerations in question look identical to the least effective branches of philosophy.
I think so too, albeit in a different way: I do not think that high intelligence will automatically imply that the goals are within the class of "goals which we have no clue how to define mathematically but which are really easy to imagine".
I have trouble parsing the logical structure of this argument. The fact that we don't have any idea how the first AGI will be built wouldn't make reasoning that employs faulty concepts relevant. Furthermore, being certain that something is irrelevant without having studied it is a very Dunning-Kruger-prone form of thought.
Furthermore, I can't see how in the world you can be certain that it is irrelevant that (for example) the AI has to work in a peer-to-peer topology with very substantial lag, efficiently (i.e. no needless simulation of other nodes of itself, significant ignorance of the content of other nodes, the local nodes lacking sight of the global picture, etc.), when it comes to how it will interact with other intelligences. We truly do not know that hyper-morality won't fall out of this as a technological solution, considering that our own morality was produced as a solution for cooperation.
Also, I can't see how it can be irrelevant that (at a good guess) an AGI is ultimately a mathematical function that calculates outputs from inputs using elementary operations, and a particular instance of the AGI is a machine computing this function. That's a meta consideration built from the ground up rather than from the concept of monolithic 'intelligence'. 'Symbol grounding' may be a logical impossibility (and the feeling that symbols are grounded may well be a delusion that works via fallacies); in any case, we don't see how it can be solved. Like free will: a lot of people feel very sure that they have something in their mind that's clearly not compatible with reductionism. Well, I think there can be a lot of other things we feel very sure we have which are not compatible with reductionism in less obvious ways.
edit: to summarize, my opinion is that everything is far, far, far too speculative to warrant investigation. It's like trying to prevent the Hindenburg disaster, the bombing of Dresden, the atomic bombings of Hiroshima and Nagasaki, and the risk of nuclear war, by thinking of the flying carpet as the flying vehicle (because bird-morphizing is not cool).
Yes. The SIAI world view doesn't seem to pay much attention to how morality necessarily evolved as the cooperative glue for the social super-organism meta-transition.
Well, my opinion is that this is far too dangerous (as compared with other risks to humanity) to not investigate it. Philosophical tools are weak, but they've yet to prove weak enough that we should shelve the ongoing project.
Contradicts:
If you don't have any idea how AGI will be built, how can you be so confident about the distribution of its goals?
Ignorance widens the space of possible outcomes, it doesn't narrow it.
i.e. it makes no sense to make arguments like "we know nothing about the mind of god, but he doesn't like gay sex"
I'm afraid I cannot say that to you, sir.
What evidence would convince you that your life's work isn't relevant or useful?
Well, the easiest would be a nice, survivable AI being built by people taking no safety precautions at all.
No, the easiest way to make your work irrelevant would be an event that set humanity back to a preindustrial level of technology. The easiest good event is a safe AI without safety precautions.
What Stuart described would be evidence that his work was irrelevant. What you described would be a way of making his work irrelevant. The two are not at all the same.
Indeed. I appear to have misread.
Though I think it's admirable of you to be able to say that, one could read "You've wasted your life. Nothing of what you've done is relevant or useful." as harboring - at least to me - a moral along the lines of "how could you be so stupid as to dedicate your life to something as useless as . . .". Something that I don't think is true.
So how would you measure whether this was or wasn't in fact a waste of your life? Your post seems mostly to state that the outside view is not useful evidence either way.