Stuart_Armstrong comments on Wasted life - Less Wrong Discussion

12 Post author: Stuart_Armstrong 24 May 2012 10:21AM

Comment author: Stuart_Armstrong 25 May 2012 10:37:43AM *  3 points

That would indeed be less cheerful (but still useful to know).

Comment author: private_messaging 25 May 2012 02:30:57PM *  3 points

Imagine pondering the creation of artificial life-force from combinations of mechanical parts; that sounds incredibly dangerous, and like a worthwhile area of study. One could spend a lot of time thinking in terms of life-force: how do we ensure that the life-force goo won't eat everything in its path? Should we stop research into steam locomotives to avoid such a scenario?

Would you want to know if you are thinking in terms of an irrelevant abstraction? We humans have the capability for abstract thought; we love abstract thinking; some concepts are just abstraction porn, though, useful only for tickling our 'grand insights feel good' button.

Comment author: cousin_it 25 May 2012 03:03:56PM *  3 points

If people had reasoned that way in the 18th century, they would have correctly predicted the risks of nanotech and maybe biotech. So I guess you should conclude that unfriendly AI risk is real, though far in the future... Anyway, how do you tell which concepts are "abstraction porn" and which aren't?

Comment author: private_messaging 25 May 2012 06:46:59PM *  5 points

If people had reasoned that way in the 18th century, they would have correctly predicted the risks of nanotech and maybe biotech.

How useful would that have been, though? I don't think you can get a single useful insight about making safe nanotech or biotech from thinking in terms of an abstract 'life force'. edit: also, one can end up predicting a lot of invalid stuff this way, like zombies...

Anyway, how do you tell which concepts are "abstraction porn" and which aren't?

Concepts that are not built bottom-up are usually abstraction porn.

This monolithic "intelligence" concept is as suspicious as a concept can be: it lets you have grand-feeling insights without being concerned with any hard details, such as algorithmic complexity, existing problem-solving algorithms, or the different aspects of intelligence (problem solving, world modelling, sensory processing), and without considering that the intelligence has to work in a decentralized manner due to speed-of-light lag, so its parts have to implement some sort of efficient protocol for cooperation. Mankind got such a protocol; we call it morality. Ditto 'utility' as per LW (not to be confused with utility as in a mathematical function inside some current software).

edit: actually, do you guys even use any concept built from the bottom up to think about AI?

Comment author: Stuart_Armstrong 28 May 2012 08:43:11AM 1 point

How useful would knowing that the AI would be using, say, A* search be, as opposed to meta-reasoning about what it is likely to be searching for? We know both from computer science and from our own minds that effective heuristics exist to approximately solve most problems. The precise bottom-up knowledge you refer to is akin to knowing that the travelling salesman problem can only be solved exactly in exponential time (assuming P ≠ NP); the meta-knowledge that "good polynomial-time heuristics exist for most problems" is much more useful for predicting the future of AI.
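[Editor's illustration of that meta-knowledge, not from the thread: exact TSP takes exponential time, but the nearest-neighbour heuristic gives a decent tour in O(n²). The city coordinates below are arbitrary.]

```python
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbour_tour(points):
    """Greedy O(n^2) heuristic: always hop to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at city 0
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

cities = [(0, 0), (0, 1), (2, 1), (2, 0), (1, 0.6)]
tour = nearest_neighbour_tour(cities)
print(tour, round(tour_length(cities, tour), 3))
```

The greedy tour is not guaranteed optimal, which is exactly the point: "good enough in polynomial time" is the useful prediction-level fact.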

Comment author: private_messaging 28 May 2012 01:02:38PM *  2 points

The issue is not merely that you don't have ground-up definitions which respect the time constraints. The issue is that you don't seem to have any ground-up definitions at all, i.e. not even for something like AIXI. The goals themselves lack any bottom-up definitions.

Worst of all, you build things from dubious concepts like that monolithic "intelligence".

Say we want to make better microchips. We engineers have to build things from the bottom up, so we make some partially implemented intelligence to achieve such a goal: omitting the definition of what exactly a 'best microchip' is, omitting real-world goals, focusing the search (heuristics are about where you search!), and instead making it design smaller logic gates, then route the chip, and perhaps figure out manufacturing. All doable with the same methods, all to the point: strongly superhuman performance on subhuman hardware.

You build an Oracle AI out of that monolithic "intelligence" concept and tell it: I want a better microchip. This monolithic intelligence figures out how to take over the world to do so. You think: how do we prevent this monolithic intelligence from thinking about taking over the world? That looks like an incredibly difficult problem.

Or the orthogonality thesis. You think: are the goals of that monolithic intelligence arbitrary?

Meanwhile if you try to build bottom up or at least from the concepts with known bottom up definitions, well, something like 'number of paperclips in the universe' is clearly more difficult than f(x,n), where x is the output on the screen at step n and f is 1 if the operator responds with the reward button, 0 otherwise (note that the state where the computer is unplugged has to be explicitly modelled, and it's not trivial to build bottom-up the concept of a mathematical function 'disappearing'; it may actually be impossible).
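[Editor's sketch of the contrast being drawn, with hypothetical names; `operator_pressed_button` stands in for a physical input channel.]

```python
def f(screen_output, n, operator_pressed_button):
    """Reward at step n: 1 if the operator responds with the reward
    button, 0 otherwise. Fully defined once the input channel is fixed."""
    return 1 if operator_pressed_button else 0

def paperclips_in_universe(physical_state):
    """The 'number of paperclips in the universe' goal, by contrast, has
    no known bottom-up definition: it would need a function from raw
    physical state to a paperclip count, which nobody can write down."""
    raise NotImplementedError("no bottom-up definition is known")
```

The first goal is trivially computable from the machine's observable I/O; the second exists only as an abstraction.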

Comment author: Stuart_Armstrong 28 May 2012 02:12:18PM 2 points

Worst of all, you build things from dubious concepts like that monolithic "intelligence".

The reason for that is that the AIs that are worrying are those of human-like levels of ability. And humans have shown skill in becoming intelligent in many different domains, and the ability to build machine intelligence in the domains where we have little skill. So whatever its design, an AGI (artificial general intelligence) will have a broad palette of abilities, and probably the ability to acquire others - hence the details of its design are less important than meta considerations. This is not the case for non-AGI AIs.

Or the orthogonality thesis. You think: are the goals of that monolithic intelligence arbitrary?

I think "how to convince philosophers that high intelligence will not automatically imply certain goals" - i.e. that they are being incorrectly meta.

Meanwhile if you try to build bottom up or at least from the concepts with known bottom up definitions, well...

Moore's law is a better way of predicting the future than knowing the exact details of current research into microprocessors. Since we don't have any idea how the first AGI will be built (assuming it can be built), why bother focusing down on the current details when we're pretty certain they won't be relevant?
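[Editor's illustration of the trend-level prediction meant here; the 18-month doubling period is an assumption for the sketch, not a figure from the comment.]

```python
# Toy Moore's-law extrapolation: the trend-level prediction needs no
# knowledge of how any particular microprocessor is designed.

def projected_transistors(count_now, years, doubling_period_years=1.5):
    """Extrapolate a transistor count forward under exponential growth."""
    return count_now * 2 ** (years / doubling_period_years)

print(projected_transistors(1e9, 3))  # two doublings: 4e9
```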

Comment author: private_messaging 28 May 2012 04:54:09PM *  1 point

The reason for that is that the AIs that are worrying are those of human-like levels of ability.

The AIs that are worrying have to beat the (potentially much simpler) partial AIs, which can be of autistic-savant-like levels of ability and beyond in the fields very relevant to being actually powerful. You can't focus just on human-level AGIs when you consider the risks. The AGIs have to be able to technologically outperform contemporary human civilization to a significant extent, which would not happen if both AGIs and humans are substantially bottlenecked on running essentially the same highly optimized (possibly self-optimized) non-general-purpose algorithms to solve domain-specific problems.

hence the details of its design are less important than meta considerations.

The meta considerations in question look identical to the least effective branches of philosophy.

I think "how to convince philosophers that high intelligence will not automatically imply certain goals" - i.e. that they are being incorrectly meta.

I think so too, albeit in a different way: I do not think that high intelligence will automatically imply that the goals are within the class of "goals which we have no clue how to define mathematically but which are really easy to imagine".

Since we don't have any idea how the first AGI will be built (assuming it can be built), why bother focusing down on the current details when we're pretty certain they won't be relevant?

I have trouble parsing the logical structure of this argument. Not having any idea how the first AGI will be built wouldn't make reasoning that employs faulty concepts relevant. Furthermore, being certain that something is irrelevant without having studied it is a very Dunning-Kruger-prone form of thought.

Furthermore, I can't see how in the world you can be certain that it is irrelevant that (for example) the AI has to work in a peer-to-peer topology with very substantial lag, efficiently (i.e. no needless simulation of other nodes of itself, significant ignorance of the content of other nodes, the local nodes lacking sight of the global picture, etc.), when it comes to how it will interact with other intelligences. We truly do not know that hyper-morality won't fall out of this as a technological solution, considering that our own morality was produced as a solution for cooperation.

Also, I can't see how it can be irrelevant that (as a good guess) an AGI is ultimately a mathematical function that calculates outputs from inputs using elementary operations, and a particular instance of the AGI is a machine computing this function. That's a meta consideration built from the ground up rather than from the concept of monolithic 'intelligence'. 'Symbol grounding' may be a logical impossibility (and the feeling that symbols are grounded may well be a delusion that works via fallacies); in any case, we don't see how it can be solved. Like free will: a lot of people feel very sure that they have something in their mind that's clearly not compatible with reductionism. Well, I think there can be a lot of other things that we feel very sure we have which are not compatible with reductionism in less obvious ways.

edit: to summarize, my opinion is that everything is far, far, far too speculative to warrant investigation. It's like trying to prevent the Hindenburg disaster, the bombing of Dresden, the atomic bombing of Hiroshima and Nagasaki, and the risk of nuclear war by thinking of the flying carpet as the flying vehicle (because bird-morphizing is not cool).

Comment author: jacob_cannell 09 June 2012 07:56:25PM 0 points

the AI has to work in a peer-to-peer topology with very substantial lag, efficiently (i.e. no needless simulation of other nodes of itself, significant ignorance of the content of other nodes, the local nodes lacking sight of the global picture, etc.), when it comes to how it will interact with other intelligences. We truly do not know that hyper-morality won't fall out of this as a technological solution, considering that our own morality was produced as a solution for cooperation.

Yes. The SIAI world view doesn't seem to pay much attention to how morality necessarily evolved as the cooperative glue for the social super-organism meta-transition.

Comment author: Stuart_Armstrong 28 May 2012 08:52:20PM 0 points

edit: to summarize, my opinion is that everything is far, far, far too speculative to warrant investigation.

Well, my opinion is that this is far too dangerous (as compared with other risks to humanity) to not investigate it. Philosophical tools are weak, but they've yet to prove weak enough that we should shelve the ongoing project.

Comment author: private_messaging 28 May 2012 09:31:00PM *  0 points

It seems to me that (a) you are grossly overestimating the productivity of symbolic manipulation on a significant number of symbols with highly speculative meanings, and (b) you do not seem to dedicate due effort to investigating existing software, or to verifying and improving the relevance of the symbols. The symbolic manipulation is only as relevant as the symbols being manipulated.

Comment author: jacob_cannell 09 June 2012 07:52:24PM *  0 points

I think "how to convince philosophers that high intelligence will not automatically imply certain goals"

Contradicts:

Since we don't have any idea how the first AGI will be built

If you don't have any idea how AGI will be built, how can you be so confident about the distribution of its goals?

Comment author: Stuart_Armstrong 10 June 2012 09:03:29AM 2 points

Ignorance widens the space of possible outcomes, it doesn't narrow it.

i.e. it makes no sense to make arguments like "we know nothing about the mind of god, but he doesn't like gay sex"

Comment author: jacob_cannell 11 June 2012 02:51:39AM *  -1 points

If you are ignorant about the nature of superintelligence, then you don't know whether or not it entails certain goals.

Ignorance does not allow you to hold confidence in the proposition that "high intelligence will not automatically imply certain goals".

Adopting this argument from ignorance puts you in the unfortunate position of being like the uninformed layman attempting to convince particle physicists of the grave dangers of supercolliders destroying the earth.

For in fact there is knowledge to be had about intelligence and the nature of future AI, and recognized experts in the field (Norvig, Kurzweil, Hawkins, etc.) are not dismissing the SIAI position out of ignorance.