Armok_GoB comments on AI risk, new executive summary - Less Wrong
It was just a joke: I meant that I would prove you wrong by showing that I can understand you, despite the difference in our intellectual faculties. I don't really know if we have very different intellectual faculties; it was just a slightly ironic riposte to being called "naive, unimaginative and closed-minded" earlier. You may be right! But in that case, my understanding you is at least a counterexample.
Can we taboo the 'animals can't be made to understand us' analogy? I don't think it's a good analogy, and I assume you can express your point without it. It certainly can't be the substance of your argument.
Anyway, would you be willing to agree to this: "There are at least some sentences in the meta-language (i.e. the kind of language an AGI might be capable of) such that those sentences cannot be translated into even arbitrarily complex expressions in human language." For example, there will be sentences in the meta-language that cannot be expressed in human language, even if we allow the users of human language (and the AGI) an arbitrarily large amount of time, an arbitrarily large number of attempts at conversation, question and answer, etc., and an arbitrarily large capacity for producing metaphor, illustration, and so on. Is that your view? Or is that far too extreme? Do you just mean to say that the average human being today couldn't get their head around an AGI's goals given 40 minutes, pencil, and paper? Or something in between these two claims?
Why do you think this is a strong argument? It strikes me as very indirect and intuitionistic. I mean, I see what you're saying, but I'm not at all confident that the relations between a protozoan and a fish, a dog and a chimp, an 8th-century dock worker and a 21st-century physicist, and the smartest of (non-uplifted) people and an AGI all fall onto a single continuum of intelligence/complexity of goals. I don't even know what kind of empirical evidence (I mean the sort of thing one would find in a scientific journal) could be given in favor of such a conclusion. I just don't really see why you're so confident in it.
Using "even arbitrarily complex expressions in human language" seems unfair: human language may be Turing complete, but fully describing even a simple program in it without external tools would far exceed the capability of any actual human, except perhaps a few savants who ended up highly specialized toward that narrow kind of task.
I agree, but I was taking the work of translation to be entirely on the side of an AGI: it would take whatever sentences it thinks in a meta-language and translate them into human language. Figuring out how to express such thoughts in our language would be a challenging practical problem, but that's exactly where AGI shines. I'm assuming, obviously, that it wants to be understood. I am very ready to agree that an AGI attempting to be obscure to us will probably succeed.
That's obvious, and not what I meant. I'm talking about the simplest in-principle expression in human language being that long and complex.