A universal Turing machine is a universal computer: it can compute anything that any other computer can compute. A human being can specify a Turing machine and the data it acts on, and carry out the steps that the machine would execute. Human beings have also constructed computers with the same repertoire as a universal Turing machine, such as the computer on which I am writing this question. There are articles on Less Wrong about mind design space, such as this one:

https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general

in which the author writes:

The main reason you could find yourself thinking that you know what a fully generic mind will (won't) do, is if you put yourself in that mind's shoes - imagine what you would do in that mind's place - and get back a generally wrong, anthropomorphic answer.

But a person thinking about what an AI would do needn't imagine what he would do in that other mind's place. He can simulate that mind with a universal computer.
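To make that concrete: simulating a Turing machine just means mechanically applying its transition table, step after step. Below is a minimal sketch in Python of what carrying out those steps looks like; the machine and its transition table are hypothetical, chosen only for illustration.

```python
# A minimal Turing machine simulator: "carrying out the steps" is nothing
# more than repeatedly looking up the current (state, symbol) pair in the
# machine's transition table. The example machine below is hypothetical,
# chosen only for illustration.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: (state, symbol) -> (new_symbol, move, new_state),
    with move in {"L", "R"}. Returns the final tape contents, or None
    if the machine has not halted within max_steps."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return None

# Example machine: flip every bit of the input, then halt.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flip, "10110"))  # prints 01001
```

Anyone, human or machine, who can follow that loop can in principle simulate any Turing machine whose table they are given.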

So what is the Less Wrong position on whether we could understand AIs, and how is that claim compatible with the universality of computation?


4 Answers

jimrandomh


But a person thinking about what an AI would do needn't imagine what he would do in that other mind's place. He can simulate that mind with a universal computer.

This is straightforwardly incorrect. Humans (in 2020) reasoning about what future AIs will do, do not have the source code or full details of those AIs, because they are hypothetical constructs. Therefore we can't simulate them. This is the same as why we can't predict what another human would do by simulating them; we don't have a full-fidelity scan of their brain, or a detailed-enough model of what to do with such a scan, or a computer fast enough to run it.

We can't simulate things of which we currently have no understanding. But if at some point in the future we know how to write AGIs, then we would be able to simulate them. And if we don't know how to write AGIs then they won't exist. So if we can write AGIs in the future then memory capacity and processor speed won't impose a limit on our understanding. Any such limit would have to come from some other factor. So is there such a limit and where would it come from?

jimrandomh
I think you're missing what the goal of all this is. LessWrong contains a lot of reasoning and prediction about AIs that don't exist, with details not filled in, because we want to decide which AI research paths we should and shouldn't pursue, which AIs we should and shouldn't create, etc. This kind of strategic thinking must necessarily be forward-looking and based on incomplete information, because if it weren't, it would be too late to be useful. So yes, after AGIs are already coded up and ready to run, we can learn things about their behavior by running them. That isn't in dispute; it's just not a solution to the questions we want to answer (on the timescales we need the answers).

Slider


Being able to run or emulate a program doesn't imply that I understand it very well. If I have an exe, I need to decompile it or have its source provided to me, and even then I need to study it quite a bit. If I run it by pen and paper, I am not guaranteed to gain more insight than by running it on an external computer.

There is also a distinction between a specific program and what a program could be. For example, "programs will halt" is wrong, although the claim that this or that program halts can be right or wrong. There are not many properties you can deduce about a program simply from its being a program. "Programs have loops" can be a good inductive generalization about programs found "in the wild", but it is a terrible description of a general program.
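To illustrate the distinction, here is a hypothetical Python sketch: one program's halting is settled by inspection, while the other (the Collatz iteration) can be run on any particular input you like without that settling whether it halts for every input.

```python
# Running a program on particular inputs does not settle general claims
# about it. Both functions are hypothetical examples for illustration.

def countdown(n):
    """Halts for every non-negative integer n; easy to see by inspection."""
    while n > 0:
        n -= 1
    return n

def collatz_steps(n):
    """Iterate n -> n // 2 (even) or 3n + 1 (odd) until reaching 1.
    Whether this halts for every positive integer is an open problem."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(countdown(5))        # 0
print(collatz_steps(27))   # 111 -- but no number of such runs proves
                           # "this program always halts"
```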

TAG


This is mostly a quantitative issue.

If you define a UTM as having infinite capacity, then a human is not a UTM.

If you are talking about finite TMs, then a smaller finite TM cannot emulate a larger one, and a larger finite TM might be able to emulate a smaller one, but not necessarily. A human cannot necessarily emulate a TM with less total processing power than a human brain, because a human cannot devote 100% of their cognitive resources to the emulation. Your brain is mostly devoted to keeping your body going.

This can easily be seen from the history and methodology of programming. Humans have a very limited ability to devote their cognitive resources to emulating low-level computation, so programmers found it necessary to invent high-level languages and tools to minimise their disadvantages and maximise their advantages in higher-level thought and pattern recognition.

Humans are so bad at emulating a processor executing billions of low-level instructions per second that our chances of predicting an AI by that technique in real time are zero.
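A back-of-the-envelope calculation makes the gap vivid. The rates below are illustrative assumptions, not measurements: even granting a human one hand-simulated instruction per second, replaying a single second of a 1 GHz processor's work would take roughly thirty years.

```python
# Back-of-the-envelope: how long would it take a human to hand-simulate
# one second of a processor's work? Both rates are illustrative
# assumptions, not measurements.

cpu_instructions_per_second = 1e9    # a modest 1 GHz, one instruction per cycle
human_instructions_per_second = 1    # a generous rate for pen-and-paper emulation

seconds = cpu_instructions_per_second / human_instructions_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years per emulated second")  # about 31.7 years
```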

Periergo


The "Less Wrong position"? Are we all supposed to have 1 position here? Or did you mean to ask what EY's position is?

I don't think I understand your statement/question (?) - In order to know what an AI would do, you just need to simulate it with an AI?

I think you're saying that you could simulate what an AGI would do via any computer. If you're simulating an AGI, are you not building an AGI?

1 comment

Of possible interest: Roman Yampolskiy's paper, "The Universe of Minds".

https://arxiv.org/pdf/1410.0369.pdf

The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some interesting properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology. A list of open problems for this new field is presented.
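The paper's identification of minds with software makes the infinitude claim easy to illustrate: if every mind corresponds to some program, and every program to a finite bitstring, then candidate mind descriptions can be enumerated and never run out. A purely illustrative sketch (not from the paper):

```python
# If every mind corresponds to some program, and every program to a finite
# bitstring, then candidate mind descriptions can be enumerated without end.
# A purely illustrative sketch of that enumeration.

from itertools import count, product

def mind_descriptions():
    """Yield every finite bitstring in order of increasing length."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = mind_descriptions()
print([next(gen) for _ in range(8)])
# ['0', '1', '00', '01', '10', '11', '000', '001']
```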