JoshuaZ comments on David Deutsch on How To Think About The Future - Less Wrong

Post author: curi 11 April 2011 07:08AM


Comment author: JoshuaZ 09 April 2011 11:52:56PM 4 points [-]

Whether you call it new knowledge or not is not relevant. Nor are new types of knowledge what is generally relevant (aside from the not-at-all-small issue that "type" isn't a well-defined notion in this context).

Like writers can predict they will finish writing a book (even if they haven't worked out 100% of the plot yet) in advance.

Actually, writers sometimes start a book and find part way through that they don't want to finish, or the book might even change genres in the process of writing. If you prefer an example, I can predict that Brandon Sanderson's next Mistborn book will be awesome. I can predict that it will sell well and get good reviews. I can even predict a fair number of plot points just based on what Sanderson has done before and various comments he has made. But at the same time, I can't write a novel nearly as well as he does, and if he and I were to have a novel-writing contest, he would beat me. I don't know how he would beat me, but he would.

Similarly, a sufficiently smart AI poses the same problem. If it decides that human existence is non-optimal for carrying out its goals, then it will try to find ways to eliminate us. It doesn't matter if all the ways it comes up with of doing so are in a fairly limited set of domains. If it is really good at chemistry, it might make nasty nanotech to reduce organic life to its constituent atoms. If it is really good at math, it might break all our cryptography, then hack into our missiles and trigger a nuclear war (this one is obvious enough that there are multiple movies about it). If it is really good at social psychology, it might manipulate us over a few years into just handing over control to it.

Just as I don't know how Kasparov will beat me but I know he will, I don't know how a sufficiently intelligent AI will beat me, but I know it will. There may be issues with how intelligent it needs to be, and whether an AGI is likely to undergo fast, substantial, recursive self-improvement to get that intelligent is a matter of much discussion on LW (Eliezer considers it likely; some other people, such as myself, consider it unlikely), but the basic point about sufficient intelligence seems clear.

Comment author: curi 10 April 2011 12:00:11AM -1 points [-]

Whether you call it new knowledge or not it not relevant.

Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.

Actually, writers sometimes start a book and find part way through that they don't want to finish,

I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.

@Sanderson: You understand what kind of thing he's doing pretty well; writers are a well-known phenomenon. The less you know about what processes he uses to write, what tradition he's following -- in general, what's going on -- the less you can make any kind of useful prediction.

If it decides that human existence is non-optimal for it to carry out its goals

Why would it?

Deutsch doesn't think AGIs will do fast recursive self-improvement. They can't because the first ones will already be universal and there's nothing much left to improve, besides their knowledge (not their design, besides making it faster). Improving knowledge with intelligence is the same process for AGI and humans. It won't magically get super fast.

Comment author: Vladimir_Nesov 10 April 2011 12:10:07AM 2 points [-]

And if I played Kasparov I might win. It's not a 100% guaranteed prediction.

The fallacy of gray? Between zero chance of winning a lottery, and epsilon chance, there is an order-of-epsilon difference. If you doubt this, let epsilon equal one over googolplex.

Comment author: JoshuaZ 10 April 2011 12:18:38AM 2 points [-]

Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.

Then define the term.

I know that? And if I played Kasparov I might win. It's not a 100% guaranteed prediction.

So what? How is that at all relevant? It isn't 100% guaranteed that if I jump off a tall building I will die. That doesn't mean I'm going to try it. You can't use the fact that something isn't certain as an argument to ignore the issue wholesale.

Deutsch doesn't think AGIs will do fast recursive self-improvement. They can't because the first ones will already be universal and there's nothing much left to improve, besides their knowledge (not their design, besides making it faster).

Ok. So I'm someone who finds extreme recursive self-improvement to be unlikely, and I find this to be a really unhelpful argument. Improvements in speed matter. A lot. Imagine, for example, that our AI finds a proof that P=NP, that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small. That means the AI will do pretty much everything faster, and the more computing power it gets, the more disparity there will be between it and the entities that don't have access to this algorithm. It wants to engineer a new virus? Oh what luck: protein folding is, under many models, NP-complete. The AI decides to improve its memory design? Well, that involves graph coloring and the traveling salesman problem, also NP-complete. The AI decides it really wants access to all the world's servers to add them to its computational power? Well, most of those have remote-access capability based on cryptographic problems that are much weaker than NP-complete. So, um, yeah. It got those too.
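To make the speed point concrete, here is a toy sketch (my own illustration, not from the thread) of why a polynomial-time algorithm for an NP-complete problem would be such a big deal. Brute-force subset sum examines all 2^n subsets, so each added element doubles the work; an O(n^2) algorithm would not suffer this blowup.

```python
# Toy illustration: exponential brute force for subset sum, an NP-complete
# problem. The point is the 2^n growth, not the specific problem.
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Return a subset of nums summing to target, or None. Worst case O(2^n)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def worst_case_subsets(n):
    """Number of subsets examined in the worst case for n elements."""
    return 2 ** n

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # -> [4, 5]
# Doubling the input size squares the work for brute force:
print(worst_case_subsets(20), worst_case_subsets(40))   # 1048576 vs 1099511627776
```

With an O(n^2) algorithm, doubling the input only quadruples the work, which is why such a discovery would give its possessor a compounding advantage on every problem reducible to it.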

Now, this scenario seems potentially far-fetched. After all, most experts consider it unlikely that P=NP, and consider it extremely unlikely that there's any sort of fast algorithm for NP-complete problems. So let's just assume instead that the AI tries to make itself a lot faster. Well, let's see, what can our AI do? It could give itself some nice quantum computing hardware and then use Shor's algorithm to factor integers in polynomial time, and then our AI can just take over lots of computers and have fun that way.
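A toy sketch (again mine, not from the thread) of why fast factoring breaks crypto: RSA's security rests on the difficulty of factoring n = p*q, and anyone who can factor n can reconstruct the private key. Here trial division stands in for the factoring step; it is exponential in the bit-length of n, which is exactly what Shor's algorithm would make polynomial.

```python
def trial_division(n):
    """Factor n = p * q by trial division. Slow for real key sizes;
    Shor's algorithm would do this step in polynomial time."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# A textbook-sized RSA key: n = 61 * 53, public exponent e = 17.
n, e = 3233, 17
p, q = trial_division(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent, recovered from the factors

msg = 42
cipher = pow(msg, e, n)      # encrypt with the public key
print(pow(cipher, d, n))     # decrypt with the recovered private key -> 42
```

Real keys use moduli thousands of bits long precisely so that classical factoring is hopeless; the attack above only works because factoring 3233 is easy.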

Improving knowledge with intelligence is the same process for AGI and humans. It won't magically get super fast.

This is not at all obvious. Humans can't easily self-modify our hardware. We have no conscious access to most of our computational capability, and our computational capability is very weak. We're pathetic sacks of meat that can't even multiply four- or five-digit numbers in our heads. We also can't save states and swap out cognitive modules. An AGI can potentially do all of that.

Don't underestimate the dangers of a recursively self-improving entity or the value of speed.