Perplexed comments on John Baez Interviews with Eliezer (Parts 2 and 3) - Less Wrong

Post author: multifoliaterose 29 March 2011 05:36PM


Comment author: Perplexed 29 March 2011 07:23:22PM 1 point [-]

In Part 2:

JB: So when you imagine "seed AIs" that keep on improving themselves and eventually become smarter than us, how can you reasonably hope that they’ll avoid making truly spectacular mistakes? How can they learn really new stuff without a lot of risk?

EY: The best answer I can offer is that they can be conservative externally and deterministic internally.

Eliezer never justifies why he wants determinism. It strikes me as a fairly bizarre requirement to impose. Or perhaps he means something different by determinism than does everyone else familiar with computers. Does he simply mean that he wants the hardware to be reliable?

Comment author: jsalvatier 29 March 2011 11:53:00PM 2 points [-]

What do you (and 'everyone else familiar with computers') mean by determinism?

Comment author: Perplexed 30 March 2011 12:34:36AM *  5 points [-]

A deterministic algorithm, if run twice with the same inputs, follows the same steps and produces the same outputs each time. A non-deterministic algorithm will not necessarily follow the same steps, and may not even generate the same result.
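To make the distinction concrete, here is a minimal sketch (the function names are my own, for illustration): both functions solve the same task, but the second examines candidates in an arbitrary order, so its steps, and even its answer, can vary between runs while still being correct.

```python
import random

def find_even_deterministic(xs):
    # Scans left to right: the same input always produces the same
    # sequence of steps and the same output.
    for x in xs:
        if x % 2 == 0:
            return x
    return None

def find_even_nondeterministic(xs):
    # Examines candidates in an arbitrary order: the steps taken (and
    # which even number is returned) may differ from run to run,
    # though every possible answer satisfies the specification.
    candidates = list(xs)
    random.shuffle(candidates)
    for x in candidates:
        if x % 2 == 0:
            return x
    return None
```

On the input `[1, 2, 4]` the deterministic version always returns 2, while the non-deterministic one may return either 2 or 4.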

It has been part of the folklore since Dijkstra's "A Discipline of Programming" that well-written non-deterministic programs may be even easier to understand and prove correct than their deterministic counterparts.
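Dijkstra's guarded-command `if` is the classic example: in `if x >= y -> m := x [] y >= x -> m := y fi`, either branch may fire when `x = y`, yet the program is trivially proved to compute the maximum. A rough Python sketch of that nondeterministic choice (the function name is mine):

```python
import random

def guarded_max(x, y):
    # Dijkstra-style nondeterministic selection: gather every branch
    # whose guard holds, then execute an arbitrary enabled one.
    enabled = []
    if x >= y:
        enabled.append(x)  # guard: x >= y  ->  m := x
    if y >= x:
        enabled.append(y)  # guard: y >= x  ->  m := y
    # Any enabled branch satisfies the postcondition m == max(x, y),
    # which is what makes the correctness proof simple.
    return random.choice(enabled)
```

The proof obligation is the same for every enabled branch, which is precisely why the nondeterministic form can be easier to reason about than a version that fixes an arbitrary tie-breaking rule.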

Comment author: orthonormal 10 April 2011 01:15:56AM 1 point [-]

From the context, I think what EY means is that the AI must be structured so that all changes to source code can be proved safe-to-the-goal-system before being implemented.

On the other hand, I'm not sure why EY calls that "deterministic" rather than using another adjective.

Comment author: arundelo 29 March 2011 07:35:55PM 1 point [-]

The hardware and the software. Think of a provably correct compiler.

The main relevant paragraph in this interview is the one in part 2 whose first sentence is "The catastrophic sort of error, the sort you can’t recover from, is an error in modifying your own source code."

Comment author: jimrandomh 29 March 2011 07:58:47PM 5 points [-]

Interesting fact: The recent paper Finding and Understanding Bugs in C Compilers found miscompilation bugs in all compilers tested except for one, CompCert, which was unique in that its optimizer was built on a machine-checked proof framework.

Comment author: Perplexed 29 March 2011 08:41:24PM 2 points [-]

Yes, but I don't see what relevance that paragraph has to his desire for 'determinism'. Unless he has somehow formed the impression that 'non-deterministic' means 'error-prone' or that it is impossible to formally prove correctness of non-deterministic algorithms. In fact, hardware designs are routinely proven correct (ironically, using modal logic) even though the hardware being vetted is massively non-deterministic internally.

Comment author: timtyler 29 March 2011 08:01:13PM 0 points [-]

Does the "Worse Than Random" essay help to explain?

Comment author: Perplexed 29 March 2011 08:28:04PM *  1 point [-]

Not at all. That essay simply says that non-deterministic algorithms don't perform better than deterministic ones (for some meanings of 'non-deterministic algorithms'). But the claim that needs to be explained is how determinism helps to prevent "making truly spectacular mistakes".

Comment author: timtyler 29 March 2011 09:16:49PM *  1 point [-]

Right. No doubt he is thinking he doesn't want a cosmic ray hitting his friendly algorithm and turning it into an unfriendly one. That calls for robustness - or error detection and correction. Determinism seems a reasonable approach to this, and it makes proving things about the results about as easy as possible.