
timtyler comments on What does the world look like, the day before FAI efforts succeed? - Less Wrong Discussion

23 points - Post author: michaelcurzi 16 November 2012 08:56PM




Comment author: timtyler 18 November 2012 11:24:55PM 2 points

We don't seem to agree. This isn't how technology gets built. Nobody proved the first aeroplane was safe. Nobody proved the first space rocket was safe. They weren't safe. No aeroplane or spaceship has ever been proven safe. You may be able to prove some things about the design process, but not safety - security doesn't work like that.

There is something called "provable security" in cryptography - but it doesn't really mean what its title says. It means that you can prove something relating to security in a particular model - not that you can prove something is safe.
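The one-time pad makes the point in miniature (a toy sketch, not real cryptography - the message strings and function name here are invented for illustration): perfect secrecy is provable, but only in a model where each key is uniformly random, secret, and used exactly once. Reuse the key - a failure the model never considers - and the "provably secure" scheme leaks the XOR of the plaintexts.

```python
import os

def otp_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """One-time pad: XOR the plaintext with a key of equal length.
    Perfect secrecy is a theorem -- but only inside a model where the
    key is uniform, secret, and never reused."""
    assert len(key) == len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

# Inside the model: a fresh random key for the message.
key = os.urandom(11)
c1 = otp_encrypt(key, b"attack dawn")

# Outside the model: the same key reused for a second message.
c2 = otp_encrypt(key, b"attack dusk")

# The key cancels: XOR of the ciphertexts equals XOR of the
# plaintexts, so an eavesdropper learns plaintext structure.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(b"attack dawn", b"attack dusk"))
```

The proof is real and the leak is real too; they just live at different levels, which is the gap between "provable security" and "proven safe".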

Comment author: loup-vaillant 19 November 2012 12:54:29AM 2 points

I made two assumptions here:

  1. On this day of the Great Launch, we have won. That's the whole point of the thread.
  2. If we didn't take the extreme precautions I mentioned, we would most certainly have lost. That's the Singularity Institute's Scary Idea, which I actually believe.

You, on the other hand, say that we will most certainly not take those drastic precautions. And you know the worst part? I agree. Which takes us back to square one: by default, we're doomed. (There. You've done it. Now I'm scared.)

Comment author: timtyler 19 November 2012 02:00:16AM 0 points

Evolution isn't really a win/lose game. Organisms succeed in perpetuating themselves - and the things they value - to varying degrees. Humans seem set to survive in the history books, but our overall influence on the future looks set to be quite marginal, thanks to the substantial pull of technological determinism - except in the unlikely case where we permanently destroy civilization. Of course we can still try - I just don't think our best shot looks much like what you described. It might be fun if we had time for all that provable-security work, but at the moment the situation looks a lot like a frantic race, and that sort of care looks like exactly the first thing to go up against the wall.

Comment author: loup-vaillant 19 November 2012 07:14:56AM 1 point

except in the unlikely case where we permanently destroy civilization.


the situation looks a lot like a frantic race

Are you saying that you don't buy the scary idea?

Comment author: timtyler 19 November 2012 11:37:35PM 0 points

I said I considered destroying "civilization" to be unlikely. Going by this:

progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.

...the scary idea claims to be about "the human race". I don't define "civilization" in a human-centric way - so I don't class those as being the same thing - for instance, I think that civilization might well continue after an "involuntary" robot takeover.

Comment author: loup-vaillant 20 November 2012 11:15:02AM 1 point

Well, a civilization with humanity all dead is pretty much certainly not what we want. I don't care if, in the grand scheme of things, this isn't a win/lose game. I think I have something like a utility function, and I want it maximized, period.

Back to my question: do you see any other path to building a future we want than the one I described?

Comment author: timtyler 20 November 2012 11:39:16PM 1 point

Well, a civilization with humanity all dead is pretty much certainly not what we want.

Well, humans will live on via historical simulations, with pretty good probability. Humans won't remain the dominant species, though. Those hoping for that have unrealistic expectations. Machines won't remain human tools; they are likely to be in charge.

I think I have something like a utility function, and I want it maximized, period.

Sure, but it's you and billions of other organisms - with often-conflicting futures in mind - and so most won't have things their way.

do you see any other path to building a future we want than the one I described?

IIRC, your proposal put considerable emphasis on proof. We'll prove what we can, but proof often lags far behind the leading edge of computer science. There are many other approaches to building mission-critical systems incrementally - I expect we will make more use of those.

Comment author: loup-vaillant 21 November 2012 08:12:13AM 0 points

Historical simulations: assuming they preserve identity and so on, why not…

Utility function: I know that my chances of maximizing my utility function are quite… slim, to say the least.

Path to the best future for humanity: proofs don't lag so far behind right now. Modern type systems are pretty good, and we have proof assistants that make the "prove your whole program" approach quite feasible - though not cheap yet. Plus, the leading edge is generally the easiest to prove, because it tends to rest on solid mathematical ground. We skip proofs mainly because they're expensive, and because we build on ancient technologies that leak low-level details everywhere, which makes proofs much harder. (I program for a living.)
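To give a flavour of what proof assistants make routine today (a minimal Lean 4 sketch of my own, not from the thread - the names `rev`, `rev_append`, and `rev_rev` are invented for the example): a hand-rolled list reversal together with a machine-checked proof that reversing twice is the identity. The checker rejects the file unless every step of the argument is airtight.

```lean
-- A toy "verified program": list reversal, plus a machine-checked
-- proof that reversing twice gives back the original list.
def rev : List α → List α
  | []      => []
  | x :: xs => rev xs ++ [x]

-- Helper lemma: reversal distributes over append, in reverse order.
theorem rev_append (xs ys : List α) :
    rev (xs ++ ys) = rev ys ++ rev xs := by
  induction xs with
  | nil => simp [rev]
  | cons x xs ih => simp [rev, ih]

-- The main result: rev is an involution.
theorem rev_rev (xs : List α) : rev (rev xs) = xs := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [rev, rev_append, ih]
```

Scaling this from a ten-line function to a whole system is the expensive part - but it is engineering expense, not a gap in what the tools can express.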

But I see at least the possibility of a slightly different path: still take precautions, just don't prove the thing.

Oh, and I forgot: if we solve safety before capability, incrementally designing the AI by trial and error would be quite reasonable. A definite milestone will be harder to define in that case, though. I guess I'll have to update a bit.