timtyler comments on Reason as memetic immune disorder - Less Wrong

215 Post author: PhilGoetz 19 September 2009 09:05PM




Comment author: Eliezer_Yudkowsky 20 September 2009 03:01:01AM 2 points

the only safe AI would be a logic system using a consistent logic, so that we could verify that certain undesirable statements were false in that system

Could be correct or wildly incorrect, depending on exactly what he meant by it. Of course you have to delete "the only", but I'd be pretty doubtful of any humans trying to do recursive self-modification in a way that didn't involve logical proof of correctness to start with.

Comment author: timtyler 20 September 2009 07:52:35AM 1 point

Companies are the self-improving systems of today - e.g. see Google.

They don't hack the human brain much - but they don't need to. Brains are not perfect - but they can have their inputs preprocessed, their outputs post-processed, and they can be replaced entirely by computers - via the well-known process of automation.

Do the folk at Google proceed without logical proofs? Of course they do! Only the slowest and most tentative programmer tries to prove the correctness of their programs before deploying them. Instead, most programmers extensively employ testing methodologies. Testing is the mantra of modern programming. Test, test, test! That way they get their products to market before the sun explodes.
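The testing-first philosophy described above can be sketched in a few lines. This is an illustrative example only (the `clamp` function and its tests are hypothetical, not anything from Google): rather than proving a property for all inputs, the programmer checks behaviour on a handful of representative cases and ships.

```python
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_clamp():
    # Testing approach: spot-check representative cases
    # rather than proving correctness for every input.
    assert clamp(5, 0, 10) == 5      # value already inside the range
    assert clamp(-3, 0, 10) == 0     # below range -> clipped to low
    assert clamp(42, 0, 10) == 10    # above range -> clipped to high
    assert clamp(7, 7, 7) == 7       # degenerate one-point range

test_clamp()
```

The trade-off is exactly the one under dispute in this thread: the tests give quick confidence on the cases someone thought to write down, while a proof (or a property-based check over all inputs) would be needed to rule out surprises elsewhere.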

Comment author: Technologos 20 September 2009 08:08:24PM 0 points

As Eliezer has already shown, "test, test, test"ing AIs that aren't provably Friendly (that is, AIs whose recursive self-modification is not proven to lead to Friendly results) can have disastrous consequences.

I'd rather wait until the sun explodes than deploy an unFriendly AI by accident.

Comment author: timtyler 20 September 2009 08:52:27PM 2 points

The consequences of failing to adopt rapid development techniques when building intelligent machines should be pretty obvious: the effect is to pass the baton to another team with a different development philosophy.

Waiting until the sun explodes is not one of the realistic options.

The box experiments seem irrelevant to the case of testing machine intelligence. When testing prototypes in a harness, you would use powerful restraints - not human gatekeepers.

Comment author: Technologos 21 September 2009 02:30:28AM 3 points

What powerful restraints would you suggest that would not require human judgment or human-designed decision algorithms to remove?

Comment author: RichardKennaway 21 September 2009 09:09:22AM 8 points

Turn it off, encase it in nanofabricated diamond, and bury it in a deep pit. Destroy the experimental records, retaining only enough information to help future, wiser generations to one day take up again the challenge of building a Friendly AI. Scatter the knowledge in fragments, hidden in durable artifacts, scatter even the knowledge of how to find the knowledge likewise, and arrange a secret brotherhood to pass down through the centuries the ultimate keys to the Book That Does Not Permit Itself To Be Read.

Tens of thousands of years later, when civilisation has (alas) fallen and risen several times over, a collect-all-the-plot-coupons fantasy novel takes place.