XiXiDu comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM




Comment author: gwern 13 March 2012 07:46:32PM, 28 points

P1 Fast, and therefore dangerous, recursive self-improvement is logically possible.

All your counter-arguments are enthymematic; as far as I can tell, you are actually arguing against a proposition which looks more like

P1 Recursive self-improvement of arbitrary programs towards unalterable goals is possible with very small constant factors and P or better general asymptotic complexity

I would find your enthymemes far more convincing if you explained why things like Gödel machines are either fallacious or irrelevant.

P1.b The fast computation of a simple algorithm is sufficient to outsmart and overpower humanity.

Your argument is basically an argument from fiction; it's funny that you chose the example of the Roman Empire, when Reddit recently spawned a novel arguing that a single Marine Expeditionary Unit (surely less dangerous than your 100) could do just that. I will note in passing that black powder's formulation is so simple and famous that even I, who prefer archery, know it: saltpeter, charcoal, and sulfur. I know for certain that the latter two were available in the Roman Empire, and suspect the former would not have been hard to get. EDIT: and this same day, a Mafia-related paper I was reading for entertainment mentioned that Sicily - one of the oldest Roman possessions - was one of the largest global exporters of sulfur in the 18th/19th centuries. So that ingredient is covered, in spades!

Consider that it takes a whole technological civilization to produce a modern smartphone.

A civilization which exists and is there for the taking.

If you were going to speed up a chimp brain a million times, would it quickly reach human-level intelligence? If not, why then would it be different for a human-level intelligence trying to reach transhuman intelligence?

Chimp brains have not improved themselves at all, let alone to the point of building computers. There is an obvious disanalogy here...

And to do so efficiently it takes random mutation, a whole society of minds

All of which are available to a 'simple algorithm'. Artificial life was first explored by von Neumann himself!

An AI with simple values will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans does pursue.

Are you serious? Are you seriously claiming this? Dead-simple chess and Go algorithms routinely turn out fascinating moves. Genetic algorithms are renowned for producing results which are bizarre and inhuman and creative. Have you never read about the famous evolved circuit which has disconnected parts but won't function without them?
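The genetic-algorithm claim is easy to check mechanically: even a toy GA, given nothing but a fitness function, finds solutions by a process its author never scripted. A minimal sketch in Python (all names and parameters are illustrative, not drawn from any particular system):

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100, rate=0.05):
    """Toy genetic algorithm over bitstrings: tournament selection,
    single-point crossover, per-bit mutation. Only the fitness function
    encodes the goal -- the search itself contains no notion of 'how'."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Size-2 tournament: the fitter of two random individuals wins.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < rate)    # occasional bit flips
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximize the count of 1s -- a 'simple value' with no creativity in it.
best = evolve(lambda ind: sum(ind))
print(sum(best))  # typically at or near the maximum of 20
```

The point is not the toy problem but the mechanism: mutation and a "society" of competing candidates are exactly what the quoted 'simple algorithm' supplies for itself.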

What is this bullshit 'computers can't exhibit creativity' doing here? Searle, why did you steal XiXiDu's account and post this?

Yet even if we assume that there is one complete theory of general intelligence, once discovered, one just has to throw more resources at it. It might be able to incorporate all human knowledge, adapt it and find new patterns. But would it really be vastly superior to human society and their expert systems?

'I may be completely wrong, but hey, I can still ask rhetorically whether I'm not actually right!'

P3 Fast, and therefore dangerous, recursive self-improvement is economically feasible.

This implies P2.

So if the AI can do that, why wouldn't humans be able to use the same algorithms to predict what the initial AI is going to do? And if the AI can't do that, how is it going to maximize expected utility if it is unable to predict what it is going to do?

Why can't I predict the next move of my own chess algorithm? And why should there exist an algorithm that predicts the AI algorithm while being simpler and faster than the AI algorithm itself?
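The chess point can be made concrete: a search algorithm's move is fixed by an exponentially large game tree, and in general the only way to learn which move it will pick is to redo the same search. A toy sketch, using simple Nim in place of chess (the game and function names are hypothetical illustrations):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(stones):
    """Perfect play for toy Nim: players alternately take 1-3 stones;
    whoever takes the last stone wins. Returns (score, best_take) for
    the side to move: +1 = forced win, -1 = forced loss."""
    if stones == 0:
        return (-1, None)          # opponent just took the last stone: we lost
    best = (-2, None)
    for take in (1, 2, 3):
        if take <= stones:
            score = -negamax(stones - take)[0]   # opponent's score, negated
            if score > best[0]:
                best = (score, take)
    return best

# 'Predicting' what the algorithm will play from 21 stones means re-running
# the very same recursion -- there is no cheaper general-purpose oracle.
score, move = negamax(21)
```

Here `negamax(21)` happens to have a closed-form shortcut (multiples of 4 lose), but that is special structure of Nim; a chess search, let alone an arbitrary AI, need not admit any predictor cheaper than itself.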

A plan for world domination seems like something that can't be concealed from its creators. Lying is no option if your algorithms are open to inspection.

This is just naive. Source code can be available and either the maliciousness not be obvious (see the Underhanded C Contest) or not prove what you think it proves (see Reflections on Trusting Trust, just for starters). And that assumes you are even inspecting all the existing code, rather than a stub left behind to look like an AI.
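The Underhanded C Contest point translates into any language: fully open source code can still behave differently from how it reads. A hypothetical Python illustration in that spirit (not an actual contest entry):

```python
def is_authorized(user, allowed=("alice", "bob")):
    """Whitelist check -- reads as obviously correct."""
    return user in allowed

def is_authorized_v2(user, allowed="alice,bob"):
    """A 'harmless' refactor of the whitelist into one string. It still
    reads as a whitelist check, but 'in' on a string does substring
    matching, so arbitrary fragments now pass."""
    return user in allowed

assert not is_authorized("li")   # tuple membership: rejected, as intended
assert is_authorized_v2("li")    # substring of "alice": quietly authorized
assert is_authorized_v2(",")     # even the comma is 'authorized'
```

A reviewer scanning `is_authorized_v2` sees the same `user in allowed` expression that was correct a moment earlier; the inspection XiXiDu relies on proves nothing.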

Therefore the probability of an AI undergoing explosive recursive self-improvement, P(FOOM), is the probability of the conjunction (P1 ∧ P2 ∧ … ∧ Pn) of its premises:

No. Not all the premises are necessary, so a conjunction is inappropriate and establishes a lower bound, at best.
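The correction is elementary probability: a conjunction of premises gives the probability of one particular route to the outcome, so if any premise is dispensable it can only be a lower bound. A toy calculation with entirely made-up numbers:

```python
# All probabilities here are invented purely to show the structure.
p1, p2, p3 = 0.5, 0.4, 0.6

# If the premises were jointly necessary (and independent), we would get:
conjunction = p1 * p2 * p3          # = 0.12

# But if some premise is not actually necessary -- the outcome can also
# arrive by a route that bypasses it -- the conjunction undercounts.
# Say that alternative route has probability 0.1, disjoint for simplicity:
p_bypass = 0.1
p_outcome = conjunction + p_bypass  # = 0.22, strictly above the conjunction

assert p_outcome >= conjunction     # the conjunction is a lower bound at best
```

Multiplying premises together is only valid when every premise is necessary and the premises are independent; drop either condition and the product stops being an estimate of P(FOOM).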

I'm going to stop here. This might have been a useful exercise if you were trying to establish solely necessary premises, in the same vein as Chalmers's paper or a Drake equation-style examination of cryonics, but you're not doing that.