CarlShulman comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM, 23 points


Comment author: XiXiDu 14 August 2010 07:48:25PM, 3 points

...and "FOOM" means way the hell smarter than anything else around...

Questionable. Is smarter-than-human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness, we have no evidence to that effect.

Not, "ooh, it's a little Einstein but it doesn't have any robot hands, how cute".

Questionable. How is an encapsulated AI going to gain this kind of control without already-existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account, etc. (a long chain of assumptions), but how is it going to make use of the things it orders?

Optimizing yourself is a special case, but it's one we're about to spend a lot of time talking about.

I believe that self-optimization is likely to be very limited. Changing anything substantial might lead Gandhi to swallow the pill that will make him want to hurt people, so to speak.

...humans developed the idea of science, and then applied the idea of science...

Sound argumentation, but it gives no justification for extrapolating it to the point where you could apply it to the shaky idea of a superhuman intellect coming up with something better than science, and then applying that again to come up with something better still...

In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it's often possible to distinguish cognitive algorithms and cognitive content.

All those ideas about the possible advantages of being an entity that can reflect upon itself, to the extent of being able to pinpoint its own shortcomings, are again highly speculative. This could just as well be a disadvantage.

Much of the rest is about the plateau argument: once you've got fireworks, you can go to the moon. Well, yes, I'm aware of that argument. But it's weak; the claim that there are many hidden mysteries about reality that we have so far completely missed is itself highly speculative. I think even EY admits that whatever happens, quantum mechanics will be a part of it. Is the AI going to invent FTL travel? I doubt it, and the argument already rests on the assumption that superhuman intelligence, not just faster intelligence, is possible.

Insights are items of knowledge that tremendously decrease the cost of solving a wide range of problems.

Like the discovery that P ≠ NP? Oh wait, that would be limiting. This argument runs in both directions.

If you go to a sufficiently sophisticated AI - more sophisticated than any that currently exists...

Assumption.

But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.

Nice idea, but recursion does not imply performance improvement.
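The point that recursion by itself does not imply performance improvement can be illustrated with a toy model (the numbers and the difficulty schedule here are purely hypothetical, chosen only to show the shape of the argument): if each self-rewrite is harder to find than the last, total capability converges to a plateau; if each rewrite makes the next one easier, the same recursion explodes. The recursive structure is identical in both cases.

```python
# Toy model of recursive self-optimization (hypothetical, for illustration only).
# Each rewrite adds some capability, and the size of the *next* improvement is
# scaled by a constant factor. Whether the process plateaus or explodes depends
# entirely on that factor, not on the recursion itself.

def self_optimize(steps, gain_ratio):
    """Simulate `steps` rounds of self-rewriting.

    gain_ratio < 1: diminishing returns -> capability converges to a plateau.
    gain_ratio > 1: accelerating returns -> capability grows explosively.
    """
    capability = 1.0
    gain = 1.0  # improvement delivered by the first rewrite
    for _ in range(steps):
        capability += gain
        gain *= gain_ratio  # next improvement is easier or harder to find
    return capability

# Diminishing returns: the geometric series converges to 1 + 1/(1 - 0.5) = 3.0,
# no matter how many further rewrites are performed.
print(self_optimize(100, gain_ratio=0.5))

# Accelerating returns: the same recursion explodes after only 20 steps.
print(self_optimize(20, gain_ratio=1.5))
```

Both runs use the same self-rewriting loop; only the assumed difficulty schedule differs, and which schedule reality imposes is precisely the open empirical question.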

You can't draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems.

How can he then assume, given this insight, that it is possible to improve them recursively to an extent that empowers an AI to transcend into superhuman realms?

Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.

Did he just attribute intention to natural selection?

Comment author: CarlShulman 15 August 2010 09:32:28AM, 6 points

Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions),

Any specific scenario is going to have burdensome details, but that's what you get if you ask for specific scenarios rather than general pressures, unless one spends a lot of time going through detailed possibilities and vulnerabilities. With respect to the specific example, regular human criminals routinely swindle or earn money anonymously online, and hack into and control millions of computers in botnets. Cloud computing resources can be rented with ill-gotten money.

but how is it going to make use of the things it orders?

In the unlikely event of a powerful human-indifferent AI appearing in the present day, a smartphone held by a human could provide sensors and communication, letting the AI use humans as manipulators (as computer programs direct the movements of some warehouse workers today). Humans can be paid, blackmailed, or deceived into performing some tasks (intelligence agencies regularly do all of these things). An AI that leverages initial capabilities could jury-rig a computer-controlled method of coercion [e.g. a cheap robot arm holding a gun, a tampered-with electronic drug-dispensing implant, etc]. And as time goes by and the cumulative probability of advanced AI becomes larger, increasing quantities of robotic vehicles and devices will be available.

Comment author: XiXiDu 15 August 2010 09:55:03AM, 1 point

Thanks, yes, I know about those arguments. They are among the reasons I'm actually donating and accept AI as an existential risk. I'm inquiring about further supporting documents and transparency. More on that here; in particular, check the particle collider analogy.

Comment author: CarlShulman 15 August 2010 10:14:27AM, 1 point

With respect to transparency, I agree about a lack of concise, exhaustive, accessible treatments. Reading some of the linked comments about marginal evidence from hypotheses, I'm not quite sure what you mean, beyond remembering and multiplying by the probability that particular premises are false. Consider Hanson's "Economic Growth Given Machine Intelligence". One might support it with generalizations from past population growth in plants and animals, or from data on capital investment, past market behavior, and automation; but what, in your view, would license drawing probabilistic inferences using it?

Comment author: Unknowns 17 August 2010 06:49:03AM, 0 points

Note that such methods might not result in the destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI, according to Eliezer).

Comment author: CarlShulman 17 August 2010 10:41:16AM, 2 points

destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI according to Eliezer.)

What guarantee?

Comment author: Unknowns 17 August 2010 11:48:27AM, -1 points

With a guarantee backed by $1000.

Comment author: CarlShulman 17 August 2010 12:33:16PM, 2 points

The linked bet doesn't reference "a week," and the "week" reference in the main linked post is about going from infrahuman to superhuman, not using that intelligence to destroy humanity.

That bet seems underspecified. Does attention to "Friendliness" mean any attention to safety whatsoever, or designing an AI with a utility function such that it's trustworthy regardless of power levels? Is "superhuman" defined relative to the then-current level of human (or upload, or trustworthy less intelligent AI) capacity with any enhancements (or upload speedups, etc)? What level of ability counts as superhuman? You two should publicly clarify the terms.

Comment author: Unknowns 17 August 2010 12:42:49PM, 0 points

A few comments later on the same thread, someone asked me how much time was necessary, and I said I thought a week was enough, based on Eliezer's previous statements. He never contradicted this, so it seems to me that he accepted it by default, since some time limit is necessary for someone to win the bet.

I defined "superhuman" to mean that everyone would agree it is more intelligent than any human being existing at that time.

I agree that the question of whether there has been attention to Friendliness might be harder to determine. But "any attention to safety whatsoever" seems to me to be clearly stretching the idea of Friendliness; for example, someone could pay attention to safety by trying to make sure that the AI was mostly boxed, or whatever, and this wouldn't satisfy Eliezer's idea of Friendliness.

Comment author: CarlShulman 17 August 2010 12:48:55PM, 1 point

Ah. So an AI could, e.g., be only slightly superhuman and require immense quantities of hardware to generate that performance in real time.

Comment author: Unknowns 17 August 2010 01:03:48PM, 0 points

Right. And if this scenario happened, there would be a good chance that it would not be able to foom, or at least not within a week. Eliezer's opinion seems to be that this scenario is extremely unlikely; in other words, that the first AI will already be far more intelligent than the human race, and that even if it is running on an immense amount of hardware, it will have no need to acquire more, because it will be able to construct nanotechnology capable of controlling the planet through actions originating on the Internet, as you suggest. And as you can see, he is very confident that all this will happen within a very short period of time.