Comment author: Armarren 11 June 2012 08:48:02AM 2 points [-]

Just because software is built line by line doesn't mean it automatically does exactly what you want. In addition to outright bugs, any complex system will have unpredictable behaviour, especially when exposed to real-world data. Just because the system can restrict the search space sufficiently to achieve an objective doesn't mean it will restrict itself only to the parts of the solution space the programmer wants. The basic purpose of the Friendly AI project is to formalize the human value system sufficiently that it can be included in the specification of such a restriction. The argument made by SI is that there is a significant risk that a self-improving AI could increase in power so rapidly that, unless such a restriction is included from the outset, it might destroy humanity.

Comment author: roll 20 June 2012 05:49:16PM *  -1 points [-]

Just because it doesn't do exactly what you want doesn't mean it is going to fail in some utterly spectacular way.

You aren't searching for solutions to a real-world problem, you are searching for solutions to a model (ultimately, for solutions to systems of equations), and not only do you have a limited solution space, you don't model anything irrelevant. Furthermore, the search space is not 2-dimensional, not 3-dimensional, not even 100-dimensional; the volume increases very rapidly with dimension. And the predictions of many systems are fundamentally limited by the Lyapunov exponent. I suggest you stop thinking in terms of concepts like 'improve'.
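
A minimal sketch of the Lyapunov point (using the logistic map as a stand-in for any chaotic system; the starting point and error size are illustrative assumptions): an initial measurement error of 10^-12 grows to the size of the whole state space within a few dozen steps, and no amount of computing power recovers the prediction.

```python
def logistic(x):
    # fully chaotic regime (r = 4); the Lyapunov exponent is ln(2),
    # so a small error roughly doubles on every step
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # two states differing by a tiny measurement error
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.5:  # error now spans the whole state space [0, 1]
        print(f"prediction lost after {step} steps")
        break
```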

If something self-improves at the software level, it'll be a piece of software created with a very well-defined model of changes to itself, and the self-improvement itself will be concerned with cutting down the solution space and cutting down the model. If something self-improves at the hardware level, likewise for the model of physics. Everyone wants an artificial Rain Man. Autism is what you get from all sorts of random variations on the baseline human brain; it looks like the kind of general intelligence that expands its model, rather than focusing intensely, is a tiny spot in the design space. I don't see why we should expect general intelligence to suddenly overtake specialized intelligences; the specialized intelligences have better people working on them, have the funding, and the specialization massively improves efficiency; superhuman specialized intelligences require less hardware power.

Comment author: Armarren 10 June 2012 06:18:40PM 3 points [-]

Long before you have to worry about the software finding an unintended way to achieve the objective, you encounter the problem of software not finding any way to achieve the objective

Well, obviously, since that is pretty much the problem we have now. The whole point of Friendly AI as formulated by SI is that you have to solve the former problem before the latter is solved, because once the software can achieve any serious objectives it will likely cause enormous damage on its way there.

Comment author: roll 11 June 2012 08:07:37AM *  0 points [-]

Well, if that's the whole point, SI should dissolve today (it shouldn't even have formed in the first place). The software is not magic; "once the software can achieve any serious objectives" is precisely when we know how to restrict the search space, and it won't happen via mere hardware improvement. We don't start with a philosophically ideal psychopathic 'mind', infinitely smart, and carve a friendly mind out of it. We build our sculpture grain by grain, using glue.

Comment author: roll 10 June 2012 05:52:27PM *  3 points [-]

I think the bigger issue is the collapsing of the notion of 'incredibly useful software that would be able to self-improve and solve engineering problems' into the philosophical notion of mind. The philosophical problem of how we make an artificial mind not think about killing mankind may not be solvable over the philosophical notion of mind, and the solutions may be useless. Practically, however, it is a trivial part of the much bigger problem of 'how do we make the software not explore the useless parts of the solution space'; it's not the killing of mankind that is problematic, but the fact that even on a Jupiter-sized computer, brute-force solutions that explore such big and ill-defined solution spaces would be useless. Long before you have to worry about the software finding an unintended way to achieve the objective, you encounter the problem of the software not finding any way to achieve the objective, because it was looking in a space more than 10^1000 times larger than it could search. 'Artificial intelligence', as in useful software that does tasks we regard as intelligent, is a much broader and more diverse concept than the philosophical notion of mind.
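
For a sense of scale, a rough back-of-envelope sketch (every number here is an illustrative assumption, not a measurement):

```python
import math

# Assumed design problem: 1000 free parameters, sampled on an extremely
# coarse grid of 10 values per axis -> 10^1000 candidate solutions.
dimensions = 1000
samples_per_axis = 10
log10_candidates = dimensions * math.log10(samples_per_axis)

# Assumed (generous) budget: a computer doing 10^50 operations per second
# for the entire age of the universe (~4 * 10^17 seconds).
log10_budget = math.log10(1e50) + math.log10(4e17)

# shortfall: ~10^932 evaluations, even under these absurd assumptions
print(f"shortfall: ~10^{log10_candidates - log10_budget:.0f}")
```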

Comment author: roll 08 June 2012 09:12:03AM 0 points [-]

What does Solomonoff Induction actually say?

I believe this one was closed ages ago by Alan Turing, and demonstrated in practice for approximations by the investigation into the busy beaver function, for example. We won't be able to know BB(10) even from God almighty. Ever.
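
A minimal counting sketch of why (assuming the standard 2-symbol formalism, where each transition-table entry chooses a symbol to write, a move direction, and a next state or halt): finding BB(10) means deciding halting for every 10-state machine, and Turing's result guarantees there is no general procedure for doing that.

```python
def num_machines(states, symbols=2):
    # each of the states*symbols table entries picks: a symbol to write,
    # a direction (L/R), and a next state or an explicit halt state
    options_per_entry = symbols * 2 * (states + 1)
    return options_per_entry ** (states * symbols)

print(f"{num_machines(10):.2e}")  # ~7.40e+32 candidate machines
```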

Comment author: NancyLebovitz 17 May 2012 03:56:17AM 3 points [-]

I think there's a difference between falling prey to one of the usual biases and just not having enough information.

Comment author: roll 21 May 2012 12:40:17PM 0 points [-]

Of course, but one can lack information and conclude "okay, I don't have enough information", or one may fail to arrive at that conclusion due to overconfidence (for example).

Comment author: roll 21 May 2012 12:27:31PM 0 points [-]

That's an interesting question. Eliezer has said on multiple occasions that most AI researchers now are lunatics, and he is probably correct; but how would an outsider distinguish a Friendly AI team from the rest? Concern with safety, by itself, is a poor indicator of sanity; many insane people are obsessed with the safety of foods, medications, and air travel, with safety from the government, and so on.

Comment author: DanielLC 20 May 2012 11:32:31PM 0 points [-]

You were talking about instrumental values? I thought you were talking about terminal values.

Comment author: roll 21 May 2012 12:14:51PM *  0 points [-]

Well, 'the value of life', lacking specifiers, should be able to refer to the total value of life (as derived from other goals, plus intrinsic value if any); my post is rather explicit that it speaks of the total. Of course you can take 'value life' to mean only the intrinsic value of life, but it is pretty clear that is not what the OP meant, if we assume the OP is not entirely stupid. He is correct in the sense that the full value of life is affected by rationality. A rational person should commit suicide only in the very few circumstances where it truly results in maximum utility, given the other values left unaccomplished if you are dead (e.g. so that your children can cook and eat your body, or, as in "28 Days Later", killing yourself in the ten seconds after infection to avoid becoming a hazard, that kind of stuff). It can be said that an irrational person can't value life correctly (due to incorrect propagation).

Comment author: Vladimir_Nesov 19 May 2012 10:04:07PM 17 points [-]

This calls for a link to simulated annealing, an optimization heuristic. Here, the initial sampling is the "provocation" and the jumps later in the cooling process are the "movement".
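
A minimal simulated-annealing sketch (the objective function, starting temperature, and cooling rate are arbitrary illustrative choices): the early high-temperature proposals do the "provocation", and the occasional accepted uphill jumps during cooling are the "movement".

```python
import math
import random

def anneal(objective, x, temp=10.0, cooling=0.999, steps=20000):
    best = x
    for _ in range(steps):
        candidate = x + random.gauss(0.0, 1.0)  # propose a random jump
        delta = objective(candidate) - objective(x)
        # always accept improvements; accept worsening jumps with
        # probability exp(-delta/temp), which shrinks as temp cools
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if objective(x) < objective(best):
                best = x
        temp *= cooling
    return best

# multimodal test objective; global minimum near x = -0.5
print(anneal(lambda x: x * x + 10 * math.sin(3 * x), x=8.0))
```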

Comment author: roll 20 May 2012 08:35:15AM *  0 points [-]

This raises an interesting off-topic question: does 'intelligence' itself confer a significant advantage over such methods (which can certainly be implemented without anything resembling an agent's real-world utility)?

We are transitioning to being bottlenecked (in our technological progress, at least) by optimization software implementing such methods, rather than by our intelligence (that is partly how exponential growth is sustained despite constant human intelligence); if an AI can't do much better than our brainstorming, it probably won't have the upper hand over dedicated optimization software.

Comment author: roll 20 May 2012 08:18:56AM *  0 points [-]

I think something is missing here. Suppose that water has some unknown property Y that may allow us to do Z. This very statement requires that 'water' somehow refer to an object in the real world, so that we would be interested in experimenting with water in the real world instead of doing some introspection into our internal notion of 'water'. We want our internal model of water to match something that is only fully defined externally.

Another example: if water is the only liquid we know, we may have conflated the notions of 'liquid' and 'water', but as we explore the properties of 'liquid/water' we find it necessary to add more references to the external world: water, alcohol, salt water, liquid... Those are in our heads, but they did not pop into existence out of nothing (unless you are a solipsist).

Comment author: DanielLC 20 May 2012 07:18:56AM 4 points [-]

How does a purely rational mind feel about the inevitable over-population issue that will occur if more and more lives are saved and/or extended by technology?

Overpopulation isn't caused by technology. It's caused by having too many kids and not using resources well enough. Technology has drastically increased our efficiency with resources, allowing us to easily grow enough food to feed everyone.

Does a purely rational mind value life less or more?

The utility function is not up for grabs. Specifying that a mind is rational does not specify how much it values life.

I was answering based on the idea that these are altruistic people. I really don't know what would happen in a society full of rational egoists.

In other words, pure rationality is cold and mathematical and would consider compassion a weakness. While this may be true...

It isn't.

Comment author: roll 20 May 2012 08:13:21AM *  -1 points [-]

Does a purely rational mind value life less or more?

Specifying that a mind is rational does not specify how much it values life.

That is correct, but it is also probably the case that a rational mind would propagate better from its other values to the value of its own life. For instance, if your arm were trapped under a boulder, a human as-is would either be unable to cut off his own arm, or would do it at a suboptimal time (too late), compared to an agent that can propagate everything it values in the world into the value of its own life, and have that huge value win against the pain. Furthermore, such an agent would correctly propagate the pain it will suffer later (assuming it knows it will eventually have to cut off the arm) into the decision now. So it would act as if it valued life more and pain less.
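
A toy expected-utility sketch of the boulder example (all numbers are made-up illustrations): an agent that propagates its other values into the value of staying alive cuts early, while an agent swayed by immediate pain waits past the point where waiting is survivable.

```python
pain_of_cutting = -80          # immediate disutility of cutting off the arm
value_of_future_goals = 1000   # everything else the agent values, summed
p_survive_cut_now = 0.99
p_survive_cut_late = 0.30      # after too long trapped under the boulder

def expected_utility(p_survive):
    return pain_of_cutting + p_survive * value_of_future_goals

print(expected_utility(p_survive_cut_now))   # 910.0 -> cut now
print(expected_utility(p_survive_cut_late))  # 220.0 -> waiting loses utility
```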
