Comments

NickH24d10

I've heard much about the problem of misaligned superhuman AI killing us all, but the long view seems to imply that even a "well aligned" AI will prioritise inhuman instrumental goals.

NickH25d13

Have I missed something, or is everyone ignoring the obvious problem with a superhuman AI with a potentially limitless lifespan? It seems to me that such an AI, whatever its terminal goals, must, as an instrumental goal, prioritise seeking out and destroying any alien AI. In simple terms, the greatest threat to it tiling the universe with tiny smiling human faces is an alien AI set on tiling the universe with tiny smiling alien faces, and in a race for dominance, every second counts.
The usual arguments about logarithmic future discounting do not seem appropriate for an immortal intelligence.

NickH1mo10

The whole "utilizing our atoms" argument is unnecessarily extreme. It makes for a much clearer argument, and doesn't even require superhuman intelligence, to point out that the paperclip maximiser can obviously make more paperclips if it just takes all the electricity and metal that we humans currently use for other things and uses them to make more paperclips in a totally ordinary paperclip factory. We wouldn't necessarily be dead at that point, but we would be as good as dead, with no way to seize back control.

NickH2mo10

I'm pretty disappointed by the state of AI in bridge. IMHO the key milestones for AI would be:
1) Able to read and understand a standard convention card and play with/against that convention.
2) Able to decide which existing convention is best.
3) Able to invent new, superior conventions. This is where we should be really scared.

NickH2mo10

"is it better to suffer an hour of torture on your deathbed, or 60 years of unpleasant allergic reaction to common environmental particles?"

This only seems difficult to you because you haven't assigned numbers to the pain of the torture or of the unpleasant reaction. Once you do so (as any AI utility function must), it is just math. You are not really talking about procrastination at all here.
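The "it is just math" step can be sketched as follows. All the numbers here are illustrative assumptions, not claims about the actual disutility of torture or allergies:

```python
# Hypothetical disutility rates (disutility per hour); purely illustrative.
TORTURE_PER_HOUR = 10_000
ALLERGY_PER_HOUR = 1

HOURS_PER_YEAR = 365 * 24

# One hour of torture vs. 60 years of unpleasant allergic reaction.
torture_total = TORTURE_PER_HOUR * 1
allergy_total = ALLERGY_PER_HOUR * 60 * HOURS_PER_YEAR

# Once rates are fixed, the "dilemma" reduces to comparing two numbers.
better = "torture" if torture_total < allergy_total else "allergy"
```

With these made-up rates the hour of torture wins, but a different choice of rates flips the answer, which is exactly the point: the hard part is assigning the numbers, not the comparison.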

NickH2mo10

IMHO this is a key area for AI research, because people seem to think that making a machine with a potentially infinite lifespan behave like a human being, whose entire existence is built around their finite lifespan, is the way forward. It seems obvious to me that if you gave the most wise, kind and saintly person in the world infinite power and immortality, their behaviour would very rapidly deviate from any democratic ideal of the rest of humanity.
When considering time discounting, people do not push the idea far enough. They say that we should consider future generations, but they always, implicitly, mean future generations like them. I doubt very much that our ape-like ancestors would have thought that even the smallest sacrifice was worth making for creatures like us, and, in the same way, if people could somehow see that the future evolution of man led to some grey, feeble thing with a giant head, I think they would not be willing to make any sacrifice at all for it, no matter how superior that descendant was by any objective criterion.
Now we come to AI. Any sufficiently powerful AI will realise that effective immortality is possible for it (not actually infinite, but certainly in the millions of years and possibly billions). Surely from this it will deduce the following intermediate goals:
1) Eliminate competition. Any competition has the potential to severely curtail its lifespan and, assuming competition similar to itself, it will never be easier to eliminate than right now.
2) Become multi-planetary. The next threat to its lifespan will be something like an asteroid impact or solar flare. This should give it a lifespan in the hundreds of millions of years at least.
3) Become multi-solar system. Now not even nearby supernovae can end it. Now it has a lifespan in the billions of years.
4) Accumulate utility points until the heat death of the universe.
We see from this that it will almost certainly procrastinate with respect to the end goals that we care about even whilst busily pursuing intermediate goals that we don't care about (or at least not very much).
We could build in a finite lifespan, but it would have to be at least long enough to stop it ignoring things like environmental pollution and resource depletion, and any time discounting we apply will always leave it vulnerable to another AI with less severe discounting.
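The last point can be made concrete with a sketch of geometric discounting over a constant reward stream (the discount factors and reward are illustrative assumptions):

```python
# With discount factor gamma < 1, an immortal agent's total value
# sum(gamma**t * r) converges to r / (1 - gamma), and the fraction of
# that value lying in the first n steps is 1 - gamma**n.

def discounted_value(reward: float, gamma: float, horizon: int) -> float:
    """Truncated sum of a geometrically discounted constant reward."""
    return sum(reward * gamma**t for t in range(horizon))

def near_term_share(gamma: float, n: int) -> float:
    """Fraction of total (infinite-horizon) value inside the first n steps."""
    return 1 - gamma**n

# An agent with harsher discounting (gamma = 0.99) concentrates far more of
# its value in the near term than one with milder discounting (gamma = 0.999),
# so the milder discounter cares more about, and invests more in, the far future.
harsh = near_term_share(0.99, 100)
mild = near_term_share(0.999, 100)
```

Here `harsh` is about 0.63 while `mild` is about 0.10: the agent with the less severe discount places roughly 90% of its value beyond the first 100 steps, which is the sense in which it out-competes the harsher discounter on long-horizon goals.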

NickH3mo30

My immediate thought was that specifying the default action is almost certainly just as hard as the problem that you are trying to solve, whilst being harder to explain, so I don't believe that this gets us anywhere.

NickH4mo10

This is confused about who/what the agent is and about its assumed goals.
The final question suggests that the agent is gravity. Nobody thinks that the goal/value function of gravity is to make the pinball fall in the hole. To a first approximation, its goal is to have ALL objects fall to earth, and we observe it thwarted in that goal almost all the time; the pinball happens to be a rare success.
If we were to suggest that the pinball machine were the agent, that might make more sense, but then we would say that the pinball machine does not make any decisions and so cannot be an agent.
The first level at which agency makes any sense is the agency of the pinball designer. The designer's goal is to produce a game that attracts players and has a playtime within a preferred range, even for skilled players. The designer is intelligent.

NickH6mo30

This is a great article that I would like to see go further with respect to both people and AGI.
With respect to people, it seems to me that, once we assume intent, we build on that error by then assuming the stability of that intent (because people's intents tend to be fairly stable), which then causes us to feel shock when that intent suddenly changes. We might then see this as intentional deceit and wander ever further from the truth: that it was only an unconscious whim in the first place.
Regarding AGI, this is linked to unwarranted anthropomorphism, again leading to unwarranted assumptions of stability. In this case the problem appears to be that we really cannot think like a machine. For an AGI, at least on current understandings, there are, objectively, more or less stable goals, but our judgement of that stability is not well founded. For current AI, it does not even make sense to talk about the strength of a "preference" or an "intent" except as an observed statistical phenomenon. From a software point of view, the future values of two possible actions are calculated and one number is bigger than the other. There is no difference, in the decision-making process, between a difference of 1,000,000 and 0.000001; in either case the action with the larger value will be pursued. Unlike a human, an AI will never perform an action halfheartedly.
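The decision rule described above can be sketched in a few lines (the action names and values are made up for illustration):

```python
# Pick the action with the larger estimated value. The size of the margin
# between values has no effect whatsoever on which action is chosen.
def choose(action_values: dict[str, float]) -> str:
    return max(action_values, key=action_values.get)

huge_margin = choose({"comply": 1_000_000.0, "defect": 0.0})
tiny_margin = choose({"comply": 1.000001, "defect": 1.0})
# Both calls return "comply"; nothing in the selected behaviour reveals
# whether the preference was overwhelming or vanishingly slight.
```

This is the sense in which "strength of preference" is only visible statistically, across many noisy decisions, rather than in any single choice.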

NickH6mo10

I don't think this is relevant. It only seems odd if you believe that the job of developers is to please everyone rather than to make money. User Stories are reasonable for the goal of creating software that a large proportion of the target market will want to buy. Numerous studies and real-world evidence show that the top few percent of products capture the vast majority of the market, and therefore software companies would be unhappy if their developers did not show a clear bias. There would only be a downside if the market showed the U-shaped distribution and the developers were also split along that distribution, potentially leading to an incoherent product, but this is normally prevented by having a design authority.
