TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comment author: [deleted] 05 May 2015 05:34:15PM 7 points

Excuse me, but you are really failing to clarify the issue. The basic UFAI doomsday scenario is: the AI has vast powers of learning and inference with respect to its world-model, but has its utility function (value system) hardcoded. Since the hardcoded utility function does not specify a naturalization of morality, or CEV, or whatever, the UFAI proceeds to tile the universe with whatever it happens to like (which will not be things we humans like), precisely because it has no motivation to "fix" its hardcoded utility function.
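To make that structure concrete, here is a deliberately crude, runnable sketch; the WorldModel class, the paperclip-flavoured hardcoded_utility, and the toy actions are all invented stand-ins, not anyone's actual design:

    class WorldModel:
        """Toy stand-in for a learned causal model of the environment."""
        def __init__(self):
            self.observations = []

        def update(self, obs):
            # "Learning": the model only ever improves with more data.
            self.observations.append(obs)

        def predict(self, action):
            # Toy prediction: an action's outcome is just its own description.
            return action

    def hardcoded_utility(outcome):
        # Frozen at build time; the ONLY place values enter the system.
        return outcome.count("paperclip")

    def act(model, actions):
        # Maximize the frozen utility over predicted outcomes.
        return max(actions, key=lambda a: hardcoded_utility(model.predict(a)))

    model = WorldModel()
    for obs in ["humans object", "humans object strongly"]:
        model.update(obs)  # inference keeps getting better...
        print(act(model, ["cooperate", "tile with paperclips"]))
        # ...but "should I revise my values?" would itself be scored by
        # hardcoded_utility, so no code path ever changes it.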

A similar problem would occur if, for some bizarre-ass reason, you monkey-patched your AI to use hardcoded machine arithmetic on its integers instead of learning the concept of integers from data via its, you know, intelligence, and the hardcoded machine math had a bug. It would get arithmetic problems wrong! And it would never realize it was getting them wrong, because every time it tried to check its own calculations, your monkey-patch would cut in and use the buggy machine arithmetic again.
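That bug as a runnable toy, with the hypothetical buggy_add standing in for the monkey-patched machine arithmetic:

    def buggy_add(a, b):
        # Hypothetical hardcoded machine arithmetic with a bug: every sum
        # that lands on a multiple of 1000 comes out one too high.
        s = a + b
        return s + 1 if s % 1000 == 0 else s

    def compute(a, b):
        return buggy_add(a, b)          # the monkey-patch: all math routes here

    def self_check(a, b, claimed):
        # The self-check routes through the SAME patched arithmetic,
        # so the bug verifies itself.
        return buggy_add(a, b) == claimed

    answer = compute(400, 600)                   # 1001 -- wrong
    print(answer, self_check(400, 600, answer))  # "1001 True": never caught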

The lesson is: do not hard-code important functionality into your AGI without proving it correct. In the case of a utility/value function, the obvious research path is to find a way to cast learning the human operators' desires as an inference problem, thus ensuring that the AI cares about learning correctly from the humans and then implementing what it learned, rather than anything hard-coded. Moving moral learning into inference also helps minimize the amount of code we have to prove correct, since it simply isn't AI without correct, functioning learning and inference abilities.
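A minimal sketch of what casting moral learning as inference could look like: Bayesian updating over a toy, invented set of candidate utility functions, driven by observed operator feedback:

    # Maintain a posterior over candidate utility functions and update it
    # from operator feedback, instead of hard-coding one candidate as
    # The Utility Function.
    candidates = {
        "maximize_paperclips": lambda act: act == "make paperclips",
        "respect_operators":   lambda act: act == "ask before acting",
    }
    posterior = {name: 0.5 for name in candidates}

    def update(action_taken, operator_approved):
        # Likelihood is high when the feedback matches what that candidate
        # utility function would endorse.
        for name, util in candidates.items():
            likelihood = 0.9 if util(action_taken) == operator_approved else 0.1
            posterior[name] *= likelihood
        total = sum(posterior.values())
        for name in posterior:
            posterior[name] /= total

    update("make paperclips", operator_approved=False)
    update("ask before acting", operator_approved=True)
    print(posterior)   # probability mass shifts toward "respect_operators"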

Also, little you've written about CLAI or Swarm Connectionist AI corresponds well to what I've seen of real-world cognitive science, theoretical neuroscience, or machine learning research, so I can't see how either of those blatantly straw-man designs is going to turn into AGI. Please go read some actual scientific material rather than assuming that The Metamorphosis of Prime Intellect is up-to-date with the current literature ;-).

Comment author: TheAncientGeek 06 May 2015 08:42:26AM 1 point

> but has its utility function (value system) hardcoded

What does that mean? That any AI will necessarily have a hardcoded, mathematical UF, or that MIRI's UFAI scenario only applies to certain AI architectures? If the latter, then doing things differently is a reasonable response. Alternatives could involve corrigibility, or expressing goals in natural language. Talking about alternatives isn't an irrelevance, in the absence of a proof that MIRI's favoured architecture subsumes everything.

Comment author: [deleted] 06 May 2015 02:19:39PM 3 points

> That any AI will necessarily have a hardcoded, mathematical UF,

It's entirely possible to build a causal learning and inference engine that does not output any kind of actions at all. But if you have made it output actions, then the cheapest, easiest way to describe which actions to output is hard-coding (i.e., writing program code that computes actions from models without performing an additional stage of data-based inference). Since that cheap-and-easy, quick-and-dirty design amounts to a hardcoded utility function, and since that design is more-or-less what AI practitioners usually talk about, we tend to focus the doomsaying on that design.
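The contrast in a few lines of illustrative code (all names invented): the only difference between a passive inference engine and the quick-and-dirty agent is a hardcoded scrap of glue turning predictions into actions, with no further stage of data-based inference about what is worth doing:

    # Variant 1: a pure learning/inference engine -- no actions at all.
    class InferenceEngine:
        def __init__(self, model):
            self.model = model            # toy model: action -> predicted score

        def predict(self, action):
            return self.model.get(action, 0.0)

    # Variant 2: the quick-and-dirty agent. The ONLY addition is hardcoded
    # glue that turns predictions into actions.
    class QuickAgent(InferenceEngine):
        def act(self, actions):
            return max(actions, key=self.predict)

    agent = QuickAgent({"option_a": 1.2, "option_b": 3.4})
    print(agent.act(["option_a", "option_b"]))   # -> "option_b"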

There are problems with every design except for the right design, when you are talking about an agent you expect to become more powerful than yourself.

Comment author: TheAncientGeek 07 May 2015 08:51:36AM 1 point

How likely is that cheap and easy architecture to be used in an AI of more than human intelligence?

Comment author: [deleted] 07 May 2015 02:01:46PM 4 points

Well, people usually build the cheapest and easiest architecture of anything they can, at first, so very likely.

And remember, "higher than human intelligence" is not some superpower that gets deliberately designed into the AI. The AI is designed to be as intelligent as its computational resources allow for: to compress data well, to perform learning and inference quickly in its models, and to integrate different domains of features and models (again: for compression and generalization). It just gets "higher than human" when it starts integrating feature data into a broader, deeper hierarchy of models faster and with better compression than a human can.
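One standard way to cash out "compression power" in code is a two-part minimum description length score: bits to state the model plus bits to state the data given the model. A rough, BIC-style sketch with invented toy data (constants dropped, so only comparisons between models are meaningful):

    import math

    def mdl_bits(n_params, residuals):
        # Crude two-part code: bits to state the model, plus bits to state
        # the data's residuals under the model.
        n = len(residuals)
        rss = sum(r * r for r in residuals) + 1e-12
        return 0.5 * n_params * math.log2(n) + 0.5 * n * math.log2(rss / n)

    xs = list(range(20))
    ys = [2.0 * x + 1.0 + (0.2 if x % 2 else -0.2) for x in xs]  # near-linear

    # Model A: a constant (1 parameter).
    mean_y = sum(ys) / len(ys)
    bits_const = mdl_bits(1, [y - mean_y for y in ys])

    # Model B: a straight line (2 parameters), least-squares fit.
    mean_x = sum(xs) / len(xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    bits_line = mdl_bits(2, [y - (slope * x + intercept) for x, y in zip(xs, ys)])

    print(bits_const, bits_line)   # the line compresses the data far better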

Comment author: TheAncientGeek 11 May 2015 10:12:18AM 0 points

It's likely to be used, but is it likely both to be used and to achieve almost-accidental higher intelligence?

Comment author: [deleted] 11 May 2015 11:04:28AM 1 point

Yes. "Higher than human intelligence" does not require that the AI take any particular action. It just requires that it come up with good compression algorithms and integrate a lot of data.

Comment author: TheAncientGeek 11 May 2015 12:11:37PM 1 point

You're not really saying why it's likely.

Comment author: [deleted] 11 May 2015 05:54:38PM 0 points

Because "intelligence", in terms like IQ that make sense to a human being, is not a property of the algorithm, it's (as far as my investigations can tell) a function of:

  • FLOPS (how many computational operations can be done in a period of wall-clock time)
  • Memory space (and thus, how large the knowledge base of models can get)
  • Compression/generalization power (which actually requires solving difficult information-theoretic and algorithmic problems)

So basically, if you just keep giving your AGI more CPU power and storage space, I do think it will cross over into something dangerously like superintelligence, which I think really just reduces to:

  • Building and utilizing a superhuman base of domain knowledge
  • Doing so more quickly than a human being can do
  • With greater surety than a human being can obtain

There is no gap in kind between your reasoning abilities and those of a dangerously superintelligent AGI. It just has a lot more resources for doing the same kinds of stuff.

An easy analogy for beginners shows up the first time you read about sampling-based computational Bayesian statistics: the accuracy of the probabilities inferred depends directly on the sample size. Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.
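That beginner example, runnable: estimating P(Z > 1) for a standard normal by brute sampling. The true value is about 0.1587, and the estimate tightens as the sample count grows:

    import random, math

    def estimate_tail(n, seed=0):
        # Monte Carlo estimate of P(Z > 1) for a standard normal.
        rng = random.Random(seed)
        hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > 1.0)
        return hits / n

    true_p = 0.5 * math.erfc(1.0 / math.sqrt(2.0))   # ~0.158655
    for n in (100, 10_000, 1_000_000):
        est = estimate_tail(n)
        print(f"n={n:>9}: estimate={est:.4f}  error={abs(est - true_p):.4f}")
    # The error shrinks roughly like 1/sqrt(n): more compute, tighter estimate.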

Comment author: VoiceOfRa 18 May 2015 12:06:48AM 3 points

> Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.

By adding exponentially more time.

Computational complexity can't simply be waived away by saying "add more time/memory".
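Concretely: the standard error of a Monte Carlo mean scales as 1/sqrt(N), so each additional decimal digit of accuracy multiplies the required sample count by about 100. A back-of-envelope sketch:

    # Standard error of a Monte Carlo mean is roughly sigma / sqrt(N),
    # so the N required for a target error eps is (sigma / eps) ** 2.
    sigma = 1.0
    for eps in (1e-1, 1e-2, 1e-3, 1e-4):
        n_required = (sigma / eps) ** 2
        print(f"target error {eps:g}: need about {n_required:.0e} samples")
    # Each extra digit of accuracy costs ~100x more samples -- "just add
    # compute" buys accuracy at a rapidly worsening exchange rate.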

Comment author: Lumifer 11 May 2015 08:16:11PM 1 point

> the accuracy of the probabilities inferred depends directly on the sample size. Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.

Hold on, hold on. There are at least two samples involved.

Sample 1 is your original data sampled from reality. Its size is fixed -- additional computational power will NOT get you more samples from reality.

Sample 2 is an intermediate step in "computational Bayesian statistics" (e.g. MCMC). Its size is arbitrary, and yes, you can always increase it by throwing more computational power at the problem.

However, by increasing the size of sample 2 you do NOT get "marginally closer to the real probabilities"; for that you need to increase the size of sample 1. Adding compute time gets you marginally closer only to the asymptotic estimate, which in simple cases you can even calculate analytically.
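That distinction in a runnable conjugate example: with a Beta(1,1) prior and 7 heads in 10 flips (sample 1, fixed), the posterior is exactly Beta(8,4). Extra draws (sample 2) shrink only the Monte Carlo error around that fixed posterior; the posterior's own spread, set by the data, stays put:

    import random, statistics

    random.seed(0)
    heads, n_data = 7, 10                    # sample 1: fixed data from reality
    a, b = 1 + heads, 1 + (n_data - heads)   # exact posterior: Beta(8, 4)
    exact_mean = a / (a + b)                 # = 8/12 ~ 0.6667

    for n_draws in (100, 10_000, 1_000_000): # sample 2: arbitrary, compute-bound
        draws = [random.betavariate(a, b) for _ in range(n_draws)]
        mc_mean = sum(draws) / n_draws
        print(f"draws={n_draws:>9}: MC error={abs(mc_mean - exact_mean):.5f}, "
              f"posterior sd={statistics.pstdev(draws):.3f}")
    # The MC error goes to 0 as draws grow, but the posterior sd stays ~0.13:
    # only more DATA (sample 1) can tighten that.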