eli_sennesh comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: TheAncientGeek 06 May 2015 08:42:26AM *  1 point [-]

but has its utility function (value system) hardcoded

What does that mean? That any AI will necessarily have a hardcoded, mathematical UF, or that MIRI's UFAI scenario only applies to certain AI architectures? If the latter, then doing things differently is a reasonable response. Alternatives could involve corrigibility, or expressing goals in natural language. Talking about alternatives isn't irrelevant, in the absence of a proof that MIRI's favoured architecture subsumes everything.

Comment author: [deleted] 06 May 2015 02:19:39PM 3 points [-]

That any AI will necessarily have a hardcoded, mathematical UF,

It's entirely possible to build a causal learning and inference engine that does not output any kind of actions at all. But if you have made it output actions, then the cheapest, easiest way to describe which actions to output is hard-coding (i.e., writing program code that computes actions from models without performing an additional stage of data-based inference). Since that cheap-and-easy, quick-and-dirty design falls within the behavior of a hardcoded utility function, and since that design is more or less what AI practitioners usually talk about, we tend to focus the doomsaying on that design.
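A minimal sketch of that quick-and-dirty pattern in Python, with all names hypothetical: the model is the learned component, while the utility function and the action-selection rule are fixed program code.

    # Purely illustrative: a hard-coded action selector. `model` is the
    # product of learning/inference; `hardcoded_utility` and the argmax
    # rule below are fixed program code, written once by the programmer.
    def choose_action(model, candidate_actions, hardcoded_utility):
        # Score each candidate by the utility of its predicted outcome.
        return max(candidate_actions, key=lambda a: hardcoded_utility(model(a)))

    # Toy usage: the model maps actions to predicted outcomes; the utility
    # over outcomes is written directly into the program, not learned.
    predicted_outcome = {"wait": "nothing", "fetch": "coffee"}.get
    utility = {"nothing": 0.0, "coffee": 1.0}.get
    print(choose_action(predicted_outcome, ["wait", "fetch"], utility))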

There are problems with every design except for the right design, when you are talking about an agent you expect to become more powerful than yourself.

Comment author: TheAncientGeek 07 May 2015 08:51:36AM 1 point [-]

How likely is that cheap-and-easy architecture to be used in an AI of more than human intelligence?

Comment author: [deleted] 07 May 2015 02:01:46PM *  4 points [-]

Well, people usually build the cheapest and easiest architecture of anything they can, at first, so very likely.

And remember, "higher than human intelligence" is not some superpower that gets deliberately designed into the AI. The AI is designed to be as intelligent as its computational resources allow for: to compress data well, to perform learning and inference quickly in its models, and to integrate different domains of features and models (again: for compression and generalization). It just gets "higher than human" when it starts integrating feature data into a broader, deeper hierarchy of models faster and with better compression than a human can.

Comment author: TheAncientGeek 11 May 2015 10:12:18AM 0 points [-]

It's likely to be used, but is it likely both to be used and to achieve almost-accidental higher intelligence?

Comment author: [deleted] 11 May 2015 11:04:28AM 1 point [-]

Yes. "Higher than human intelligence" does not require that the AI take particular action. It just requires that it come up with good compression algorithms and integrate a lot of data.

Comment author: TheAncientGeek 11 May 2015 12:11:37PM 1 point [-]

You're not really saying why it's likely.

Comment author: [deleted] 11 May 2015 05:54:38PM *  0 points [-]

Because "intelligence", in terms like IQ that make sense to a human being, is not a property of the algorithm, it's (as far as my investigations can tell) a function of:

  • FLOPS (how many computational operations can be done in a period of wall-clock time)
  • Memory space (and thus, how large the knowledge base of models can get)
  • Compression/generalization power (which actually requires solving difficult information-theoretic and algorithmic problems)

So basically, if you just keep giving your AGI more CPU power and storage space, I do think it will cross over into something dangerously like superintelligence, which I think really just reduces to:

  • Building and utilizing a superhuman base of domain knowledge
  • Doing so more quickly than a human being can do
  • With greater surety than a human being can obtain

There is no gap in kind between your reasoning abilities and those of a dangerously superintelligent AGI. It just has a lot more resources for doing the same kinds of stuff.

An easy analogy for beginners shows up the first time you read about sampling-based computational Bayesian statistics: the accuracy of the probabilities inferred depends directly on the sample size. Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.
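A toy sketch of that analogy in Python (the target quantity and sample sizes are arbitrary illustrations): estimating a tail probability by plain Monte Carlo, where throwing more samples, i.e. more compute, at the problem pulls the estimate toward the true value.

    import random

    # Estimate P(X > 1) for a standard normal X by raw Monte Carlo.
    # The true value is about 0.1587; larger sample sizes (more compute)
    # yield estimates that are, on average, closer to it.
    def estimate_tail_probability(n_samples):
        hits = sum(1 for _ in range(n_samples) if random.gauss(0, 1) > 1)
        return hits / n_samples

    for n in (100, 10_000, 1_000_000):
        print(n, estimate_tail_probability(n))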

Comment author: VoiceOfRa 18 May 2015 12:06:48AM 3 points [-]

Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.

By adding exponentially more time.

Computational complexity can't simply be waved away by saying "add more time/memory".

Comment author: [deleted] 18 May 2015 07:38:15PM -1 points [-]

A) I did say marginally.

B) It's a metaphor intended to convey the concept to people without the technical education to know or care where the diminishing returns line is going to be.

C) As a matter of fact, in sampling-based inference, computation time scales linearly with sample size: you're just running the same code n times with n different random parameter values. There will be diminishing returns to sample size once you've got a large enough n for the relative frequencies in the sample to get within some percentage of the real probabilities, but actually adding more samples is a linearly-scaling cost.
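A sketch of that linear-cost, diminishing-returns point, using the same toy estimator as above (numbers are illustrative): wall-clock time grows linearly in n, while the standard error of the estimate shrinks only as 1/sqrt(n).

    import random
    import time

    # Cost is linear in n, but the standard error shrinks as 1/sqrt(n),
    # so each additional sample buys a smaller improvement in accuracy.
    def run(n_samples):
        start = time.perf_counter()
        hits = sum(1 for _ in range(n_samples) if random.gauss(0, 1) > 1)
        elapsed = time.perf_counter() - start
        p_hat = hits / n_samples
        std_err = (p_hat * (1 - p_hat) / n_samples) ** 0.5
        return elapsed, p_hat, std_err

    for n in (10_000, 100_000, 1_000_000):
        t, p, se = run(n)
        print(f"n={n:>9}  time={t:.3f}s  estimate={p:.4f}  std_err={se:.5f}")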

Comment author: Lumifer 11 May 2015 08:16:11PM 1 point [-]

the accuracy of the probabilities inferred depends directly on the sample size. Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.

Hold on, hold on. There are at least two samples involved.

Sample 1 is your original data sampled from reality. Its size is fixed -- additional computational power will NOT get you more samples from reality.

Sample 2 is an intermediate step in "computational Bayesian statistics" (e.g. MCMC). Its size is arbitrary and yes, you can always increase it by throwing more computational power at the problem.

However, by increasing the size of sample 2 you do NOT get "marginally closer to the real probabilities"; for that you need to increase the size of sample 1. Adding compute time gets you marginally closer only to the asymptotic estimate, which in simple cases you can even calculate analytically.
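A sketch of the two-samples distinction in Python, with made-up numbers: sample 1 (the data) is 7 heads in 20 flips, fixed; sample 2 (posterior draws) can grow without bound, but it only converges to the posterior that sample 1 determines.

    import random

    # Sample 1: fixed data. 7 heads in 20 flips; with a uniform prior the
    # posterior over the coin's bias is Beta(8, 14), known analytically.
    heads, flips = 7, 20
    a, b = 1 + heads, 1 + (flips - heads)

    # Sample 2: Monte Carlo draws from that posterior. More draws push the
    # estimated posterior mean toward the analytic value 8/22 ~ 0.364, but
    # the posterior's spread -- set by sample 1 -- does not shrink at all.
    for n_draws in (100, 10_000, 1_000_000):
        draws = [random.betavariate(a, b) for _ in range(n_draws)]
        print(n_draws, sum(draws) / n_draws)

    print("analytic posterior mean:", a / (a + b))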

Comment author: [deleted] 12 May 2015 12:26:23AM 0 points [-]

Yes, there is an asymptotic limit where eventually you just approach the analytic estimator and need more empirical/sensory data. There are almost always asymptotic limits, usually the "platonic" or "true" full-information probability.

But as I said, it was an analogy for beginners, not a complete description of how I expect a real AI system to work.

Comment author: Nornagest 11 May 2015 10:50:20PM *  0 points [-]

There are at least two samples involved. [Y]our original data sampled from reality [...] is fixed -- additional computational power will NOT get you more samples from reality.

That's true for something embodied as Human v1.0 or, e.g., in a robot chassis, though even then the I/O bound might end up being greatly superhuman -- certainly the most intelligent humans can glean much more information from sensory inputs of basically fixed length than the least intelligent can, which suggests to me that the size of our training set is not our limiting factor. But it's not necessarily true for something that can generate its own sensors and effectors, suitably generalized; depending on architecture, that could end up being CPU-bound or I/O-bound, and I don't think we understand the problem well enough to say which.

The first thing that comes to mind, scaled up to its initial limits, might look like a botnet running image interpretation over the output of every poorly secured security camera in the world (and there are a lot of them). That would almost certainly be CPU-bound. But there are probably better options out there.