Comment author: TheAncientGeek 01 February 2016 07:17:28PM 2 points [-]

virtually nothing

OK, but does anything survive? How about the idea that

  • Some systems will be opaque to human programmers

  • ...they will also be opaque to themselves

  • ...which will stymie recursive self-improvement.

Comment author: Richard_Loosemore 03 February 2016 05:40:32PM 1 point [-]

Well, here is my thinking.

Neural net systems have one major advantage: they use massive weak-constraint relaxation (aka the wisdom of crowds) to do the spectacular things they do.
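(For anyone unfamiliar with the phrase, here is a minimal sketch of the weak-constraint idea in the Hopfield style -- the network size, weights and update rule are purely illustrative, not taken from any particular system. Every weight is a soft constraint between two units, no single constraint is ever decisive, and relaxation settles the network into a state that satisfies as many constraints as possible:)

    import numpy as np

    rng = np.random.default_rng(0)

    def store_patterns(patterns):
        # Hebbian weights: each stored pattern adds a layer of weak pairwise
        # constraints; w[i, j] says "units i and j prefer to agree (or disagree)".
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)
        np.fill_diagonal(w, 0.0)
        return w / len(patterns)

    def relax(w, state, steps=300):
        # Asynchronous relaxation: each unit repeatedly yields to the weighted
        # "vote" of all its neighbours; no single constraint decides alone.
        state = state.copy()
        for _ in range(steps):
            i = rng.integers(len(state))
            state[i] = 1 if w[i] @ state >= 0 else -1
        return state

    # Store one pattern, corrupt a quarter of it, and let relaxation repair it.
    pattern = rng.choice([-1, 1], size=32)
    w = store_patterns(pattern[None, :])
    noisy = pattern.copy()
    noisy[:8] *= -1
    print(np.array_equal(relax(w, noisy), pattern))  # usually True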

But they have a cluster of disadvantages, all related to their inability to do symbolic, structured cognition. These have been known for a long time -- Donald Norman, for example, wrote down a list of issues in his chapter at the end of the two PDP volumes (McClelland and Rumelhart, 1986).

But here's the thing: most of the suggested ways to solve this problem (including the one I use) involve keeping the massive weak constraint relaxation, throwing away all irrelevant assumptions, and introducing new features to get the structured symbolic stuff. And that revision process generally leaves you with hybrid systems in which all the important stuff is NO LONGER particularly opaque. The weak constraint aspects can be done without forcing (too much) opaqueness into the system.

Are there ways to develop neural nets that cause them to stay totally opaque, while solving all the issues that stand between the current state of the art and AGI? Probably. Well, certainly there is one .... whole brain emulation gives you opaqueness by the bucketload. But I think those approaches are the exception rather than the rule.

So the short answer to your question is: the opaqueness, at least, will not survive.

Comment author: Richard_Loosemore 31 January 2016 07:32:47PM 1 point [-]

The biggest problem here is that you start from the assumption that current neural net systems will eventually be made into AI systems with all the failings and limitations they have now. You extrapolate massively from the assumption.

But there is absolutely no reason to believe that the evolutionary changes to NN that are required in order to make them fully intelligent (AGI) will leave them with all the same characteristics they have now. There will be SO MANY changes that virtually nothing about the current systems will be true of those future systems.

Which renders your entire extrapolation moot.

Comment author: Richard_Loosemore 31 January 2016 08:21:29PM 1 point [-]

Well, the critical point is whether NN are currently on a track to AGI. If they are not, then one cannot extrapolate anything. Compare: steam engine technology is also not going to eventually become AGI, so how would it look if someone wrote about the characteristics of steam engine technology and tried to predict the future of AGI based on those characteristics?

My own research (which started with NN, but tried to find ways to get it to be useful for AGI) is already well beyond the point where the statements you make about NN are of any relevance. Never mind what will be happening in 5, 10 or 20 years.

Comment author: [deleted] 30 June 2015 11:37:58PM *  2 points [-]

Excuse me, but as much as I think the SIAI bunch were being rude to you, if you had presented, at a serious conference on a serious topic, a paper that waves its hands, yells "Complexity! Irreducible! Parallel!" and expected a good reception, I would have been privately snarking if not publicly. That would be me acting like a straight-up asshole, but it would also be because you never try to understand a phenomenon by declaring it un-understandable. Which is not to say that symbolic, theorem-prover, "Pure Maths are Pure Reason which will create Pure Intelligence" approaches are very good either -- they totally failed to predict that the brain is a universal learning machine, for instance.

(And so far, the "HEY NEURAL NETS LEARN WELL" approach is failing to predict a few things I think its proponents really ought to be able to see, and endeavor to show.)

That anyone would ever try to claim a technological revolution is about to arise from either of those schools of work is what constantly discredits the field of artificial intelligence as a hype-driven fraud!

Comment author: Richard_Loosemore 09 July 2015 03:31:45PM 0 points [-]

Okay, so I am trying to understand what you are attacking here, and I assume you mean my presentation of that paper at the 2007 AGIRI workshop.

Let me see: you reduced the entire paper to the statement that I yelled "Complexity! Irreducible! Parallel!".

Hmmmm...... that sounds like you thoroughly understood the paper and read it in great detail, because you reflected back all the arguments in the paper, showed good understanding of the cognitive science, AI and complex-systems context, and gave me a thoughtful, insightful list of comments on some of the errors of reasoning that I made in the paper.

So I guess you are right. I am ignorant. I have not been doing research in cognitive psychology, AI and complex systems for 20 years (as of the date of that workshop). I have nothing to say to defend any of my ideas at all, when people make points about what is wrong in those ideas. And, worse still, I did not make any suggestions in that paper about how to solve the problem I described, except to say "HEY NEURAL NETS LEARN WELL".

I wish you had been around when I wrote the paper, because I could have reduced the whole thing to one 3-word and one 5-word sentence, and saved a heck of a lot of time.

P.S. I will forward your note to the Santa Fe Institute and the New England Complex Systems Institute, so they can also understand that they are ignorant. I guess we can expect an unemployment spike in Santa Fe and Boston, next month, when they all resign en masse.

Comment author: Richard_Loosemore 09 July 2015 03:18:35PM 0 points [-]

Following some somewhat misleading articles quoting me, I thought I’d present the top 10 myths about the AI risk thesis:

1) That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.

MISLEADING.

If by "we" you mean the people who have published their thoughts on this matter. I believe I am right in saying that you have in the past referenced Steve Omohundro's paper, in which he says:

Without special precautions, [the AGI] will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems (Omohundro, 2008).

Although this begins with "without special precautions" it then goes on to ONLY list all the ways in which this could happen, with no suggestions about how "special precautions" are even possible.

This cannot be construed as "we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it." The quote is also inconsistent with your statement "It’s very hard to be certain of anything involving a technology that doesn’t exist", because Omohundro says categorically that this "will occur ... because of the intrinsic nature of goal driven systems".

I picked Omohundro's paper as an example, but there are numerous similar writings from MIRI and FHI. I listed several examples in my AAAI paper (Loosemore, 2014).

2) That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.

STRAWMAN.

I haven't seen anyone of significance make that claim, so how can it be a "myth about the AI risk thesis"?

3) That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe... you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is certain; “Possible AI risk” is where we stand.

STRAWMAN.

Again, only a tiny minority have said anything resembling those claims, so how can they be a "myth about the AI risk thesis"?

4) That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.

STRAWMAN.

Journalists and bloggers love to put a Terminator picture on their post as an Eyeball Magnet. Why elevate an Eyeball Magnet to the level of a "myth about the AI risk thesis"?

5) That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.

WRONG.

I published a paper giving a thorough analysis and debunking of your (MIRI and FHI) claims in that regard -- and yet neither you nor anyone else at MIRI or FHI has ever addressed that analysis. Instead, you simply repeat the nonsense as if no one has ever refuted it. MIRI was also invited to respond when the paper was presented. They refused the invitation.

6) That there’s one simple trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a tool and socialising with humans!).

INCOHERENT.

I have heard almost no one propose that there is "one simple trick", so how can it be a "myth"? (You need more than a couple of suggestions for something to count as a myth.)

More importantly, what happened to your own Point 1, above? You said "It’s very hard to be certain of anything involving a technology that doesn’t exist" -- but now you are making categorical statements (e.g. "None of them reduce the risk enough to relax" and "you can’t design an AI that’s both a tool and socialising with humans!") about the effectiveness of various ideas about that technology that doesn’t exist.

7) That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk aware AI researchers that are most likely to figure out how to make safe AI.

PARTLY TRUE

......... except for all the discussion at MIRI and FHI regarding Hard Takeoff scenarios. And, again, whence cometh the certainty in the statement "Current AI research is very far from the risky areas and abilities"?

8) That AIs will be more intelligent than us, hence more moral. It’s pretty clear than in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That in the billions of possible minds out there, there is none that is both dangerous and very intelligent?

MISLEADING, and a STRAWMAN.

Few if any people have made the claim that increased intelligence BY ITSELF guarantees greater morality.

This is misleading because some people have discussed a tendency (not a guarantee) for higher intelligence to lead to greater morality (Mark Waser's papers go into this in some detail). Combine that with the probability of AI going through a singleton bottleneck, and there is a plausible scenario in which AIs themselves enforce a post-singleton constraint on the morality of future systems.

You are also profoundly confused (or naive) about how AI works, when you ask the question "Are you really willing to bet the whole future of humanity on the idea that AIs might be different?" One does not WAIT to find out if the motivation system of a future AI "is different", one DESIGNS the motivation system of a future AI to be either this way or that way.

It could be that, in humans, an otherwise strong correlation between increased intelligence and increased morality is undermined by the existence of a psychopathic-selfish module in the human motivation system. Solution? Remove the module. Not possible to do in humans because of the biology, but trivially easy to do if you are designing an AI along the same lines. And if this IS what is happening in humans, then you can deduce nothing about future AI systems from the observation that "in humans, high intelligence is no guarantee of morality".

9) That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all, as these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, etc... Anthropomorphising the AIs could lead us completely astray.

MISLEADING and CONFUSED.

This is a confusing mishmash of speculation and assumption.

A sentence like "minds far removed from human concepts" is not grounded in any coherent theory of what a 'mind' is or what a 'concept' is, or how to do comparative measures across minds and concepts. The sentence is vague, science-fictional handwaving.

The same goes for other statements like that the AI might have "no emotions or consciousness". Until you define what you mean by those terms, and give some kind of argument about why the AI would or would not be expected to have them, and what difference it would make, in either case, the statement is just folk psychology dressed up as science.

Lists cannot be comprehensive, but they can adapt and grow, adding more important points: 1) That AIs have to be evil to be dangerous. The majority of the risk comes from indifferent or partially nice AIs – those that have some goal to follow, with humanity and its desires just getting in the way: using resources, trying to oppose it, or just not being perfectly efficient for its goal.

MISLEADING and a STRAWMAN.

Yet again, I demonstrated in my 2014 paper that that claim is incoherent. It is predicated on a trivially stupid AI design, and there is no evidence that such a design will ever work in the real world.

If you, or anyone else at MIRI or FHI think that you can answer the demolition of this idea that I presented in the AAAI paper, it is about time you published it.

2) That we believe AI is coming soon. It might; it might not. Even if AI is known to be in the distant future (which isn't known, currently), some of the groundwork is worth laying now.

ACCURATE.

References

Loosemore, R.P.W. (2014). The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation. Association for the Advancement of Artificial Intelligence 2014 Spring Symposium, Stanford, CA.

Omohundro, S.M. (2008). The Basic AI Drives. In Wang, P., Goertzel, B. and Franklin, S. (Eds.), Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Amsterdam: IOS Press.

Comment author: [deleted] 27 June 2015 12:06:21AM 1 point [-]

Wait wait wait. You didn't head to the dinner, drink some fine wine, and start raucously debating the same issue over again?

Bah, humbug!

Also, how do I get invited to these conferences again ;-)?

It is a scandalously unjustified assumption, made very hard to attack by the fact that it is repeated so frequently that everyone believes it to be true just because everyone else believes it.

Very true, at least regarding AI. Personally, my theory is that the brain does do reinforcement learning, but the "reward function" isn't a VNM-rational utility function, it's just something the body signals to the brain to say, "Hey, that world-state was great!" I can't imagine that Nature used something "mathematically coherent", but I can imagine it used something flagrantly incoherent but really dead simple to implement. Like, for instance, the amount of some chemical or another coming in from the body, to indicate satiety, or to relax after physical exertion, or to indicate orgasm, or something like that.
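(A toy sketch of the contrast I mean, with every signal name entirely made up: a VNM-rational utility function would have to coherently rank all possible world-states, while the "reward" below just sums a few crude physiological channels, with no pretence of global coherence:)

    # All names invented for illustration; flagrantly incoherent but dead
    # simple: just add up whatever chemical signals the body happens to send.
    def body_reward(satiety, exertion_relief, orgasm):
        # No transitivity, no consistency across contexts -- just a scalar
        # saying "hey, that world-state was great!"
        return satiety + exertion_relief + 10.0 * orgasm

    print(body_reward(satiety=0.7, exertion_relief=0.2, orgasm=0.0))  # 0.9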

Comment author: Richard_Loosemore 30 June 2015 05:53:21PM 1 point [-]

Hey, ya pays yer money and walk in the front door :-) AGI conferences run about $400 a ticket I think. Plus the airfare to Berlin (there's one happening in a couple of weeks, so get your skates on).

Re the possibility that the human system does do reinforcement learning .... fact is, if one frames the meaning of RL in a sufficiently loose way, the human cogsys absolutely DOES do RL, no doubt about it. Just as you described above.

But if you sit down and analyze what it means to make the claim that a system uses RL, it turns out that there is a world of difference between the two positions:

  • The system CAN BE DESCRIBED in such a way that there is reinforcement of actions/internal constructs that lead to positive outcomes in some way;

versus

  • The system is controlled by a mechanism that explicitly represents (A) actions/internal constructs, (B) outcomes or expected outcomes, and (C) scalar linkages between the A and B entities .... and behavior is completely dominated by a mechanism that browses the A, B and C in such a way as to modify one of the C linkages according to the co-occurrence of a B with an A.

The difference is that the second case turns the descriptive mechanism into an explicit mechanism.
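To make the second position concrete, here is a bare tabular sketch (the class and all names are mine, invented purely for illustration): the A, B and C entities are explicit data structures, and behavior is driven entirely by browsing and updating the C table.

    from collections import defaultdict
    import random

    class ExplicitRL:
        # Position 2 made concrete: explicit A entities (actions), explicit
        # B entities (outcomes, each with a value), and explicit scalar C
        # linkages between them.
        def __init__(self, actions, outcome_values, alpha=0.1):
            self.actions = list(actions)          # the A entities
            self.outcome_values = outcome_values  # the B entities and values
            self.alpha = alpha
            self.link = defaultdict(float)        # the C linkages: (A, B) -> scalar

        def score(self, action):
            # Browse the A-B-C structure to rate an action.
            return sum(self.link[(action, b)] * v
                       for b, v in self.outcome_values.items())

        def choose(self, epsilon=0.1):
            # Behavior is completely dominated by the C table.
            if random.random() < epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=self.score)

        def observe(self, action, outcome):
            # Co-occurrence of a B with an A modifies the corresponding C link.
            key = (action, outcome)
            self.link[key] += self.alpha * (1.0 - self.link[key])

    agent = ExplicitRL(['press', 'wait'], {'food': 1.0, 'shock': -1.0})
    agent.observe('press', 'food')
    print(agent.choose(epsilon=0.0))  # 'press'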

It's like Ptolemy's Epicycle model of the solar system. Was Ptolemy's fancy little wheels-within-wheels model a good descriptive model of planetary motion? You bet ya! Would it have been appropriate to elevate that model and say that the planets actually DID move on top of some epicycle-like mechanism? Heck no! As a functional model it was garbage, and it held back a scientific understanding of what was really going on for over a thousand years.

Same deal with RL. Our difficulty right now is that so many people slip back and forth between arguing for RL as a descriptive model (which is fine) and arguing for it as a functional model (which is disastrous, because that was tried in psychology for 30 years, and it never worked).

Comment author: [deleted] 27 June 2015 12:06:39AM 0 points [-]

Meanwhile, I went beyond that problem and outlined a solution, soon after I started working in this field in the mid-80s. And by 2006 I had clarified my ideas enough to present them at the AGIRI workshop held in Bethesda that year.

Link?

Comment author: Richard_Loosemore 30 June 2015 05:35:09PM 1 point [-]

Sorry, was in too much of a rush to give link.....

Loosemore, R.P.W. (2007). Complex Systems, Artificial Intelligence and Theoretical Psychology. In B. Goertzel & P. Wang (Eds.), Proceedings of the 2006 AGI Workshop. IOS Press, Amsterdam.

http://richardloosemore.com/docs/2007_ComplexSystems_rpwl.pdf

Comment author: TheAncientGeek 26 June 2015 10:27:55AM 3 points [-]

It is a scandalously unjustified assumption, made very hard to attack by the fact that it is repeated so frequently that everyone believes it to be true just because everyone else believes it.

I don't think that is an overstatement. If MIRI is basically wrong about UFs, then most of its case unravels. Why isn't the issue being treated as a matter of urgency?

Comment author: Richard_Loosemore 26 June 2015 04:31:16PM *  1 point [-]

A very good question indeed. Although ... there is a depressing answer.

This is a core-belief issue. For some people (like Yudkowsky and almost everyone in MIRI) artificial intelligence must be about the mathematics of artificial intelligence, but without the utility-function approach, that entire paradigm collapses. Seriously: it all comes down like a house of cards.

So, this is a textbook case of a Kuhn / Feyerabend - style clash of paradigms. It isn't a matter of "Okay, so utility functions might not be the best approach: so let's search for a better way to do it" .... it is more a matter of "Anyone who thinks that an AI cannot be built using utility functions is a crackpot." It is a core belief in the sense that it is not allowed to be false. It is unthinkable, so rather than try to defend it, those who deny it have to be personally attacked. (I don't say this because of personal experience, I say it because that kind of thing has been observed over and over when paradigms come into conflict).

Here, for example, is a message sent to the SL4 mailing list by Yudkowsky in August 2006:

Dear Richard Loosemore:

When someone doesn't have anything concrete to say, of course they always trot out the "paradigm" excuse.

Sincerely, Eliezer Yudkowsky.

So the immediate answer to your question is that it will never be treated as a matter of urgency, it will be denied until all the deniers drop dead.

Meanwhile, I went beyond that problem and outlined a solution, soon after I started working in this field in the mid-80s. And by 2006 I had clarified my ideas enough to present them at the AGIRI workshop held in Bethesda that year. The MIRI (then called SIAI) crowd were there, along with a good number of other professional AI people.

The response was interesting. During my presentation the SIAI/MIRI bunch repeatedly interrupted with rude questions or pointed, very loud, laughter. Insulting laughter. Loud enough to make the other participants look over and wonder what the heck was going on.

That's your answer, again, right there.

But if you want to know what to do about it, the paper I published after the workshop is a good place to start.

Comment author: jacob_cannell 23 June 2015 04:16:43AM *  3 points [-]

Since the utility function is approximated anyway, it becomes an abstract concept - especially in the case of evolved brains. For an evolved creature, the evolutionary utility function can be linked to long term reproductive fitness, and the value function can then be defined appropriately.

For a designed agent, it's a useful abstraction. We can conceptually rate all possible futures, and then roughly use that to define a value function that optimizes towards that goal.

It's really just a mathematical abstraction of the notion of X is better than Y. It's not worth arguing about. It's also proven in the real world - agents based on utility formalizations work. Well.

Comment author: Richard_Loosemore 24 June 2015 02:28:43PM 0 points [-]

It certainly is worth discussing, and I'm sorry but you are not correct that "agents based on utility formalizations work. Well."

That topic came up at the AAAI symposium I attended last year. Specifically, we had several people there who built real-world (as opposed to academic, toy) AI systems. Utility based systems are generally not used, except as a small component of a larger mechanism.

Comment author: jacob_cannell 22 June 2015 08:34:19PM *  2 points [-]

I am not sure why you say I am hung up on RL: you quoted that as the only mechanism to be discussed in the context, so I went with that.

Upon consideration, I changed my own usage of "Universal Reinforcement Learning Machine" to "Universal Learning Machine".

The several remaining uses of "reinforcement learning" are contained now to the context of the BG and the reward circuitry.

And you are (like many people) not correct to say that RL is the most general framework,

Again we are probably talking about very different RL conceptions. So to be clear, I summarized my general viewpoint of a ULM. I believe it is an extremely general model that basically covers any kind of universal learning agent. The agent optimizes/steers the future according to some sort of utility function (which is extremely general), and self-optimization emerges naturally just by including the agent itself as part of the system to optimize.

Do you have a conception of a learning agent which does not fit into that framework?
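(For concreteness, a bare-bones sketch of the loop I have in mind -- every name invented for illustration: a world model, a utility function over predicted futures, and an action choice that steers toward high-utility futures. The question is whether any learning agent fails to fit this template, not whether the template can be coded:)

    def steer(world_model, utility, actions, state, horizon=3):
        # Pick the action whose best reachable predicted future scores highest.
        def rollout(s, a, depth):
            s = world_model(s, a)            # predict the next state
            if depth == 0:
                return utility(s)            # rate the resulting future
            return max(rollout(s, a2, depth - 1) for a2 in actions)
        return max(actions, key=lambda a: rollout(state, a, horizon))

    # A trivial 1-D world where utility is just position:
    print(steer(lambda s, a: s + a, lambda s: s, actions=[-1, 0, 1], state=0))  # 1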

or that there is good evidence for RL in the brain. That is a myth: the evidence is very poor indeed.

The evidence for RL in the brain - of the extremely general form I described - is indeed very strong, simply because any type of learning is just a special case of universal learning. Taboo 'reinforcement' if you want, and just replace with "utility driven learning".

AIXI specifically has a special reward channel, and perhaps you are thinking of that specific type of RL which is much more specific than universal learning. I should perhaps clarify and or remove the mention of AIXI.

A ULM - as I described - does not have a reward channel like AIXI. It just conceptually has a value and/or utility function, initially defined by some arbitrary function that takes the whole brain/model as input. In the case of the brain, the utility function is conceptual; in practice it is more directly encoded as a value function.

Comment author: Richard_Loosemore 23 June 2015 02:41:54AM 5 points [-]

About the universality or otherwise of RL. Big topic.

There's no need to taboo "RL" because switching to utility-based learning does not solve the issue (and the issue I have in mind covers both).

See, this is the problem. It is hard for me to fight the idea that RL (or utility-driven learning) works, because I am forced to fight a negative; a space where something should be, but which is empty ....... namely, the empirical fact that Reinforcement Learning has never been made to work in the absence of some surrounding machinery that prepares or simplifies the ground for the RL mechanism.

It is a naked fact about traditional AI that it puts such an emphasis on the concept of expected utility calculations without any guarantees that a utility function can be laid on the world in such a way that all and only the intelligent actions in that world are captured by a maximization of that quantity. It is a scandalously unjustified assumption, made very hard to attack by the fact that it is repeated so frequently that everyone believes it to be true just because everyone else believes it.
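To pin down exactly what I am refusing to buy, the assumption can be written (in my own notation, nothing canonical) as: there exists a utility function U over world states such that, in every state, the intelligent actions are exactly the ones that maximize expected U:

    \exists\, U : S \to \mathbb{R} \;\;\text{such that}\;\; \forall s \in S:\;\;
    A_{\mathrm{intelligent}}(s) \;=\; \operatorname*{arg\,max}_{a \in A}\;
    \mathbb{E}\big[\, U(s') \mid s,\, a \,\big]

Note the "all and only": the maximization must produce every intelligent action and no unintelligent ones.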

If anyone ever produced a proof why it should work, there would be a there there, and I could undermine it. But .... not so much!

About AIXI and my conversation with Marcus: that was actually about the general concept of RL and utility-driven systems, not anything specific to AIXI. We circled around until we reached the final crux of the matter, and his last stand (before we went to the conference banquet) was "Yes, it all comes down to whether you believe in the intrinsic reasonableness of the idea that there exists a utility function which, when maximized, yields intelligent behavior .......... but that IS reasonable, .... isn't it?"

My response was "So you do agree that that is where the buck stops: I have to buy the reasonableness of that idea, and there is no proof on the table for why I SHOULD buy it, no?"

Hutter: "Yes."

Me: "No matter how reasonable it seems, I don't buy it."

His answer was to laugh and spread his arms wide. And at that point we went to the dinner and changed to small talk. :-)
