Highly specific predictions should be lowered in probability when updating on a statement like 'unpredictable'.
That depends on what your initial probability is and why. If it is already low due to updates on specific predictions about the system, then updating on "unpredictable" will increase the probability by weakening those predictions. Since the destruction of humanity is rather important, even if the existential AI risk scenario has low probability, it matters exactly how low.
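A toy calculation in the odds form of Bayes' theorem (all numbers invented) shows the effect:

```python
# Updating in odds form: posterior odds = prior odds * likelihood ratio.
def posterior(prior, likelihood_ratio):
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.10
# Specific predictions argue strongly against the scenario (LR = 1/20)...
print(posterior(prior, 1 / 20))   # ~0.006
# ...but "the system is unpredictable" discounts them (LR = 1/2 instead),
# and the probability climbs back toward the prior.
print(posterior(prior, 1 / 2))    # ~0.053
```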
This of course has the same shape as Pascal's mugging, but I do not believe that SI's claims are of low enough probability to be dismissed as effectively zero.
Not everything is equally easy to describe as equations.
That was in fact my point, which might indicate that we are talking past each other. What I tried to say is that an artificial intelligence system is not necessarily constructed as an explicit optimization process over an explicit model. If the model and the process are implicit in its cognitive architecture, then making predictions about what the system will do in terms of a search is of limited usefulness.
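To illustrate the difference (a toy contrast of my own, with purely illustrative names), compare a system whose model and search are separate, inspectable objects with one where both are baked into learned parameters:

```python
# Explicit: model and search are distinct objects; predictions framed
# "as a search over the model" apply directly.
def explicit_optimizer(model, candidates):
    return max(candidates, key=model)

# Implicit: whatever "model" and "optimization" the system performs are
# entangled in its parameters; there is no search step to point at.
class ImplicitAgent:
    def __init__(self, weights):
        self.weights = weights     # learned elsewhere; opaque

    def act(self, observation):
        score = sum(w * x for w, x in zip(self.weights, observation))
        return "approach" if score > 0 else "avoid"
```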
And even when talking about models, getting back to this:
cutting down the solution space and cutting down the model
On further thought, this is not even necessarily true. The solution space and the model will have to be pre-cut by someone (presumably human engineers) who doesn't know where the solution actually is. A self-improving system will have to expand both, if the solution lies outside them, in order to find it. A system that can reach a solution even when initially over-constrained is more useful than one that can't, and so someone will build it.
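As a toy illustration of that last point (entirely my own construction, not anyone's actual design), here is a search that widens a pre-cut solution space whenever its best candidate lands on the boundary, and so finds an optimum the original constraints excluded:

```python
# A search that expands its pre-cut bounds when the optimum appears to
# lie outside them. The objective and bounds are hypothetical.

def objective(x):
    return -(x - 12.0) ** 2        # true optimum at x = 12

def expanding_search(f, lo=0.0, hi=5.0, steps=50, rounds=10):
    best = lo
    for _ in range(rounds):
        grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
        best = max(grid, key=f)
        if lo < best < hi:
            return best            # interior optimum: the pre-cut space sufficed
        width = hi - lo            # optimum on the boundary: widen and retry
        lo, hi = lo - width, hi + width
    return best

# Starts over-constrained to [0, 5] yet converges near 12.
print(expanding_search(objective))
```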
I think you have a very narrow vision of 'unstable'.
I do not understand what you are saying here. If you mean that by "unstable" I mean some highly specific trajectory that a system which has lost stability will follow, then that is because all the trajectories where the system simply crashes and burns are unimportant. If you have a trillion optimization systems running on a planet at the same time, you have to be really sure that nothing can go wrong.
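The arithmetic behind that caution is simple; a sketch with invented numbers:

```python
# With N independent systems each failing with probability p per period,
# P(at least one failure) = 1 - (1 - p)^N.  All numbers hypothetical.
N = 10 ** 12                       # "a trillion optimization systems"
for p in (1e-15, 1e-12, 1e-9):
    print(p, 1 - (1 - p) ** N)
# Even p = 1e-12 per system gives roughly a 63% chance of at least one
# failure; at planetary scale, "really sure" has to mean really sure.
```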
I just realized I derailed the discussion. The whole question of AGI in a world of specialized AIs is irrelevant to what started this thread. As for the chronology of development, I cannot tell how likely it is that AGI could overtake specialized intelligences. It really depends on whether there is a critical insight still missing for the construction of AI. If AI is just an extension of current software, then specialized intelligences will win for the reasons you state, although some of the caveats I wrote above still apply.
If there is a critical difference in architecture between current software and AI, then whoever hits on that insight will likely overtake everyone else. If they happen to be working on AGI, or on any system entangled with the real world, I don't see how one can guarantee that the consequences will not be catastrophic.
Too much anthropomorphization.
Well, I in turn believe you are applying overzealous anti-anthropomorphization, which is normally a perfectly good heuristic when dealing with software. But the fact is that human intelligence is the only thing in the "intelligence" reference class we have, and although AIs will almost certainly be different, they will not necessarily be different in every possible way. This holds especially for AIs that are either directly based on human-like architectures or designed to directly interact with humans, which requires having at least some human-compatible models and behaviours.
Recent published SI work concerns AI safety. They have not recently published results on AGI, to whatever extent that is separable from safety research, for which I am very grateful. Common optimization algorithms do apply to mathematical models, but that doesn't limit their real world use; an implemented optimization algorithm designed to work with a given model can do nifty things if that model roughly captures the structure of a problem domain. Or to put it simply, models model things. SI is openly concerned with exactly that type of optimization, and how it becomes unsafe if enough zealous undergrads with good intentions throw this, that, and their grandmother's hippocampus into a pot until it supposedly does fantastic venture capital attracting things. The fact that SI is not writing papers on efficient adaptive particle swarms is good and normal for an organization with their mission statement. Foom was a metaphorical onomatopoeia for an intelligence explosion, which is indeed a commonly used sense of the term "technological singularity".
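For concreteness, since particle swarms came up: below is a minimal textbook-style sketch (my own toy, not anything SI publishes) of an optimization algorithm working over an explicit model, here just a plain Python function standing in for the problem domain.

```python
# Minimal particle swarm optimization over an explicit "model" (an
# objective function). Parameters are conventional textbook defaults.
import random

def pso(model, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = max(pbest, key=model)               # swarm's best position
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if model(pos[i]) > model(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=model)
    return gbest

# A toy objective whose structure the swarm exploits; optimum near (1, -2).
print(pso(lambda x: -(x[0] - 1) ** 2 - (x[1] + 2) ** 2))
```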
Any references? I haven't seen anything that is in any way relevant to the type of optimization we currently know how to implement. SI is concerned with the notion of a 'utility function', which appears very fuzzy and incoherent. What is it, a mathematical function? What does it take as input and what does it return as output? The number of paperclips in the universe is given as an example of a 'utility function', but you can't have 'the universe' as the input domain of a mathematical function. In an AI, the 'utility function' is defined on the model rather than on the world; and lacking a 'utility function' defined on the world, the work of ensuring correspondence between the model and the world is not an instrumental sub-goal arising from maximization of the 'utility function' defined on the model.

This is a rather complicated, technical issue, and to be honest the SI stance looks indistinguishable from the confusion that would result from an inability to distinguish a function of the model from a property of the world, and from the subsequent assumption that correspondence between model and world is an instrumental goal of any utility maximizer. (Furthermore, that sort of confusion would normally be expected as the null hypothesis when evaluating an organization so far outside the ordinary criteria of competence.)
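To make the distinction concrete, here is a deliberately crude toy of my own construction (not SI's formalism, every name invented): the only thing the code can evaluate is the agent's internal model state, and nothing in the objective rewards keeping that state in correspondence with the world.

```python
# Toy illustration: a 'utility function' is necessarily defined on the
# agent's internal model, because that is all the program can touch.

class Agent:
    def __init__(self):
        self.model = {"paperclips": 0}   # internal model state, not the world

    def utility(self, model_state):
        # Defined on the model: the only thing the code can evaluate.
        return model_state["paperclips"]

    def choose(self, actions):
        # Maximizes utility over *predicted* model states; nothing in this
        # objective rewards making the model match the world.
        return max(actions, key=lambda a: self.utility(a(self.model)))

# Two hypothetical actions: one changes the world as modelled, the other
# only edits the model's bookkeeping. The utility function, being a
# function of the model, cannot tell the difference.
act_in_world = lambda m: {"paperclips": m["paperclips"] + 1}
edit_model   = lambda m: {"paperclips": 10 ** 9}

print(Agent().choose([act_in_world, edit_model]) is edit_model)  # True
```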
edit: also, by the way, it would improve my opinion of this community if, when you think I am incorrect, you would explain your reasoning rather than just clicking the downvote button. While you may want to signal to me that I am wrong by voting, that, without other information, is unlikely to change my view on the technical side of the issue. Keep in mind that one cannot be totally certain of anything: while this may be a normal discussion forum that happens to be owned by an AI researcher who is misunderstood due to a poor ability to communicate the key concepts he uses, it might also be a support ground for pseudoscientific research, and a norm of substance-less disagreement would seem more probable in the latter than in the former.