It might be worthwhile to note that cogent critiques of the proposition that a machine intelligence might very suddenly "become a singleton Power" do not deny that the inefficiencies of the human cognitive architecture offer room for improvement via recursive introspection and recoding, nor do they deny the improvements readily available via substitution and expansion of more capable hardware and I/O.
They do, however, highlight the distinction between a vastly powerful machine madly exploring vast reaches of a much vaster "up-arrow" space of mathematical complexity, and a machine of the same power whose growth in intelligence -- intelligence being, by definition, necessarily relevant -- is bounded by starvation for relevant novelty in its environment of interaction.
If, Feynman-like, we imagine the present state of knowledge about our world in terms of a distribution of vertical domains, like silos, some broader with relevance to many diverse facets of real-world interaction, some thin and towering into the haze of leading-edge mathematical reality, then we can imagine the powerful machine quickly identifying and making a multitude of latent connections and meta-connections, filling in the space between the silos and even somewhat above -- but to what extent, given the inevitable diminishing returns among the latent, and the resulting starvation for the novel?
Given such boundedness, speculation is redirected to growth in ecological terms, and the Red Queen's Race continues ever faster.
Frelkins and Marshall pretty well sum up my impressions of the exchange between Jaron and EY.
Perhaps pertinent, I'd suggest an essay on OvercomingBias on our unfortunate tendency to focus on the other's statements, rather than on a probabilistic model of the likelihood function generating those statements. Context is crucial to meaning, but must be formed rather than conveyed. Ironically -- but reflecting the fundamentally hard value of intelligence -- such contextual asymmetry appears to work against those who would benefit the most.
More concretely, I'm referring to the common tendency to shake one's head in perplexity and say "He was so wrong, he didn't make much sense at all," rather than to laugh and say "I can see how he thinks that way, within his context (which I may once have shared)."
My (not so "fake") hint:
Think economics of ecologies. Coherence in terms of the average mutual information of the paths of trophic I/O provides a measure of relative ecological effectiveness (absent prediction or agency.) Map this onto the information I/O of a self-organizing hierarchical Bayesian causal model (with, for example, four major strata for human-level environmental complexity) and you should expect predictive capability within a particular domain, effective in principle, in relation to the coherence of the hierarchical model over its context.
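To make the ecological half of that hint concrete, here is a minimal sketch in Python, assuming the Ulanowicz-style formulation of average mutual information from ecological network analysis (the flow matrix is toy data of my own invention). AMI measures how coherently flows are channeled through a network, with no appeal to prediction or agency:

```python
import numpy as np

def average_mutual_information(T):
    """Average mutual information (bits) of a flow network, per
    Ulanowicz's ecological network analysis: a measure of how
    coherently flows are channeled, absent prediction or agency."""
    T = np.asarray(T, dtype=float)
    total = T.sum()                      # total system throughput
    p = T / total                        # joint flow probabilities
    row = p.sum(axis=1, keepdims=True)   # marginal: outflow share per node
    col = p.sum(axis=0, keepdims=True)   # marginal: inflow share per node
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (row * col))
    return np.nansum(terms)             # zero-flow terms contribute nothing

# Toy 3-compartment food web (flows in arbitrary units):
flows = [[0, 10, 0],
         [0, 0, 8],
         [2, 0, 0]]
print(average_mutual_information(flows))
```

Higher AMI means flows more tightly channeled along particular trophic paths; the hint is that the analogous quantity over the I/O of a hierarchical Bayesian model should track its predictive coherence over its context.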
As to comparative evaluation of the intelligence of such models without actually running them, I suspect this is similar to trying to compare the intelligence of phenotypical organisms by comparing the algorithmic complexity of their DNA.
@Tim Tyler: "That's no reason not to talk about goals, and instead only mention something like "utility"."
Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it). Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about "expected utility" or "goals" as it is about promoting a present model of evolving values into one's future, via increasingly effective interaction with one's (necessarily local) environment of interaction. Wash, rinse, repeat. Certainty, goals, and utility are always only a special case, applicable to the extent that the context is adequately specifiable. This is the key to so-called "paradoxes" such as the Prisoner's Dilemma and Parfit's Repugnant Conclusion as well.
Tim, this forum appears to be over-heated and I'm only a guest here. Besides, I need to pack and get on my motorcycle and head up to San Jose for Singularity Summit 08 and a few surrounding days of high geekdom.
I'm (virtually) outta here.
@Eliezer: _There's emotion involved. I enjoy calling people's bluffs._
_Jef, if you want to argue further here, I would suggest explaining just this one phrase "functional self-similarity of agency extended from the 'individual' to groups"._
Eliezer, it's clear that your suggestion isn't friendly. I intended not to argue but rather to share and participate in building better understanding. But you've turned it into a game which I can either play or allow you to use against me. So be it.
The phrase is a simple one, but stripped of context, as you've done here, it may indeed appear meaningless. So to explain, let's first restore context.
Your essay, _Which Parts are "Me"_, highlighted some interesting and significant similarities -- and differences -- in our thinking. Interesting, because they match an epistemological model I held tightly and would still defend against simpler thinking, and significant, because a coherent theory of self, or rather agency, is essential to a coherent meta-ethics.
So I wrote (after trying to establish some similarity of background):
"At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential "me", while fully embracing a pragmatic "me". It was interesting to observe myself then for the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the "individual" to groups.]"
So I offered a hint, of an apparently unexplored (for you) direction of thought, which, given a coherent understanding of the functional role of agency, might benefit your further thinking on meta-ethics.
The phrase represents a simple concept, but rests on a subtle epistemic foundation which, as Mathew C pointed out, tends to bring out vigorous defenses in support of the Core Self. Further to the difficulty, an epistemic foundation cannot be conveyed, but must be created in the mind of the thinker, as described pretty well recently by Melzer in a paper that "stunned" Robin Hanson, entitled Pedagogical Motives for Esoteric Writing. So the phrase is simple, but the meaning depends on background, and along the road to acquiring that background, there is growth.
To break it down: "Functional self-similarity of agency extended from the 'individual' to groups."
"Functional" indicates that I'm referring to similarity in terms of function, i.e. relations of output to input, rather than e.g. similarities of implementation, structure, or appearance. More concretely [I almost neglected to include the concrete.] I'm referring to the *functional* aspects of agency, in essence, action on behalf of perceived interests (an internal model of some sort) in relation to which the agent acts on its immediate environment so as to (tend to) null out any differences.
"Self-similarity" refers to some entity replicated, conserved, re-used over a range of scale. More concretely, I'm referring to patterns of agency which repeat -- in functional terms, even though the implementation may be quite different in structure, substrate, or otherwise.
"Extended from the individual to groups" refers to the scale of the subject, in other words, that functional self-similarity of agency is conserved over increasing scale from the common and popularly conceived case of individual agency, extending to groups, groups of groups, and so on. More concretely, I'm referring to the essential functional similarities, in terms of agency, which are conserved when a model scales for example, from individual human acting on its interests, to a family acting on its interests, to tribe, company, non-profit, military unit, city-state, etc. especially in terms of the dynamics of its interactions with entities of similar (functional) scale, but also with regard to the internal alignments (increasing coherence) of its own nature due to selection for "what works."
As you must realize, regularities observed over increasing scale tend to indicate an increasingly profound principle. That was the potential value I offered to you.
In my opinion, the foregoing has a direct bearing on a coherent meta-ethics, and is far from "fake". Maybe we could work on "increasing coherence with increasing context" next?
Mathew C: "And the biggest threat, of course, is the truth that the self is not fundamentally *real*. When that is clearly seen, the gig is up."
Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the "singularity of self" in favor of the self of agency over increasing context. I usually downplay this aspect until after someone has expressed a practical level of interest, but it's right there out front for those who can see it.
Thanks. Nice to be heard...
Based on the disproportionate reaction from our host, I'm going to sit quietly now.
@Cyan: "... you're going to need more equations and fewer words."
Don't you see a capital sigma representing a series every time I say "increasingly"? ;-)
Seriously though, I read a LOT of technical papers, and it seems to me that many of the beautiful LaTeX equations and formulas serve only to give the *impression* of rigor. And there are few equations that could "prove" anything in this area of inquiry.
What would have helped my case, were it not already long lost in Eliezer's view, is to have provided examples, references, and commentary along with each abstract formulation. I lack the time to do so, so I've always considered my "contributions" to be seeds of thought, to grow or not depending on whether they happen to find fertile soil.
@Eliezer: _I can't imagine why I might have been amused at your belief that you are what a grown-up Eliezer Yudkowsky looks like._
No, but of course I wasn't referring to similarity of physical appearance, nor do I characteristically comment at such a superficial level. Puhleease.
_I don't know if I've mentioned this publicly before, but as you've posted in this vein several times now, I'll go ahead and say it:_
_functional self-similarity of agency extended from the 'individual' to groups_
_I believe that the difficult-to-understand, high-sounding ultra-abstract concepts you use with high frequency and in great volume, are fake. I don't think you're a poor explainer; I think you have nothing to say._
_If I don't give you as much respect as you think you deserve, no more explanation is needed than that, a conclusion I came to years ago._
Well, that explains the ongoing appearance of disdain and dismissal. But my kids used to do something similar, and shortly after I was sometimes gratified to hear an echo of my concepts in their own words.
Let me expand on my "fake" hint of a potential area of growth for your moral epistemology:
If you can accept that the concept of agency is inherent to any coherent meta-ethics, then we might proceed. But you seem to preserve and protect a notion of agency that can't be coherently modeled.
You continue to posit agency that exploits information at a level unavailable to the system, and wave it away with hopes of math that "you don't yet have." Examples are your post today, which has the "real self" somehow dominating lesser aspects of self as if they were quite independent systems, and your "profound" but unmodelable interpretation of isshoukenmei, which bears only a passing resemblance to the very realistic usage I learned while living in Japan.
You continue to speak (and apparently think) in terms of "goals", even when such "goals" can't be effectively specified in the uncertain context of a complex evolving future, and you don't seem to consider the cybernetic or systems-theoretic reality that ultimately no system of interesting complexity, including humans, actually attains long-term goals; rather, each simply tries to null out the difference between its (evolving) internal model and its perception of present reality. All the intelligence is in the transform function effecting its step-wise actions. And that's good enough, but never absolutely perfect. But the good enough that you can have is always preferable to the absolutely perfect that you can never have (unless you intend to maintain a fixed context.)
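In code, that cybernetic claim is nothing more than a negative-feedback loop. A minimal sketch in Python, under my own assumptions (the proportional transform and the drift term are arbitrary illustrations, not anyone's actual proposal):

```python
def run(world: float, model: float, transform, steps: int = 100):
    """No long-term goal is ever 'attained'; the system just keeps
    nulling the difference between its evolving internal model and
    its present perception. The intelligence lives in `transform`."""
    for _ in range(steps):
        error = model - world       # perceived difference
        world += transform(error)   # step-wise action on the environment
        model += 0.01 * error       # the internal model itself keeps evolving
    return world, model

# A crude proportional transform: good enough, never perfect.
world, model = run(world=0.0, model=1.0, transform=lambda e: 0.3 * e)
```

Nothing in the loop ever terminates at a "goal"; it only washes, rinses, and repeats, which is the sense in which goals and utility are a special case of a fixed, specifiable context.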
You posit certainty (e.g. friendliness) as an achievable goal, and use rigorous-sounding terms like "invariant goal" in regard to decision-making in an increasingly uncertain future, but blatantly and blithely ignore concerns, addressed to you over the years by me and others, as to how you think this can possibly work, given the ineluctable combinatorial explosion and the fundamentally, critically underspecified priors.
I realize it's like a Pascal's Wager for you, and I admire your contributions in a sense somewhat tangential to your own, but like an isolated machine intelligence of high processing power but lacking an environment of interaction of complexity similar to its own -- eventually you run off at high speed exploring quite irrelevant reaches of possibility space.
As to my hint to you today: if you have a workable concept of agency, then you might profit from consideration of the functional self-similarity of agencies composed of agencies, and so on, self-similar with increasing scale -- and of how the emergent (yeah, I know you dismiss "emergence" too) dynamics will tend to be perceived as increasingly moral (from within the system, as each of us necessarily is) due to the multi-level selection, and therefore alignment, for "what works" (nulling out the proximal difference between model and perceived reality, wash, rinse, repeat) by agents each acting in their own interest within an ecology of competing interests.
Sheesh, I may be abstract, I may be a bit too out there to relate to easily, but I have a hard time with "fake."
I meant to shake your tree a bit, in a friendly way, but not to knock you out of it. I've said repeatedly that I appreciate the work you do and even wish I could afford to do something similar. I'm a bit dismayed, however, by the obvious emotional response and meanness from someone who prides himself on sharpening the blade of his rationality by testing it against criticism.
In my opinion, EY's point is valid -- to the extent that the actor and observer intelligence share neighboring branches of their developmental tree. Note that for any intelligence rooted in a common "physics", this says less about their evolutionary roots and more about their relative stages of development.
Reminds me a bit of the jarring feeling I got when my ninth-grade physics teacher explained that a scrambled egg is a clear and generally applicable example of increased entropy. [Seems entirely subjective to me, in principle.] Also reminiscent of Kardashev with his "obvious" classes of civilization, lacking consideration of the trend toward increasing ephemeralization of technology.