Excellent advice, Eliezer!
I have a game I play every few months or so. I get on my motorcycle, usually on a Friday, pack spare clothes and toiletries, and head out in a random direction. At almost every branch in the road I choose randomly, and take my time exploring and enjoying the journey. After a couple of days, I return hugely refreshed, creative potential flowing.
But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band... If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best - or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.
Can you see the contradiction in bemoaning that people are now "less in control" while exercising ever-increasing freedom of expression? Ha...
Ironic, such passion directed toward bringing about a desirable singularity, rooted in an impenetrable singularity of faith in X. X yet to be defined, but believed to be [meaningful|definable|implementable] independent of future context.
It would be nice to see an essay attempting to explain an information or systems-theoretic basis supporting such an apparent contradiction (definition independent of context.)
Or, if the one is arguing for a (meta)invariant under a stable future context, an essay on the extended implications of such stability, if the one wou...
Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.
Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
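If it helps to see that mapping spelled out, here is a minimal sketch under a toy setup of my own choosing (a shared Beta prior over a coin's bias with Bernoulli observations -- the particular numbers are arbitrary): two "instruments" with the same prior and likelihood function, observing the same evidence, necessarily report the same posterior.

# Two "instruments" (here, Bayesian agents) sharing the same nature (prior and
# likelihood function) and measuring the same object (the same evidence) must agree.
def posterior(prior_a, prior_b, observations):
    """Conjugate Beta-Bernoulli update: return the posterior (alpha, beta)."""
    heads = sum(observations)
    tails = len(observations) - heads
    return prior_a + heads, prior_b + tails

evidence = [1, 0, 1, 1, 0, 1]               # the shared "object" being measured

agent_1 = posterior(2.0, 2.0, evidence)     # same prior, same likelihood...
agent_2 = posterior(2.0, 2.0, evidence)     # ...same evidence
print(agent_1 == agent_2)                   # True: agreement is forced by shared nature

Disagreement, then, is evidence about differences in priors, likelihood models, or observations -- the relationship of the observer to the observed -- rather than about the object alone.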
I'll second jb's request for denser, more highly structured representations of Eliezer's insights. I read all this stuff, find it entertaining and sometimes edifying, but disappointing in that it's not converging on either a central thesis or central questions (preferably both.)
Crap. Will the moderator delete posts like that one, which appear to be so off the mark?
billswift wrote:
…but the self-taught will simply extend their knowledge when a lack appears to them.
Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on "will simply extend" but on "when a lack a...
A few posters might want to read up on Stochastic Resonance, which was surprisingly surprising a few decades ago. I'm getting a similar impression now from recent research in the field of Compressive Sensing, which ostensibly violates the Nyquist sampling limit, highlighting the immaturity of the general understanding of information theory.
In my opinion, there's nothing especially remarkable here other than the propensity to conflate the addition of noise to data with the addition of "noise" (a stochastic element) to the search for data.
This confusion appears to map very well onto the cybernetic distinction between intelligently knowing the answer and intelligently controlling for the answer.
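For anyone who hasn't encountered the first of these, a minimal sketch of the effect (the signal amplitude, threshold, and noise levels below are arbitrary choices of mine, purely for illustration): a sub-threshold periodic signal is invisible to a simple threshold detector until a moderate amount of noise is added, after which the detector's output begins to track the signal; too much noise washes it out again.

import numpy as np

# Stochastic resonance in miniature: a sub-threshold sine wave crosses a fixed
# detection threshold only with the help of an intermediate level of noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 5000)
signal = 0.8 * np.sin(t)        # amplitude 0.8, below the detection threshold of 1.0
threshold = 1.0

for noise_std in (0.0, 0.5, 5.0):
    detected = (signal + rng.normal(0.0, noise_std, t.size)) > threshold
    if detected.any():
        # Correlation between the detector's binary output and the hidden signal:
        # clearly positive at moderate noise, near zero when noise dominates.
        corr = np.corrcoef(detected.astype(float), signal)[0, 1]
        print(f"noise std {noise_std}: detector/signal correlation = {corr:.2f}")
    else:
        print(f"noise std {noise_std}: no threshold crossings at all")

None of this adds information to the signal itself; it only changes what a crude threshold detector can extract from it.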
Jo -
Above all else, be true to yourself. This doesn't mean you must or should be bluntly open with everyone about your own thoughts and values; on the contrary, it means taking personal responsibility for applying your evolving thinking as a sharp instrument for the promotion of your evolving values.
Think of your values-complex as a fine-grained hierarchy, with some elements more fundamental and serving to support a wider variety of more dependent values. For example, your better health, both physical and mental, is probably more fundamental and necessar...
In my opinion, EY's point is valid—to the extent that the actor and observer intelligences share neighboring branches of their developmental tree. Note that for any intelligence rooted in a common "physics", this says less about their evolutionary roots and more about their relative stages of development.
Reminds me a bit of the jarred feeling I got when my ninth grade physics teacher explained that a scrambled egg is a clear and generally applicable example of increased entropy. [Seems entirely subjective to me, in principle.] Also reminiscent of Kardashev with his "obvious" classes of civilization, lacking consideration of the trend toward increasing ephemeralization of technology.
@pk I don't understand. Am I too dumb or is this gibberish?
It's not so complicated; it's just that we're so formal...
It might be worthwhile to note that cogent critiques of the proposition that a machine intelligence might very suddenly "become a singleton Power" do not deny the inefficiencies of the human cognitive architecture offering improvement via recursive introspection and recoding, nor do they deny the improvements easily available via substitution of more capable hardware and expansion of I/O.
They do, however, highlight the distinction between a vastly powerful machine madly exploring vast reaches of a much vaster "up-arrow" space of ...
Frelkins and Marshall pretty well sum up my impressions of the exchange between Jaron and EY.
Perhaps pertinent, I'd suggest an essay on OvercomingBias on our unfortunate tendency to focus on the other's statements, rather than focusing on a probabilistic model of the likelihood function generating those statements. Context is crucial to meaning, but must be formed rather than conveyed. Ironically—but reflecting the fundamentally hard value of intelligence—such contextual asymmetry appears to work against those who would benefit the most.
More concretely, ...
My (not so "fake") hint:
Think economics of ecologies. Coherence in terms of the average mutual information of the paths of trophic I/O provides a measure of relative ecological effectiveness (absent prediction or agency.) Map this onto the information I/O of a self-organizing hierarchical Bayesian causal model (with, for example, four major strata for human-level environmental complexity) and you should expect predictive capability within a particular domain, effective in principle, in relation to the coherence of the hierarchical model over it...
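If "average mutual information of the paths of trophic I/O" sounds opaque, here is a minimal sketch of the measure as I intend it, in the spirit of Ulanowicz-style ecological network analysis, using a toy flow matrix I've simply made up:

import numpy as np

def average_mutual_information(flows):
    """Average mutual information (in bits) of a flow network, where
    flows[i, j] is the flow of material/energy from compartment i to j."""
    total = flows.sum()
    p = flows / total                        # joint distribution over (source, sink)
    out = p.sum(axis=1, keepdims=True)       # marginal over sources
    inp = p.sum(axis=0, keepdims=True)       # marginal over sinks
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (out * inp)), 0.0)
    return terms.sum()

# Toy trophic flows (invented numbers): rows are sources, columns are sinks.
flows = np.array([[0.0, 10.0, 2.0],
                  [0.0,  0.0, 8.0],
                  [1.0,  0.0, 0.0]])
print(f"AMI = {average_mutual_information(flows):.3f} bits")

The more constrained (coherent) the routing of flows, the higher this number; the analogy is to a hierarchical model whose internal information flows are similarly coherent with respect to its environment.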
@Tim Tyler: "That's no reason not to talk about goals, and instead only mention something like "utility"."
Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it.) Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical und...
@Eliezer: There's emotion involved. I enjoy calling people's bluffs.
Jef, if you want to argue further here, I would suggest explaining just this one phrase "functional self-similarity of agency extended from the 'individual' to groups".
Eliezer, it's clear that your suggestion isn't friendly, and I intended not to argue, but rather to share and participate in building better understanding. But you've turned it into a game which I can either play, or allow you to use against me. So be it.
The phrase is a simple one, but stripped of context, as...
Matthew C: "And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up."
Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the "singularity of self" in favor of the self of agency over increasing context. I usually try to downplay this aspect until after someone has expressed a practical level of interest, but it's right there out front for those who can see it.
Thanks. Nice to be heard...
Based on the disproportionate reaction from our host, I'm going to sit quietly now.
@Cyan: "... you're going to need more equations and fewer words."
Don't you see a capital sigma representing a series every time I say "increasingly"? ;-)
Seriously though, I read a LOT of technical papers, and it seems to me that many of the beautiful LaTeX equations and formulas serve only to give an impression of rigor. And there are few equations that could "prove" anything in this area of inquiry.
What would help my case, if it were not already long lost in Eliezer's view, is to have provided examples, references, and commenta...
@Eliezer: I can't imagine why I might have been amused at your belief that you are what a grown-up Eliezer Yudkowsky looks like.
No, but of course I wasn't referring to similarity of physical appearance, nor do I characteristically comment at such a superficial level. Puhleease.
I don't know if I've mentioned this publicly before, but as you've posted in this vein several times now, I'll go ahead and say it:
functional self-similarity of agency extended from the 'individual' to groups
I believe that the difficult-to-understand, high-sounding ultra-abstract co...
Matthew C quoting Einstein: "A human being is a part of the whole, called by us, "Universe," a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest -- a kind of optical delusion of his consciousness."
Further to this point, and Eliezer's description of the Rubicon: It seems that recognizing (or experiencing) that perceived separation is a step necessary to its eventual resolution. Those many who've never even noticed to ask the question will not notice the answer, no matter how close to them it may be.
Eliezer, A few years ago I sat across from you at dinner and mentioned how much you reminded me of my younger self. I expected, incorrectly, that you would receive this with the appreciation of a person being understood, but saw instead on your face an only partially muted expression of snide mirth. For the next hour you sat quietly as the conversation continued around us, and on my drive home from the Bay Area back to Santa Barbara I spent a bit more time reflecting on the various interactions during the dinner and updating my model of others and you.
For...
@Thom: Why don't you write an article / sequence of articles here, on LW, on your now significantly more coherent and extensive model of reality? I, sincerely, would be really glad to read that.
@G: " if ethics were all about avoiding "getting caught", then the very idea that there could be an ethical "right thing to do" as opposed to what society wants one to do would be incoherent."
Well, I don't think anyone here actually asserted that the basis of ethics was avoiding getting caught, or even fear of getting caught. It seems to me that Eliezer posited an innate moral sense inhibiting risk-taking in the moral domain, and in my opinion this is more a reflection of his early childhood environment of development than an...
@George Weinberg: "...from an evolutionary perspective: why do we have a sense that we ought to do what is right as opposed to what society wants us to do?"
In other words, why don't humans function as mindless drones serving the "greater good" of their society? Like ants or bees? Well, if you were an ant or a bee, even one capable of speculating on evolutionary theory, you wouldn't ask that question, but rather its obverse. ;-)
Peter Watts wrote an entertaining bit of fiction, Blindsight, on a similar question, but to ask why would evo...
@Caledonian: "...we must therefore conclude that a fatal flaw exists in our model..."
It's not necessarily that a "fatal flaw" exists in a model, but that all models are necessarily incomplete.
Eliezer's reasoning is valid and correct -- over a limited context of observations supporting meaning-making. It may help to consider that groups promote individual members, biological organisms promote genes, genes promote something like "material structures of increasing synergies"...
In cybernetic terms, in the bigger picture, there's...
Eliezer: "The problem is that it's nigh mathematically impossible for group selection to overcome a countervailing individual selection pressure..."
While Eliezer's point here is quite correct within its limited context of individual selection versus group selection, it seems obvious, supported by numerous examples in nature around us, that his case is overly simplistic, failing to address multi-level or hierarchical selection effects, and in particular, the dynamics of selection between groups.
This would appear to bear also on the difficulty of comprehending selection between (and also within) multi-level agencies in the moral domain.
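For those who want the standard formal handle on that multi-level point -- offered only as the textbook Price-equation partition (equal-sized groups, no transmission bias), not as a reconstruction of Eliezer's math -- write z for the trait, w for fitness, i for individuals within group k, and capital letters for group means:

\bar{w}\,\Delta\bar{z} = \underbrace{\mathrm{Cov}_k\!\left(W_k, Z_k\right)}_{\text{selection between groups}} + \underbrace{\mathrm{E}_k\!\left[\mathrm{Cov}_i\!\left(w_{ik}, z_{ik}\right)\right]}_{\text{selection within groups}}

Group-level selection "wins" only where the first term outweighs a countervailing second term -- which is the sense in which Eliezer's point is correct -- and hierarchical or multi-level selection is just this same partition iterated across levels.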
odf23ds: "Ack. Could you please invent some terminology so you don't have to keep repeating this unwieldy phrase?"
I'm eager for an apt idiom for the concept, and one also for "increasing coherence over increasing context."
It seems significant, and indicative of our cultural unfamiliarity -- even discomfort -- with concepts of systems, information, and evolutionary theory, that we don't have such shorthand.
But then I look at the gross misunderestimation of almost every issue of any complexity at every level of supposed sophistication of social decision-making, and then geek speak seems not so bad.
Suggestions?
Russell: "ethics consists of hard-won wisdom from many lifetimes, which is how it is able to provide me with a safety rail against the pitfalls I have yet to encounter in my single lifetime."
Yes, generations of selection for "what works" encoded in terms of principles tends to outweigh assessment within the context of an individual agent in terms of expected utility -- to the extent that the present environment is representative of the environment of adaptation. To the extent it isn't, then the best one can do is rely on the increasing...
I'm in strong agreement with Peter's examples above. I would generalize by saying that the epistemic "dark side" tends to arise whenever there's an implicit discounting of the importance of increasing context. In other words, whenever, for the sake of expediency, "the truth," "the right," "the good," etc., is treated categorically rather than contextually (or equivalently, as if the context were fixed or fully specified.)
Phil: "Is that on this specific question, or a blanket "I never respond to Phil or Jef" policy?"
I was going to ask the same question, but assumed there'd be no answer from our gracious host. Disappointing.
Eliezer: "I'm not responding to Phil Goetz and Jef Allbright. And you shouldn't infer my positions from what they seem to be arguing with me about - just pretend they're addressing someone else."
Huh. That doesn't feel very nice.
@Cyan: "Hostile hardware" -- meaning that an agent's values-complex (essentially the agent's nature, driving its actions) contains misaligned elements, even to the point of internal opposition at some level(s) of the complex hierarchy of values -- is addressed by my formulation in the "increasing coherence" term. Further, I did try to convey how this is applicable to any moral agent, regardless of form, substrate, or subjective starting point.
I'm tempted to use n's very nice elucidation of the specific example of political corrupti...
@Cyan: Substituting "consider only actions that have predictable effects..." is for me much clearer than "limit the universe of discourse to actions that have predictable effects..." ["and note that Eliezer's argument still makes strong claims about how humans should act."]
But it seems to me that I addressed this head-on at the beginning of my initial post, saying "Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends."
The infamous "Trolley Paradox" does not d...
Cyan: "...tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions."
On the contrary, promotion into the future of a [complex, hierarchical] evolving model of values of increasing coherence over increasing context would seem to be central to the topic of this essay.
Fundamentally, any system, through interaction with its immediate environment, always only expresses its values (its physical nature.) "Intention", corresponding to "free-will" is merel...
Phil: "I don't know what "a model of evolving values increasingly coherent over increasing context, with effect over increasing scope of consequences" means."
You and I engaged briefly on this four or five years ago, and I have yet to write the book. [Due to the explosion of branching background requirements that would ensue.] I have, however, effectively conveyed the concept face to face to very small groups.
I keep seeing Eliezer orbiting this attractor, and then veering off as he encounters contradictions to a few deeply held assumptions. I remain hopeful that the prodigious effort going into the essays on this site will eventually (and virtually) serve as that book.
There's really no paradox, nor any sharp moral dichotomy between human and machine reasoning. Of course the ends justify the means -- to the extent that any moral agent can fully specify the ends.
But in an interesting world of combinatorial explosion of indirect consequences, and worse yet, critically underspecified inputs to any such supposed moral calculations, no system of reasoning can get very far betting on longer-term specific consequences. Rather, the moral agent must necessarily fall back on heuristics, fundamentally hard-to-gain wisdom based on ...
"I don't think that even Buddhism allows that."
Remove whatever cultural or personal contextual trappings you find draped over a particular expression of Buddhism, and you'll find it very clear that Buddhism does "allow" that, or more precisely, un-asks that question.
As you chip away at unfounded beliefs, including the belief in an essential self (however defined), or the belief that there can be a "problem to be solved" independent of a context for its specification, you may arrive at the realization of a view of the world flippe...
It seems you've missed the point here on a point common to Eastern Wisdom and to systems theory. The "deep wisdom" which you would mock refers to the deep sense there is no actual "self" separate from that which acts, thus thinking in terms of "trying" is an incoherent and thus irrelevant distraction. Other than its derivative implication that to squander attention is to reduce one's effectiveness, it says nothing about the probability of success, which in systems-theoretic terms is necessarily outside the agent's domain.
Remi...
Among the many excellent, and some inspiring, contributions to OvercomingBias, this simple post, together with its comments, is by far the most impactful for me. It's scary in much the same way as the general public's approach to selecting their elected representatives and leaders.
For me, a highlight of each year is a multi-day gathering of about 40 individuals selected for their intelligence, integrity and passion to make the world a better place. We share our current thinking and projects and actively refine and synergize plans for the year ahead. Nearly everyone there displays perceptiveness, creativity, joy of life, "sparkle", well above the norm, but -- these qualities are NOT highly predictive of effectiveness outside the individual's preferred environment.
@Roland
I suppose you could google "(arrogant OR arrogance OR modesty) eliezer yudkowsky" and have plenty to digest. Note that the arrogance at issue is neither dishonest nor unwarranted, but it is an impairment, and a consequence of trade-offs which, from within a broader context, probably wouldn't be taken in the same way.
That's as far as I'm willing to entertain this line of inquiry, whose ostensibly neutral request for facts appears to betray an undercurrent of offense.
Eliezer, I've been watching you with interest since 1996 due to your obvious intelligence and "altruism." From my background as a smart individual with over twenty years managing teams of Ph.D.s (and others with similar non-degreed qualifications) solving technical problems in the real world, you've always struck me as near but not at the top in terms of intelligence. Your "discoveries" and developmental trajectory fit easily within the bounds of my experience of myself and a few others of similar aptitudes, but your (sheltered) arrog...
I see this discussion over the last several months bouncing around, teasingly close to a coherent resolution of the ostensible subjective/objective dichotomy applied to ethical decision-making. As a perhaps pertinent meta-observation, my initial sentence may promulgate the confusion with its expeditious wording of "applied to ethical decision-making" rather than a more accurate phrasing such as "applied to decision-making assessed as increasingly ethical over increasing context."
Those who in the current thread refer to the essential el...
Watching the ensuing commentary, I'm drawn to wishfully imagine a highly advanced Musashi, wielding his high-dimensional blade of rationality such that in one stroke he delineates and separates the surrounding confusion from the nascent clarity. Of course no such vorpal katana could exist, for if it did, it would serve only to better clear the way for its successors.
I see a preponderance of viewpoints representing, in effect, the belief that "this is all well and good, but how will this guide me to the one true prior, from which Archimedean point one ...
Eliezer, it's a pleasure to see you arrive at this point. With an effective understanding of the subjective/objective aspects supporting a realistic metaethics, I look forward to your continued progress and contributions in terms of the dynamics of increasingly effective evolutionary (in the broadest sense) development for meaningful growth, promoting a model of (subjective) fine-grained, hierarchical values with increasing coherence over increasing context of meaning-making, implementing principles of (objective) instrumental action increasingly effective ove...
Anon wrote: "Any question of ethics is entirely answered by arbitrarily chosen ethical system, therefore there are no "right" or "better" answers."
Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.
Once again we've highlighted the immaturity of present-day moral thinking -- the kind that leads inevitably to Parfit's Repugnant Conclusion. But any paradox is merely a matter of insufficient context; in the bigger picture all the pieces must fit.
Here we have people struggling over the relative moral weight of torture versus dust specks, without recognizing that there is no objective measure of morality, but only objective measures of agreement on moral values.
The issue at hand can be modeled coherently in terms of the relevant distances (regardless of...
"I think people would be more comfortable with your conclusion if you had some way to quantify it; right now all we have is your assertion that the math is in the dust speck's favor."
The actual tipping point depends on your particular subjective assessment of relative utility, but its exact location doesn't matter; what matters is that a crossover exists at some point, and therefore circular preference orderings, like San Jose --> San Francisco --> Oakland --> San Jose, are incoherent.
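To make the crossover concrete -- with numbers that are purely illustrative, not an estimate of anyone's actual utilities -- suppose one dust speck carries some tiny but nonzero disutility \varepsilon and the torture carries disutility U. Then

N\varepsilon > U \iff N > \frac{U}{\varepsilon}

so with, say, \varepsilon = 10^{-9} and U = 10^{9} (both invented), any N beyond 10^{18} tips the balance. The existence of such an N is all the argument needs; only its location is subjective.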
I think it bears repeating here:
Influence is only one aspect of the moral formula; the other aspect is the particular context of values being promoted.
These can be quite independent, as with a tribal chief, with substantial influence, acting to promote the perceived values of his tribe, vs. the chief acting to promote his narrower personal values. [Note that the difference is not one of fitness but of perceived morality. Fitness is assessed only indirectly within an open context.]