
In response to Dunbar's Function
Comment author: Jef_Allbright 31 December 2008 08:45:44PM 0 points

I think it bears repeating here:

Influence is only one aspect of the moral formula; the other aspect is the particular context of values being promoted.

These can be quite independent, as with a tribal chief of substantial influence acting to promote the perceived values of his tribe, versus the same chief acting to promote his narrower personal values. [Note that the difference is not one of fitness but of perceived morality. Fitness is assessed only indirectly, within an open context.]

In response to A New Day
Comment author: Jef_Allbright 31 December 2008 06:55:09PM 1 point

Excellent advice, Eliezer!

I have a game I play every few months or so. I get on my motorcycle, usually on a Friday, pack spare clothes and toiletries, and head out in a random direction. At almost every branch in the road I choose randomly, and I take my time exploring and enjoying the journey. After a couple of days I return hugely refreshed, creative potential flowing.

In response to Dunbar's Function
Comment author: Jef_Allbright 31 December 2008 05:33:51PM 0 points

But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band...

If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best - or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.

Can you see the contradiction: bemoaning that people are now "less in control" while exercising ever-increasing freedom of expression? Finding it harder to "find something important" with so many more opportunities available? Can you see the confusion over a context that is increasingly not ours to control?

Eliezer, here again you demonstrate your bias in favor of the context of the individual. Dunbar's (and others') observations on organizational dynamics apply generally, while your interpretation appears to speak quite specifically of your experience of Western culture and your own perceived place in the scheme of things.

Plentiful contrary views exist to support a sense of meaning, purpose, and pride implicit in the recognition of competent contribution to community, without the (assumed) need to be seen as extraordinary. In modern Japan and much of Asia especially, the norm is still to bask in recognition of competent contribution and to recoil from any suggestion that one might substantially stand out. False modesty this is not. In Western society too, examples of fulfillment and recognition through service run deep, although this is belied by the (entertainment) media.

Within any society, recognition confers added fitness, but one need not be extraordinary in order to satisfice.

But if people keep getting smarter and learning more - expanding the number of relationships they can track, maintaining them more efficiently... [relative to the size of the interacting population]... then eventually there could be a single community of sentients, and it really would be a single community.

Compare:

But as the cultural matrix keeps getting smarter—supporting increasing degrees of freedom with increasing probability—then eventually you could see self-similarity of agency over increasing scale, and it really would be a fractal agency.

Well, regardless of present point of view—wishing all a rewarding New Year!

Comment author: Jef_Allbright 11 December 2008 08:46:40PM -1 points

Ironic, such passion directed toward bringing about a desirable singularity, rooted in an impenetrable singularity of faith in X. X yet to be defined, but believed to be [meaningful|definable|implementable] independent of future context.

It would be nice to see an essay attempting to explain an information- or systems-theoretic basis supporting such an apparent contradiction (definition independent of context).

Or, if the one is arguing for a (meta)invariant under a stable future context, an essay on the extended implications of such stability, if the one would attempt to make sense of "stability, extended."

Or, a further essay on the wisdom of ishoukenmei, distinguishing between the standard meaning of giving one's all within a given context, and your adopted meaning of giving one's all within an unknowable context.

Eliezer, I recall that as a child you used to play with infinities. You know better now.

Comment author: Jef_Allbright 10 December 2008 03:38:43PM 0 points

Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.

Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
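A minimal sketch of that mapping (my own illustration; the grid, the Gaussian forms, and the numbers are arbitrary assumptions): two "instruments" that share the same prior and the same likelihood model, and that observe the same reading, necessarily arrive at identical posteriors, so the only remaining question is how each models its relationship to the observed object.

```python
import numpy as np

# Two "instruments" share a prior over the measured quantity theta and a
# Gaussian model of their own measurement noise (the likelihood). Given the
# same reading of the same object, their posteriors are identical by
# construction; disagreement could only enter through differing priors,
# differing likelihood models, or differing data.
theta = np.linspace(-5.0, 5.0, 1001)            # discretized parameter space (illustrative)
prior = np.exp(-0.5 * theta**2)                 # shared standard-normal prior, unnormalized
prior /= prior.sum()

def posterior(reading: float, noise_std: float = 0.5) -> np.ndarray:
    """Bayes' rule on the grid: posterior is proportional to prior times likelihood."""
    likelihood = np.exp(-0.5 * ((reading - theta) / noise_std) ** 2)
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

reading = 1.3                                   # the same object, measured once
instrument_a = posterior(reading)
instrument_b = posterior(reading)
print("max difference between the two posteriors:", np.abs(instrument_a - instrument_b).max())
```

The printed difference is exactly zero; agreement here is a consequence of shared structure, not of any negotiation between rational actors.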

Comment author: Jef_Allbright 09 December 2008 07:56:22PM 0 points

I'll second jb's request for denser, more highly structured representations of Eliezer's insights. I read all this stuff and find it entertaining and sometimes edifying, but disappointing in that it's not converging on either a central thesis or central questions (preferably both).

Comment author: Jef_Allbright 06 December 2008 07:03:30PM -1 points

Crap. Will the moderator delete posts like that one, which appear to be so off the Mark?

Comment author: Jef_Allbright 06 December 2008 06:24:50PM -1 points

billswift wrote:

but the self-taught will simply extend their knowledge when a lack appears to them.

Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, given the biases that tend to predominate among those whose very nature enforces a relatively narrow (albeit deep) scope of interaction, the emphasis should fall not on "will simply extend" but on "when a lack appears."

In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from the already-abstracted explicit and latent regularities in our environment, and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.

Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.

In response to Worse Than Random
Comment author: Jef_Allbright 11 November 2008 08:42:58PM 3 points

A few posters might want to read up on Stochastic Resonance, which was surprisingly surprising a few decades ago. I'm getting a similar impression now from recent research in the field of Compressive Sensing, which ostensibly violates the Nyquist sampling limit, highlighting the immaturity of the general understanding of information theory.
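For readers unfamiliar with the effect, here is a minimal sketch in Python (my own illustration; the signal amplitude, threshold, and noise levels are arbitrary choices): a sub-threshold sine wave is invisible to a simple threshold detector when there is no noise, becomes partially detectable at moderate noise, and is washed out again when the noise grows large.

```python
import numpy as np

rng = np.random.default_rng(0)

# A periodic signal too weak to cross the detector threshold on its own.
t = np.linspace(0.0, 10.0, 5000)
signal = 0.3 * np.sin(2.0 * np.pi * t)      # amplitude 0.3
threshold = 0.5                             # detector fires only above 0.5

def detection_quality(noise_std: float) -> float:
    """Correlation between the thresholded output and the hidden signal."""
    detected = (signal + rng.normal(0.0, noise_std, size=t.shape) > threshold).astype(float)
    if detected.std() == 0.0:
        return 0.0                          # detector never fires: nothing gets through
    return float(np.corrcoef(detected, signal)[0, 1])

# Detection quality rises with a little noise, then falls again: the
# hallmark non-monotonic signature of stochastic resonance.
for sigma in (0.0, 0.1, 0.3, 1.0, 3.0):
    print(f"noise std {sigma:.1f} -> correlation {detection_quality(sigma):.3f}")
```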

In my opinion, there's nothing especially remarkable here other than the propensity to conflate the addition of noise to data with the addition of "noise" (a stochastic element) to the (search for) data.

This confusion appears to map very well onto the cybernetic distinction between intelligently knowing the answer and intelligently controlling for the answer.
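A minimal sketch of that distinction (my own illustration, using an arbitrary bumpy objective and arbitrary noise levels): a stochastic element in the search, here larger random proposals under greedy acceptance, helps a hill climber escape local maxima, while comparable noise injected into the objective readings themselves, i.e. into the data, does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# A bumpy 1-D objective: a broad global peak at x = 0 overlaid with fast
# ripples, so purely local greedy ascent stalls on a nearby ripple.
def f(x: float) -> float:
    return float(np.sinc(x) + 0.05 * np.cos(7.0 * x))

def greedy_ascent(x0: float, proposal_std: float, data_noise_std: float,
                  steps: int = 2000) -> float:
    """Greedy ascent with two very different kinds of 'noise'.

    proposal_std   -- stochastic element in the *search*: spread of proposed moves.
    data_noise_std -- noise added to the *data*: corruption of each objective reading.
    """
    x = x0
    for _ in range(steps):
        proposal = x + rng.normal(0.0, proposal_std)
        # Accept or reject based on (possibly corrupted) readings of the objective.
        if f(proposal) + rng.normal(0.0, data_noise_std) > f(x) + rng.normal(0.0, data_noise_std):
            x = proposal
    return f(x)  # score the true objective at wherever the search ended up

starts = rng.uniform(-10.0, 10.0, size=30)

def average(proposal_std: float, data_noise_std: float) -> float:
    return float(np.mean([greedy_ascent(x0, proposal_std, data_noise_std) for x0 in starts]))

print("small proposals, clean readings :", average(0.02, 0.0))   # stuck on nearby ripples
print("large proposals, clean readings :", average(0.50, 0.0))   # noise in the search helps
print("small proposals, noisy readings :", average(0.02, 0.2))   # noise in the data does not
```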

Comment author: Jef_Allbright 09 November 2008 10:58:41PM 1 point

Jo -

Above all else, be true to yourself. This doesn't mean you must or should be bluntly open with everyone about your own thoughts and values; on the contrary, it means taking personal responsibility for applying your evolving thinking as a sharp instrument for the promotion of your evolving values.

Think of your values-complex as a fine-grained hierarchy, with some elements more fundamental, serving to support a wider variety of more dependent values. For example, better health, both physical and mental, is probably more fundamental, being necessary to support better relationships; and a few deeper relationships will tend to support a greater variety of subsidiary values than would a larger number of shallower relationships, and so on.

Of course no one can compute and effectively forecast the future in such complex terms, but to the extent you can clarify for yourself the broad outlines, in principle, of (1) your values and (2) your thinking on how to promote those values into the future you create, you'll tend to proceed in the direction of increasing optimality. Wash, rinse, repeat.

We wish you the best. Your efforts toward increasingly intelligent creation of an increasingly desirable world contribute to us all.
