Related to: inferential distance, fun theory sequence.

“The arrow of human history…points towards larger quantities of non-zero-sumness. As history progresses, human beings find themselves playing non-zero-sum games with more and more other human beings. Interdependence expands, and social complexity grows in scope and depth.” (Robert Wright, Nonzero: The Logic of Human Destiny.)

What does it mean for a human society to be more complex? Where does new information come from, and where in the system is it stored? What does it mean for everyday people to live in a simple versus a complex society?

There are certain kinds of complexity inherent in the environment: complexity that existed before there were human societies at all, and would go on existing without them. Even the simplest human society needs to adapt to these factors in order to survive. For example: climate and weather are unavoidable features of the planet, and humans still spend huge amounts of resources dealing with changing seasons, droughts, and extremes of heat and cold. Certain plants grow in certain types of soil, and different animals have different migratory patterns. Even the most basic hunter-gatherer groups needed to store and pass on knowledge of these patterns.

But even early human societies had a lot more than the minimum amount of knowledge required to live in a particular environment. Cultural complexity, in the form of traditions, conventions, rituals, and social roles, was layered on top of technological complexity, in the form of tools designed for particular purposes. Living in an agricultural society with division of labour and various social roles required children to learn more than if they had been born into a small hunter-gatherer band. And although everyone in a village might have the same knowledge about the world, it was (probably) no longer possible for all the procedural skills taught and passed on in a given group to be mastered by a single person. (Imagine learning all the skills to be a farmer, carpenter, metalworker, weaver, baker, potter, and probably a half-dozen other things.)

This would have been the real beginning of Robert Wright’s interdependence and non-zero-sum interactions. No individual could possess all of the knowledge/complexity of their society, but every individual would benefit from its existence, at the price of a slightly longer education or apprenticeship than their counterparts in hunter-gatherer groups. The complexity was hidden: a person could wear a robe without knowing how to weave it, and use a clay bowl without knowing how to shape it or fire it in a kiln. There was room for that knowledge in other people’s brains. The only downside, other than the slightly longer investment in education, was a small increase in inferential distance between individuals.

Writing was the next step. For the first time, a significant amount of knowledge could be stored outside of anyone’s brain. Information could be passed from one individual, the writer, to a nearly unbounded number of others, the readers. Considering the limits of human working memory, significant mathematical discoveries would have been impossible before there was a form of notation. (Imagine solving polynomial equations without pencil and paper.) And for the first time, knowledge was cumulative. An individual no longer had to spend years mastering a particular skill in an apprenticeship, then laboriously pass on any new discoveries one at a time to their own apprentices. Each new generation could start where the previous one had left off. Knowledge could stay alive in writing almost indefinitely, without having to pass through a continuous line of minds. (Without writing, even if ancient Greek society had possessed equivalent scientific and mathematical knowledge, that knowledge could not have been rediscovered later by other societies.) Conditions were ripe for the total sum of human knowledge to explode, and for complexity to increase rapidly.

The downside was a huge increase in inferential distance. For the first time, individuals could not only lack a particular procedural skill, they might not even know that the skill existed. They might not even benefit from the fact of its existence. The stock market contains a huge amount of knowledge and complexity, and provides non-zero-sum gains to many individuals (as well as zero-sum gains to some), but understanding it requires enough education and training that most individuals can’t participate. The gap between the medical knowledge of professionals and that of uneducated individuals is huge, and I expect that many people suffer because, although someone somewhere knows how their medical problems could be avoided or solved, they themselves don’t. Computers, aside from being really nifty, are also incredibly useful, but learning to use them well is challenging enough that a lot of people, especially older people, don’t or can’t.

(That being said, nearly everyone in Western nations benefits from living here and now, instead of in an agricultural village 4000 years ago. Think of the complexity embodied in the justice system and the health care system, both of which make life easier and safer for nearly everyone regardless of whether they actually train as professionals in those domains. But people don’t benefit as much as they could.)

Is there any way to avoid this? It’s probably impossible for an individual to have even superficial understanding in every domain of knowledge, much less the level of understanding required to benefit from that knowledge. Just keeping up with day-to-day life (managing finances, holding a job, and trying to socialize in an environment vastly different from the ancestral one) can be trying, especially for individuals on the lower end of the IQ bell curve. (I hate the idea of intelligence, something not under the individual’s control and thus unfair-seeming, being that important to success, but I’m pretty sure it’s true.) This might be why so many people are unhappy. Without regressing to a less complex kind of society, is there anything we can do?

I think the answer is quite clear, because even as societies become more complex, the arrow of daily-life difficulty doesn’t always point in the same direction. There are various examples of this, such as computers becoming more user-friendly over time. But I’ll use an example that comes readily to mind for me: automated external defibrillators, or AEDs.

A defibrillator uses electricity to interrupt an abnormal heart rhythm (ventricular fibrillation is the typical example, thus de-fibrillation). External means that the device acts from outside the patient’s body (pads with electrodes on the skin) rather than being implanted. Most defibrillators require training to use and can cause a lot of harm if they’re used wrong. The automated part is what changes this. An AED analyzes the patient’s heart rhythm and will only deliver a shock if one is actually needed. It has colorful diagrams and recorded verbal instructions. There’s probably a way to use an AED wrong, but you would have to be very creative to find it. Needless to say, the technology involved is ridiculously complex and took years to develop, but you don’t need to understand the science in order to use an AED. You probably don’t even need to read. The complexity is neatly hidden away; all that matters is that someone knows it. There weren’t necessarily any ground-breaking innovations involved, just old inventions packaged in a user-friendly format.
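To make that decision process concrete, here is a minimal sketch in Python of the logic an AED’s prompts walk a rescuer through. This is purely illustrative, not real device firmware: the rhythm labels, the single-pass classification, and the function name are all assumptions made for the example.

```python
# Toy sketch of the AED decision logic described above. Not real firmware:
# the rhythm labels and single-pass classification are simplifying assumptions.

SHOCKABLE_RHYTHMS = {"ventricular_fibrillation", "pulseless_ventricular_tachycardia"}

def aed_advise(detected_rhythm: str) -> str:
    """Return the instruction an AED would voice after analyzing a rhythm."""
    if detected_rhythm in SHOCKABLE_RHYTHMS:
        # The device arms itself only for rhythms that a shock can correct.
        return "Shock advised. Stand clear, then press the flashing button."
    # For anything else (normal rhythm, asystole, ...) it refuses to arm,
    # which is what makes the device safe in untrained hands.
    return "No shock advised. Continue CPR."

print(aed_advise("ventricular_fibrillation"))
print(aed_advise("normal_sinus"))
```

The point is that all the diagnostic complexity lives inside the classifier; the rescuer only ever sees one of two instructions.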

The difference is intelligence. An AED has some limited artificial intelligence in it, programmed by people who knew what they were doing, which is why it can replace a decision process that would otherwise fall to medical professionals. A book contains knowledge, but has to be read and interpreted in its entirety by a human brain. A device with its own small brain doesn’t. This is probably where our society is headed if the arrow of (technological) complexity keeps pointing up. Societies need to be livable for human beings.

That being said, there is probably such a thing as too much hidden complexity. If most of the information in a given society is hidden, embodied by non-human intelligences, then life as a garden-variety human would be awfully boring. Which could be the main reason for exploring human cognitive enhancement, but that’s a whole different story.

49 comments

Writing was the next step. For the first time, a significant amount of knowledge could be stored outside of anyone’s brain.

Nitpick - you can externalize a lot of information in the form of drawings, which predate writing by about 30K years. Counting and tallying are about that ancient - and numbers are an important form of knowledge.

Interesting... I didn't realize that writing numbers down predated writing words down.

My understanding is that most writing systems began with immobile agricultural societies tallying objects for the sake of tax, storage, and trade: "5 apple, 2 wheat, 6 pig", where the nouns being numbered start out as ideograms with a visible connection to the thing they refer to. This is apparently a fairly "obvious" thing to do, because it seems to have occurred independently to the Mayans and Sumerians, and probably the Chinese and Egyptians, and probably some others I've never heard of. The Incas accidentally ended up with a relatively non-upgradeable notation system when they used specialized knotting systems for roughly this purpose, so it's possible for humans starting from scratch as a group to completely miss this, but it seems rare.

The hard thing seems to be independently inventing a notation for speech sounds, rather than words; this tends to happen when an existing system of ideograms is adapted to a new culture. From memory and Le Wik, the Egyptian symbol system was probably re-purposed for Semitic languages, which eventually inspired Phoenician (from whence we get "phonetics"), and the Japanese syllabary grew out of the use of Chinese symbols for the sounds they made.


Phoenician (from whence we get "phonetics")

Dictionaries don't back you up on that etymology. Both words come from Greek, but one word meant "purple" and the other meant "sound".

Both words come from Greek, but one word meant "purple" and the other meant "sound".

Wiktionary claims "Phoenician" was from a Greek transliteration of Egyptian, and does not mention "purple".

But yes, "phonetics" clearly came from the word for "sound".

I think it took the human race much longer to develop speech than basic counting and drawing, which even less evolved animals have been known to utilize.

I know that chimpanzees will draw or paint if given materials, but is there any sense in which a chimpanzee's drawings contain information about the way it understands the world?

My own internet searching turned up a few studies indicating that some chimpanzees were able to recognize images in human works of art. When chimpanzees tried to draw on their own, they drew "hooks" and "squiggles", but that was about the extent of chimpanzee drawing I found. One researcher (probably looking for something to say) commented that chimpanzee drawings tended to be clustered on the bottom part of the page. I don't think chimpanzees are doing anything that we would call representational.

I don't think they do. I think it's safe to say that Homo sapiens developed differing methods of communication based on local culture, just as animals identify each other by territory markings.

However, we all eventually developed speech, and chimpanzees were quickly overtaken; they would probably not have been able to keep up once we could share knowledge in a way that better utilized our brains.

This is one of the key reasons I favor Eliezer's FOOM speed theories over Robin's. If we have an AI that can respond to basic speech and memory, then expand it over larger amounts of computing power, all of our knowledge is there for it to build upon. Once you get those few architectural insights that separate our brains from those of our less evolved cousins, growth accelerates exponentially, and who knows what a seed AI could grow into in that amount of time.

Anyway, sorry for getting off the main point, but from my perspective this discussion ties really clearly into a lot of what they were discussing in the recent debate.

Drawing? Which animals do that? (More specifically representational drawing - scribbling abstractly isn't quite the same deal as what humans were doing 50,000 years ago.)

I may have been unclear when I didn't define drawing earlier; Swimmer's definition here is closer to what I meant.

But understanding it requires enough education and training that most individuals can't participate. The gap between the medical knowledge of professionals and that of uneducated individuals is huge, and I expect that many people suffer because, although someone somewhere knows how their medical problems could be avoided or solved, they themselves don't.

The average length of time between onset of symptoms and the diagnosis of celiac disease has decreased from 9 years for patients diagnosed prior to 1990 to 4.4 years for patients who were diagnosed after 1993

This is a relatively common disease that isn't terribly complicated to test for.

That being said, there is probably such a thing as too much hidden complexity. If most of the information in a given society is hidden, embodied by non-human intelligences, then life as a garden-variety human would be awfully boring.

It sounds like the problem would be too little unhidden complexity. I don't see how adding a process that you don't know about could make anything more boring.

Considering that we're perfectly capable of making games with unhidden complexity, I don't foresee this being a problem.

That being said, there is probably such a thing as too much hidden complexity.

Definitely. You may find the following links relevant:

Joseph Tainter - Collapse of Complex Societies

Clay Shirky - Collapse of Complex Business Models

To summarize: each problem solved through an increase in complexity cumulatively adds overhead to the system. Eventually, the net return on solving a problem is completely offset by that overhead. After that point, the system must simplify - either on its own, or with the "assistance" of external forces such as barbarians or hostile takeovers.
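To make that summary concrete, here is a toy numerical sketch. The constants are my own made-up assumptions, not figures from Tainter or Shirky; the model just assumes each solved problem yields a fixed one-time benefit while adding a recurring overhead cost to the whole system.

```python
# Toy model of diminishing returns on complexity. The constants are
# assumptions for illustration, not figures from Tainter or Shirky.

BENEFIT_PER_SOLUTION = 10.0   # assumed one-time gain from solving a problem
OVERHEAD_PER_SOLUTION = 1.5   # assumed recurring cost each solution adds

def net_return(n: int) -> float:
    """Net gain from the n-th solution, after paying overhead on all n so far."""
    return BENEFIT_PER_SOLUTION - OVERHEAD_PER_SOLUTION * n

for n in range(1, 10):
    print(n, net_return(n))
# With these numbers the net return turns negative at n = 7: past that
# point each new layer of complexity costs more than it delivers, and
# the system must simplify.
```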

(I hate the idea of intelligence, something not under the individual’s control and thus unfair-seeming, being that important to success, but I’m pretty sure it’s true.)

Unless you believe in libertarian free will, "being under the individual’s control" in the sense you mean is not a meaningful concept.

"being under the individual’s control" in the sense you mean is not a meaningful concept.

It feels like a meaningful concept. For example, if I want to learn how to draw, or run a marathon, these are skills that most people can learn through approximately the same process, with a little willpower. Some people find it easier than others, but barring physical disability, almost anyone can train to run that marathon. If I decide I want to be more intelligent (rather than knowledgeable), there doesn't seem to be any way to increase this through practice and willpower, and not just anyone can train enough to, say, complete a degree in math.


with a little willpower

How much willpower you have is also very likely affected quite heavily by your genetic heritage.

Which also seems unfair to me sometimes. I'm quite well endowed in the willpower department, but I watch friends my age struggle with things that seem so freaking simple, like going back to graduate high school, and eventually I realize it's not because they're not trying but because they struggle to keep up the willpower to make a continuous effort without immediate returns, and it depresses me. And there's nothing I can do about it.


I can understand why you feel that way.

Several LWers have spent quite some time finding and investigating the returns of several ways to boost willpower in order to complete tasks (doing more with less, building up willpower in the long term, short-term willpower spikes like consuming sugar) and even IQ (n-backing, nootropics). So there are a few things you can do for your friends.

But what you are concerned about doesn't really seem to be fairness, at least not fairness in the usual sense of the word (note: I am not saying this in a disapproving tone!), and I'm not quite sure where you fit. Could you help me out? If so, please read on.

In another comment you noted that trainable skills are an important category for you. Even in a system where everyone had the same starting conditions and same ability to self-modify for more willpower and higher IQs, differences would arise over time. Before answering if you would consider such differences fair, please consider the following.

People have different values, and some values are rewarded by our universe more than others. Those who receive the more generous rewards can obviously invest more into enhancement than others can. Would this bother you?

Now let's assume all values are locked in and thoroughly homogeneous. Differences will still arise, unavoidably so. Can you guess why? Are you OK with that?

To answer the other half of your question:

Even in a system where everyone had the same starting conditions and same ability to self-modify for more willpower and higher IQs, differences would arise over time. Before answering if you would consider such differences fair, please consider the following.

Yes. I would consider it fair, because if I lived in that world, and there was something at which I wanted to succeed, I would never be in a position where someone else could succeed at it easily while I struggled with transcendent efforts and might ultimately fail anyway. I might live in that world and not choose to self-modify for higher IQ, for example if I preferred to expend my self-modification energy on being more generous or more fun to party with, and I might end up with less money or fame or books published than someone else who chose intelligence, but I could have chosen differently if I'd wanted to.


I would consider it fair, because if I lived in that world, and there was something at which I wanted to succeed, I would never be in a position where someone else could succeed at it easily while I struggled with transcendent efforts and might ultimately fail anyway.

Not all goals are created equal. Some are more difficult, others less so. Many would still struggle.

Also, even among those with the same goals, some will get lucky, and gains are likely to snowball over time into ever greater differentials.

I might live in that world and not choose to self-modify for higher IQ, for example if I preferred to expend my self-modification energy on being more generous or more fun to party with, and I might end up with less money or fame or books published than someone else who chose intelligence, but I could have chosen differently if I'd wanted to.

I sympathize with such a position. I feel I would ideally like it if as many people as possible could achieve what is sometimes called self-actualization and pursue their other (differing) values.

However, this leaves us with the many difficult questions of how to keep such a situation in equilibrium against ever greater Malthusian pressures (due to the lightspeed limit) and ever greater power differentials. For the past year or so I have been toying with the idea of voluntary compacts enforced by self-modification (basically, I agree to change myself so that I care deeply about not altering the terms of this agreement, which naturally means that in the future I will try my very best to preserve this value and hopefully keep the terms).


Not all goals are created equal. Some are more difficult, others less so. Many would still struggle.

But they'd struggle at achieving harder versus easier goals. I don't think Swimmer is suggesting all goals would be equally easy to attain (we should rightly be suspicious if someone thinks that in a fair, self-actualized world, becoming an astronaut and becoming a teacher would involve the same amount of effort), just that given two people trying to achieve the same goal (say, stability of person and health and shelter and income) by the same means, we would expect "luck" and the difficulty of the task to determine the probability of success.


I agree she wasn't suggesting this. However, what I was pointing out is that this is a source of "unfairness" for people struggling to achieve their goals that she hadn't touched on.

But what you are concerned about doesn't really seem to be fairness, at least not fairness in the usual sense of the word (note: I am not saying this in a disapproving tone!), and I'm not quite sure where you fit. Could you help me out? If so, please read on.

The 'unfair' part comes in when someone wants to be different, and I can't help them change even though I've already achieved what they want, seemingly without a lot of effort. For example, a friend of mine knows that she has poor willpower and self-control, and she commits to do things which are a good idea and will help her in the long run, but she finds the short-term pain overwhelming...and she has told me repeatedly that she wishes she could be more like me and be able to stick to precommitments. I would have an easier time with this if it was hard for me to make plans and follow them, because then at least I could detail the steps I followed, and if she failed to follow the same steps, it would be a matter of her not having tried as hard. But tasks involving willpower have always been easy for me, and I can't describe any steps I follow aside from "making a plan to do something productive, and then doing it even if I'm tempted to do other unproductive-but-fun things later."

[Attempting to analyze this thought pattern...] I think a lot of this comes from the fear that someday I'll encounter some task that I'll desperately want to accomplish, and no matter how hard I try I won't be able to keep up with someone who is talented in a particular area and does it without trying. I haven't encountered this before, because ultimately I am very stubborn and there's nothing I've really failed at. But watching my friend reminds me that "but for the grace of God, there go I."

I never understood willpower. If you don't want to do something, you could always, well, not do it.

Willpower is the ability to force yourself to do something even if you "don't feel like doing it", as you put it, because the action in question has consequences you want.

I don't understand how it is even possible to do something without feeling like doing it.

Do you also not understand how it is possible for Achilles to catch the tortoise, for arrows to move, to learn anything one did not already know, or to do good of one's own volition? One can prove, with impeccable logic, that all of these things are impossible. Yet they happen. The logic is wrong, and the wrongness is not in the deductive system but in the ontology, the verbal categories that embody assumptions one does not know one is making.

You can prove with impeccable logic that there is no such thing as willpower: either you want to do something or you do not; you do what you want and not what you don't. Yet everyday experience tells us all (except those who have philosophised themselves into ignoring what they have proved cannot exist) that it is not that simple. We do, in fact, experience conflicts. That our verbal formulations of what is going on may fail us does not make the reality go away.

BTW, in your comment that Eugine_Nier linked, you say:

My father warns me that not working now will greatly reduce my future employment prospects, and that I'll eventually have to find work or starve after they retire and can no longer support me. (So I guess I'll starve, then?)

That's some rather steep temporal discounting. Does that happen in the short term as well? E.g. do you leave something undone because you don't feel like it, and later the very same day, wish your earlier self had done it?

The logic is wrong, and the wrongness is not in the deductive system but in the ontology, the verbal categories that embody assumptions one does not know one is making.

Yes, I've seen it happen. I just don't understand it.

That's some rather steep temporal discounting. Does that happen in the short term as well? E.g. do you leave something undone because you don't feel like it, and later the very same day, wish your earlier self had done it?

Well, sort of. I do have a tendency to let stuff go undone and be inconvenienced by the fact that it's not done (for example, I might run out of clean laundry, or put off getting a haircut for several months), but I rarely think of not doing these things as mistakes to be regretted. I've learned all kinds of "bad" lessons that seem to amount to "putting things off never has consequences". For example, I once skipped a midterm exam in college so I could play Final Fantasy X, and in hindsight it turned out to be the right decision. (I hadn't studied - too much playing Final Fantasy X - so I would have done terribly. As it turned out, the professor accepted my excuse, and I ended up passing the course.) Also, there's all that time spent playing JRPGs, in which you want to explore every side path before reaching the main goal, and nothing ever happens until you, the player, take an action to cause it to happen.

Oh, for sure! "It's -20°C outside and I've been out of the house for 16 hours and I really don't want to go jump into a cold pool and swim laps for an hour, and I'll be exhausted after, but I haven't exercised in 2 days and I really should." This is kind of a worst-case scenario. Most of the time, for me anyway, the parts of me that do want to do something and the parts that don't are fairly evenly matched. (For example: I don't want to swim because I could go home and play on the computer and go to bed early instead, but I do want to swim because I'll get crabby if I don't and I'll feel better afterwards if I do.)

To me, that reads as a more complicated form of "feeling like it"...

To me, simplifying it down to 'feeling like it' collapses the difference between someone who consistently will choose to swim in the aforementioned example, and someone who consistently won't. You could call the difference 'being better at delayed gratification' but I think the usual definition of willpower covers it quite well.


Sometimes you distract or fool yourself into starting, and then it's not so bad after that. Like, you don't want to write a paper, so you start a video on YouTube, and while you're distracted with that, tell yourself you're just going to open MS Word, and then maybe write something or maybe not; then you tell yourself you're just going to write 100 words, and so on. Do it a few times in a row, and the process becomes habit, and then you might not even have to lie to yourself about why you're opening MS Word, because it's not a conscious decision anymore.

You don't have to be unhappier not doing the thing than you are doing it, necessarily. You just have to be unhappy enough not doing the thing to keep trying to try.

That works for you? For me, the best way is to plan, preferably a day in advance, that 'I will write my essay on Friday night' or something. There is definitely a part of myself that resents having another part of myself 'trick' it into anything, productive or not.


People are different, I guess. "Deciding" to do something on Friday night has little correlation with whether I will actually do it on Friday night, or at all. It mainly just makes me feel bad when I haven't written the essay as of Saturday morning.

See, there's a part of me that really doesn't like writing essays-- actually, not writing essays in particular, as I've mostly fixed that problem, but just being productive, doing effortful things. If I try to power through it, that part of me complains so loud that I'm very motivated to rationalize doing whatever it is later. Giving it advance warning just makes it complain louder, if anything. But it's easily distracted.

I can't identify with what you say about resenting being "tricked". I actually feel pretty good when I successfully circumvent the part of me that doesn't care what happens tomorrow. Now, ideally, I'd like to train the complaining part of me to just shut up when I do productive things, but that's not easy, and I suspect I'll need to experience many successes first so that I can associate trying with good things.

That's a very interesting difference. I find that being psychologically prepared to do something productive makes it way easier. If I trick myself into going for a swim by telling the lazy part of myself that I'll just go for 20 minutes, you can bet that if 20 minutes is up and I try to motivate myself to keep going, there will be several sub-components of my mind screaming 'but you promised!' This is even more true for things that aren't habits for me. (Exercise is something I do pretty easily by pure habit.)

Can you give an example of it, since the one Swimmer963 offered doesn't qualify?

I tend to decide what to do by imagining my options and choosing the one that I feel the best about. To me, "feeling like doing something" and "deciding to do something" feel like they're the exact same thing. Maybe that's why I get confused when people talk about doing things they don't feel like or don't want to do? They have some kind of "override" in their brains that kicks in between feeling like doing something and actually doing it - which they call "willpower" - and which I'm not aware of having?

Maybe it does? Willpower is something that confuses me, after all.

I think in order to understand what willpower is and what it is useful for, it is important to understand that people want more than one thing. For example, I want to read Internet news. I also want to improve my math abilities, increase my programming skills, read novels, learn more physics, improve at my job, draw more pictures, bicycle more, spend more time in nature, and bake more delicious strawberry-rhubarb pies, on each slice of which I will place one scoop of vanilla ice cream. That's not even close to an exhaustive list of all the things I want.

Multiple wants will often come into conflict with one another. All of the things I've mentioned take time. My time is limited. So I have to put some kind of priority ordering on them. Sometimes I should do something where I will only accomplish what I want in the long term. Sometimes I should do things that can be quickly or easily accomplished with little mental effort in the short term. Mostly people talk about willpower when they're having trouble doing the long term or high effort things they want to do, not the short term or low effort things they want to do.

So the need for willpower mostly arises when people are trying to maximize attainment of various wants which compete for their time and energy. Since the longer term wants don't maximize short term rewards, they take more mental effort to accomplish since you don't feel automatic and immediate positive reinforcement from them (you want to do them in the abstract, but there are other things that would make you happier right now if you did them instead).

So when someone does something requiring willpower, they want to do it in an intellectual sense, and they have some emotional stake in whether it gets done, but they don't want to do it in the sense that they are getting a large positive reinforcement from anticipating the task. Possibly they feel bad when they anticipate not doing it, or it just doesn't generate as much excitement at the moment as the other task. So they want to do it, but it requires willpower for them to do so. I think willpower is really just a way to talk about the mental effort required to do something a person wants to do, which differs for different wants. If you want to know more about what might actually affect how willpower works, understanding more about motivation will probably help.

I prefer to minimize the need for willpower in long term goals, however. If I can find a way to give myself short term positive reinforcement for doing something that achieves a long term goal, I will. Anyway, I hope that helps explain it.

Willpower seems to be about trading off short-term gains for long term ones. In the short term, doing a particular action may not be very pleasant, but in the long term, you do want to have completed that activity. People's preferences aren't generally stable, and what we "want" in the short term is not always the same thing we will "want" in the long term. We use the same word for both, but it doesn't mean that it is the same thing, or that it is impossible for a short term and a long term desire to conflict.

Well, even by this definition, success will always be based on things beyond an individual's control. I'm assuming you mean success at something zero-sum like status. After all, since people will try their hardest to succeed (up to their innate limits of willpower and drive), the factor distinguishing success from failure cannot be under their control.

Maybe dividing things into a continuum of 'under the individual's control' to 'beyond the individual's control' doesn't make sense. It's still something my brain tries to do, and it still feels unfair that intelligence would so strongly determine outcomes.


One of the ways my mind perceives this is basically as a waste: "Look, these two sacks of meat expend nearly the same amount of resources, yet one can't feed itself while the other has positive externalities it doesn't fully capture. The difference is 4 or 5 standard deviations, caused by just a few thousand genes. Can't we find a way to fix up the less productive one, rather than wasting all that negentropy to build a whole new one out of the same atoms?"

Also, I'm generally in favour of letting people improve themselves; it really sucks from a eudaimonic perspective that we can't do anything about such an important aspect of ourselves.

Transhumanism ftw.

Could you give an example of what you would consider 'fair'?


I think you can dissolve the argument by substituting "under individual's control" with "trainable".

I think Swimmer is talking about the same thing Eliezer pointed out in Why Are Individual IQ Differences OK?

That being said, there is probably such a thing as too much hidden complexity. If most of the information in a given society is hidden, embodied by non-human intelligences, then life as a garden-variety human would be awfully boring.

That wouldn't be my main worry. Setting aside general AI, one problem with smaller amounts of hidden complexity is that we rarely hide it completely successfully. Automation has failure modes. Abstractions leak. If some instance of complexity hiding is so successful that its rare failures can be treated as research problems, that's fine. If another instance is fragile enough that its users regularly bump into its failures and become familiar with how to recover from them (but it is still reliable enough to be useful), that's also fine. The intermediate case, where the system is reliable enough that most users forget what is hidden, but fragile enough that recovering from its failures can't be neglected: that's the problem...