Utilitarianism twice fails

-26 [deleted] 21 February 2013 06:10AM

(Crossposted.)

It seems almost self-evident that (barring foreign subjugation) a government will care about the wants of (some of) its citizens and nothing else: no other object of concern is plausible. If governments concern themselves with the wants of noncitizens, that will be only because citizens desire their well-being. The now platitudinous insight that the only possible basis for government policy is people’s wants can be attributed to utilitarianism, which gets credit in its stronger form for the apparent success of weaker claims.

Another reasonable claim derives from utilitarianism: citizens’ wants should count equally. This seems only fair in a democracy, where one citizen gets one vote. Few today would deny the principle that public policy should serve the greatest good of the greatest number, which may seem to contradict my claim that no general moral principle governs public policy, but in practice, the consequences of this limited utilitarianism are thin indeed, leaving ample room for ideology. I’ll call this public-policy formula thin utilitarianism: the greatest good for the greatest number of citizens, weighting their welfare equally.

First, I’ll consider whether thin utilitarianism succeeds on its own terms by providing a practical guide to public policy. Second, I’ll examine how this deceptively appealing guide to public policy transmogrifies into the monster of full-blown utilitarianism, a form of moral realism. The first constrains even casual use of thin utilitarianism; the second impugns utilitarianism as a general ethical theory.

1. Non-negotiable conflicts between subagents undermine thin utilitarianism

Although simple economic models attributing conduct to rational self-interest require that agents assign consistent utilities to outcomes, agents are in fact inconsistent. One example of inconsistent utility assignment is the endowment effect, whereby agents assign more value to property they own than to identical property they don’t. The inconsistency considered here is stronger than the endowment effect and similar phenomena, which we can surmount with effort, as professional traders must. Despite the endowment effect, there is a real answer to how much utility an outcome affords; the effect is a bias, which willpower or habit can neutralize.

The conflict between subagents within a single person, on the other hand, can’t be resolved by appeal to a common criterion, such as market price, because the two subagents pursue different ends. Which subagent dominates depends on situational and personological factors that elicit one or the other, not on overcoming bias. Construal-level theory reveals such a conflict between intrapersonal subagents, near-mode and far-mode: integrated mindsets applied to matters experienced at fine or broad granularity. The modes (or “construal levels”) differ in that far-mode is more future-oriented and principled, near-mode more present-oriented and contextual. Each is elicited by the way social choices are made: voting elicits far-mode and market choices near-mode; the utility of a choice depends on the construal level at which it is evaluated.

Take a policy choice: how much wealth should be spent on preventive medicine? There are two basic ways of allocating resources to medical care, political process and the market: socialized medicine is an example of the former, private medicine of the latter. Socialized medicine makes allocating funds for medical care a political decision; the market makes it each consumer’s personal choice. When you compare the utility of choices made by political process with those made on the market, you should expect to find that when people choose politically, they use the far-mode thinking encouraged by voting, whereas when they make purchases, they use the near-mode thinking encouraged by the market. The preventive-care expenditure will be higher under socialized medicine because political process elicits far-mode, which is concerned with future health. People will be more miserly with preventive care under private medicine, where the decision to spend is made by consumer choice in near-mode, which cares more about the present. People favor spending more on preventive care when they vote to tax themselves than when they buy it on the market. Which outcome provides the greater utility—more preventive care or more recreation—is relative to construal level.

The same indeterminacy of utility occurs when comparing decisions made under different political processes, such as local versus central. Local decisions will be near-mode, central decisions far-mode. Assuming socialized medicine, less funding would be available if it were subject to state rather than federal control. Which provides more utility depends on whether the consequences are evaluated in near-mode or far-mode; no thin-utilitarian criterion applies.

Some utilitarians will protest that we should measure experiences rather than wants. The objection misses the argument’s point, which is that utility is relative to mode, a conclusion easiest to see in the public-choice process, where the alternatives can be delimited. If utility indeed depends on construal level, the same indeterminacies arise in evaluating experience. Moreover, when utilitarianism is applied to public policy, present wants rather than experienced satisfaction are the criterion: agents necessarily choose based on present wants, whether on the market or in the political process.

2. Full-blown utilitarianism stands convicted of moral realism

Full-blown utilitarians are necessarily moral realists, but increasingly they deny it. While moral realism is widely recognized as absurd, utilitarianism seems to some an attractive ethical philosophy. For the sake of intellectual respectability, utilitarians can appear to reject an anachronistic moral realism while practicing it philosophically.

Full-blown utilitarianism often obscures its differences with thin utilitarianism, which is a questionable doctrine but in accord with ordinary common sense. It emerges from thin utilitarianism by the misdirection of subjecting ethical premises to the test of simplicity, a test appropriate to realist theories exclusively, because simplicity serves truth. A classic illustration: Aristotle theorized that everything on earth that goes up comes down; Newton set out his theory of gravity, which applies to all objects, not just terrestrial ones, and which predicts that objects traveling fast enough can escape the earth’s gravitational field. Scientists confidently bet on Newton well before rockets were invented, and their confidence was vastly increased by the simplicity of Newton’s theory, which made correct predictions concerning all objects. Although philosophers have explained variously the correlation between simplicity and truth, they generally agree that simplicity signals truth. Unless utilitarians can otherwise justify it, searching for a simple moral theory means searching for a true theory.
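As a quick check on the escape-velocity prediction mentioned above, here is a back-of-envelope calculation; the constants are standard textbook values, not taken from the post:

```python
import math

# Newtonian escape velocity: v = sqrt(2 * G * M / R).
# The prediction, made long before rockets, that an object launched
# fast enough never falls back to earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # radius of the earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # in m/s
print(f"escape velocity: {v_escape / 1000:.1f} km/s")
```

The result, about 11.2 km/s, is the figure later confirmed by actual rocketry, which is the kind of novel, domain-spanning prediction the simplicity argument trades on.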

The full-blown utilitarian seeks a misplaced simplicity by insisting that all entities capable of experiencing happiness, a much simpler criterion than “current citizens,” serve as the beneficiary reference group—including future generations of humans and even beasts, whose existence depends on policy; thin utilitarianism, by contrast, is a democratic convention, serving only the wants of currently existing citizens. Because they must incorporate future generations into the reference group, utilitarian philosophers have had to accept that a policy-dependent reference group poses a dilemma for the interpretation of full-blown utilitarianism, with unattractive consequences at both horns, which realize radically different ideals. In one version, you maximize the average utility of the whole population; in the other, you sum the utilities. The interpretations seem almost equally unattractive: the averaging view says that one supremely happy human is better than a billion very happy ones; the adding approach implies that a hundred trillion miserable wretches are better than a billion happy people. To apply a utilitarian standard to scenarios so distant from thin utilitarianism, accepting their consequences because simplicity demands it, is to treat moral premises as truths and to practice moral realism, despite any contrary self-description. Those who agree that moral realism is impossible must reject full-blown utilitarianism.

Can infinite quantities exist? A philosophical approach

-9 metaphysicist 03 January 2013 10:52PM

 

[Crossposted]

Initially attracted to Less Wrong by Eliezer Yudkowsky's intellectual boldness in his "infinite-sets atheism," I've waited patiently to discover its rationale. Sometimes it's said that our "intuitions" speak for infinity or against, but how could one, in a Kahneman-appropriate manner, arrive at intuitions about whether the cosmos is infinite? Intuitions about infinite sets might arise from an analysis of the concept of actually realized infinities. This is a distinctively philosophical form of analysis and one somewhat alien to Less Wrong, but it may be the only way to gain purchase on this neglected question. I'm by no means certain of my reasoning; I certainly don't think I've settled the issue. But for reasons I discuss in this skeletal argument, the conceptual—as opposed to the scientific or mathematical—analysis of "actually realized infinities" has been largely avoided, and I hope to help begin a necessary discussion.

1. The actuality of infinity is a paramount metaphysical issue.

Some major issues in science and philosophy demand taking a position on whether there can be an infinite number of things or an infinite amount of something. Infinity’s most obvious scientific relevance is to cosmology, where the question of whether the universe is finite or infinite looms large. But infinities are invoked in various physical theories, and they often seem to occur in dubious ones. In quantum mechanics, an (uncountable) infinity of worlds is invoked by the “many worlds interpretation,” and anthropic explanations often invoke an actual infinity of universes, which may themselves be infinite. These applications make real infinite sets a paramount metaphysical problem—if it indeed is metaphysical—but the orthodox view is that, being empirical, it isn’t metaphysical at all. To view infinity as a purely empirical matter is the modern view; we’ve learned not to place excessive weight on purely conceptual reasoning. But whether conceptual reasoning can definitively settle the matter is a different question from whether the matter is fundamentally conceptual.

Two developments have discouraged the metaphysical exploration of actually existing infinities: the mathematical analysis of infinity and the proffer of crank arguments against infinity in the service of retrograde causes. Although some marginal schools of mathematics reject Cantor’s investigation of transfinite numbers, I will assume the concept of infinity itself is consistent. My analysis pertains not to the concept of infinity as such but to the actual realization of infinity. Actual infinity’s main detractor is a Christian fundamentalist crank named William Lane Craig, whose critique of infinity, serving theist first-cause arguments, has made infinity eliminativism intellectually disreputable. Craig’s arguments merely appeal to the strangeness of infinity’s manifestations, not to the incoherence of its realization. The standard arguments against infinity, which predate Cantor, have been well-refuted, and I leave the mathematical critique of infinity to the mathematicians, who are mostly satisfied. (See Graham Oppy, Philosophical perspectives on infinity (2006).) 

2. The principle of the identity of indistinguishables applies to physics and to sets, not to everything conceivable.

My novel arguments are based on a revision of a metaphysical principle called the identity of indistinguishables, which holds that two separate things can’t have exactly the same properties. Things are constituted by their properties; if two things have exactly the same properties, nothing remains to make them different from one another. Physical objects do seem to conform to the identity of indistinguishables, because physical objects are individuated by their positions in space and time, which are properties; but this is a physical rather than a metaphysical principle. Conceptually, brute distinguishability—that is, differing from all other things simply in being different—is a property, although it provides us with no basis for identifying one thing rather than another. There may be no way to use such a property in any physical theory, and we may never learn of such a property and thus never have reason to believe it instantiated, but the property seems conceptually possible.

But the identity of indistinguishables does apply to sets: indistinguishable sets are identical. Properties determine sets, so you can’t define a proper subset of brutely distinguishable things.

3. Arguments against actually existing infinite sets.

A. Argument based on brute distinguishability.

To show that an actually existing infinite set leads to contradiction, assume the existence of an infinite set of brutely distinguishable points. Now another point pops into existence. Were the set finite, the old and new sets would differ in cardinality and so be distinguishable; but because the points themselves are indistinguishable and adding one point to an infinite set leaves its cardinality unchanged, the former and latter sets are indistinguishable, yet they aren’t identical. This violates the identity of indistinguishables as applied to sets; hence the set can’t be infinite.

B. Argument based on probability as limiting relative frequency.

The previous argument depends on the coherence of brute distinguishability. The following probability argument depends on different intuitions. Probabilities can be treated as idealizations at infinite limits. If you toss a fair coin, it lands heads roughly 50% of the time, and the proportion gets closer to exactly 50% as the number of tosses “approaches infinity.” But if there can actually be an infinite number of tosses, contradiction arises. Consider the possibility that in an infinite universe, or an infinite number of universes, infinitely many coin tosses actually occur. The frequency of heads and of tails is then infinite, so the relative frequency is undefined. Furthermore, the frequency of rolling a 1 on a die equals the frequency of rolling 2–6: both are (countably) infinite. But if infinite quantities exist, then relative frequency should equal probability. Therefore, infinite quantities don’t exist.
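The limiting-frequency picture the argument leans on can be illustrated with a short simulation (an illustrative sketch, not from the post): for any finite number of tosses the relative frequency is perfectly well-defined and drifts toward 1/2, whereas the ratio of two infinite counts has no such definition.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def relative_frequency(n_tosses):
    """Fraction of heads in n fair coin tosses (well-defined for any finite n)."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The relative frequency tightens around 0.5 as n grows --
# but only because each count is finite.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

The tension the author points to is that this limit talk never requires a completed infinity of tosses; the trouble only begins if the infinite collection is taken to actually exist.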

4. The nonexistence of actually realized infinite sets and the principle of the identity of indistinguishable sets together imply the Gold model of the cosmos.

Before applying the conclusion that actually realized infinities can’t exist, together with the principle of the identity of indistinguishables, to a fundamental problem of cosmology, caveats are in order. The argument uses only the most general and well-established physical conclusions and is oblivious to physical detail; not being competent in physics, I must abstain even from assessing the weight the philosophical analysis that follows should carry, which may be very slight. While the cosmological model I propose isn’t original, the argument is original and, as far as I can tell, novel. I am not proposing a physical theory so much as suggesting metaphysical considerations that might bear on physics; it is for physicists to say how weighty these considerations are in light of actual physical data and theory.

The cosmological theory is the Gold model of the universe, once favored by Albert Einstein, according to which the universe undergoes a perpetual expansion, contraction, and re-expansion. I assume a deterministic universe, such that cycles are exactly identical: any contraction is thus indistinguishable from any other, and any expansion is indistinguishable from any other. Since there is no room in physics for brute distinguishability, they are identical because no common spatio-temporal framework allows their distinction. Thus, although the expansion and contraction process is perpetual and eternal, it is also finite; in fact, its number is unity.

The Gold universe—alone, with the possible exception of the Hawking universe—avoids the dilemma of the realization of infinite sets or origination ex nihilo.

 

Politics Discussion Thread January 2013

6 OrphanWilde 02 January 2013 03:31AM
  1. Top-level comments should introduce arguments; responses should be responses to those arguments. 
  2. Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both. 
  3. A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
  4. In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.

As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.

Intelligence explosion in organizations, or why I'm not worried about the singularity

13 sbenthall 27 December 2012 04:32AM

If I understand the Singularitarian argument espoused by many members of this community (eg. Muehlhauser and Salamon), it goes something like this:

  1. Machine intelligence is getting smarter.
  2. Once an intelligence becomes sufficiently supra-human, its instrumental rationality will drive it towards cognitive self-enhancement (Bostrom), making it a super-powerful, resource-hungry superintelligence.
  3. If a superintelligence isn't sufficiently human-like or 'friendly', that could be disastrous for humanity.
  4. Machine intelligence is unlikely to be human-like or friendly unless we take precautions.
I am not particularly worried about the scenario envisioned in this argument.  I think that my lack of concern is rational, so I'd like to try to convince you of it as well.*

It's not that I think the logic of this argument is incorrect so much as I think there is another related problem that we should be worrying about more.  I think the world is already full of probably unfriendly supra-human intelligences that are scrambling for computational resources in a way that threatens humanity.

I'm in danger of getting into politics.  Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.

Smart organizations

By "organization" I mean something commonplace, with a twist.  It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization". 

Do organizations have intelligence?  I think so.  Here's some of the reasons why:

  1. We can model human organizations as having preference functions. (Economists do this all the time)
  2. Human organizations have a lot of optimization power.

I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.

So when I am talking about super-human intelligence, I specifically mean an agent that is as good or better at humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain not the kidneys

...and then...

It would be a kind of weird [organization] that was better than the best human or even the median human at all the things that humans do. [Organizations] aren’t usually the best in music and AI research and theory proving and stock markets and composing novels. And so there certainly are  [Organizations] that  are better than median humans at certain things, like digging oil wells, but I don’t think there are [Organizations] as good or better than humans at all things. More to the point, there is an interesting difference here because [Organizations] are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse. 

I think that Muehlhauser is slightly mistaken on a few subtle but important points.  I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.

  • When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.
  • So, if organizations are not as good as a human being at composing music, that shouldn't disqualify them from being considered broadly intelligent if that skill has nothing to do with their goals.
  • Many organizations are quite good at AI research, or outsource their AI research to other organizations with which they are intertwined.
  • The cognitive power of an organization is not limited by the size of skulls. The computational power of many organizations comprises both the skulls of its members and, possibly, "warehouses" of digital computers.
  • With the ubiquity of cloud computing, it's hard to say that a particular computational process has a static spatial bound at all.
In summary, organizations often have the kinds of skills necessary to achieve their goals, and can be vastly better at them than individual humans. Many have the skills necessary for their own cognitive enhancement, since if they are able to raise funding they can purchase computational resources and fund artificial intelligence research. More mundanely, organizations of all kinds hire analysts and use analytic software to make instrumentally rational decisions.

In short, many organizations are of supra-human intelligence and actively strive to enhance their cognitive powers.

Mean organizations


Suppose the premise that there are organizations with supra-human intelligence that act to enhance their cognitive powers.  And suppose the other premises of the Singularitarian argument outlined at the beginning of this post.

Then it follows that we should be concerned if one or more of these smart organizations are so unlike human beings in their motivational structure that they are 'mean'.

I believe the implications of this line of reasoning may be profound, but as this is my first post to LessWrong I would like to first see how this is received before going on.

* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication.  As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.

Politics Discussion Thread December 2012

5 OrphanWilde 04 December 2012 08:19PM

I skipped October and November owing to election season, but opening back up:

  1. Top-level comments should introduce arguments; responses should be responses to those arguments. 
  2. Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both. 
  3. A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate. 
  4. In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.

As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.

Detecting Web baloney with your nose?

-3 uzalud 10 November 2012 03:50PM

Is there a useful heuristic for detecting rationally-challenged texts (Web pages, forum posts, Facebook comments) that takes relatively superficial attributes, such as formatting choices and spelling errors, as input? Something a casual Internet reader could use to flag possibly unworthy content, so they can suspend belief and research the matter further. Let's call them "text smells" (by analogy with code smells), like:

  1. too much emphasis in text (ALL CAPS, bold, color, exclamations, etc.);
  2. walls of text;
  3. little concrete data/links/references;
  4. too much irrelevant data and references;
  5. poor spelling and grammar;
  6. obvious half-truths and misinformation.

Since many crackpots, pseudoscientific con artists, and conspiracy theorists seem to have cleaned up their Web sites in recent years, I wonder whether these low-cost baloney-detection tools might be of real value. Does anyone know of any studies or analyses of the correlation between these basic metrics and the actual quality of the content? Can you think of other smells typical of Web baloney?
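For what it's worth, several of the listed smells are mechanically checkable. Here is a rough sketch; the thresholds are illustrative guesses, not validated metrics, and smells 4 and 6 (irrelevant references, half-truths) would need far more than surface features:

```python
import re

def text_smells(text):
    """Return a list of crude 'text smell' flags found in text.

    Thresholds below are arbitrary illustrative choices.
    """
    smells = []
    words = text.split()
    if words:
        # Smell 1: too much emphasis (ALL-CAPS words, exclamations).
        caps = sum(1 for w in words if len(w) > 2 and w.isupper())
        if caps / len(words) > 0.1:
            smells.append("too much ALL-CAPS emphasis")
    if text.count("!") > 3:
        smells.append("excessive exclamation marks")
    # Smell 2: walls of text (very long unbroken paragraphs).
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    if any(len(p.split()) > 300 for p in paragraphs):
        smells.append("wall of text")
    # Smell 3: no concrete links or references at all.
    if not re.search(r"https?://", text):
        smells.append("no links or references")
    return smells
```

A detector like this is cheap to run but, as the post notes, easy for a motivated crank to game, which is exactly why correlation studies against actual content quality would be interesting.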

 

NKCDT: The Big Bang Theory

-12 [deleted] 10 November 2012 01:15PM

Hi, Welcome to the first Non-Karmic-Casual-Discussion-Thread.

This is a place for [purpose of thread goes here].

In order to create a casual, non-karmic environment for everyone, we ask that you:

-Do not upvote or downvote any zero karma posts

-If you see a post with positive karma, downvote it towards zero, even if it’s a good post

-If you see a post with negative karma, upvote it towards zero, even if it’s a weak post

-Please be polite and respectful to other users

-Have fun!”

 

 

This is my first attempt at starting a casual conversation on LW where people don't have to worry about winning or losing points, and can just relax and have social fun together.

 

So, Big Bang Theory. That series got me wondering. It seems to be about "geeks", and not the basement-dwelling variety either; they're highly successful and accomplished professionals, each in their own field. One of them has been an astronaut, even. And yet, everything they ever accomplish amounts to absolutely nothing in terms of social recognition or even in terms of personal happiness. And the thing is, it doesn't even get better for their "normal" counterparts, who are just as miserable and petty.

 

Consider, then: how would being rationalists affect the characters on this show? The writing of the show relies a lot on laughing at people rather than with them; would rationalist characters subvert that? And how would that rationalist outlook express itself, given their personalities? (After all, notice how amazingly different from each other Yudkowsky, Hanson, and Alicorn are, just to name a few; they emphasize rather different things and take different approaches to both truth-testing and problem-solving.)

Note: this discussion does not need to be about rationalism. It can be a casual, normal discussion about the series. Relax and enjoy yourselves.

 

But the reason I brought up that series is that its characters are excellent examples of high intelligence hampered by immense irrationality. The apex of this is represented by Dr. Sheldon Cooper, who is, essentially, a complete fundamentalist over every single thing in his life; he applies this attitude to everything, right down to people's favorite flavor of pudding: Raj is "axiomatically wrong" to prefer tapioca, because the best pudding is chocolate. Period. This attitude makes him a far, far worse scientist than he thinks, as he refuses to even consider any criticism of his methods or results. 

 

[LINK] How rational is the US federal state

-21 Thomas 30 October 2012 08:56AM

http://www.weeklystandard.com/blogs/over-60000-welfare-spentper-household-poverty_657889.html

 

$60,000 per year per poor family, if the article is correct.

How to Deal with Depression - The Meta Layers

26 ShannonFriedman 26 October 2012 06:44PM

I wrote this for the Positive Vector website a while back, and lots of people have found it valuable, so I want to share it with the Less Wrong community as well.  I think this applies to most people - the meta-suffering thing is something I see everywhere, even though it is most prominent in people who have depression.  This is based on my experience working with depressed people and studying Buddhism, especially Big Mind.  Enjoy!

 

---

The roots of suffering are often deep.  But not all of the suffering happens at the root.  A lot of the suffering that people experience is “meta” suffering.  Meta suffering is when you suffer because you are distressed that you are suffering.  You are feeling depressed and hopeless, and there is a part of you that genuinely fears that it will never end.  That you will feel this way forever.  This fear of the suffering persisting can cause you much more suffering than whatever started your suffering.  And it can last much longer.  At some point days later, you might think to yourself about how terrible that initial suffering was, and feel fear and suffering about the possibility of it coming back.

Many people suffer as much or more from meta-suffering than suffering that comes from physical or situational sources!  

The good news is that meta suffering is much easier to fix than deeper forms of suffering.

One thing you can do is collect data* in order to develop an accurate model of how often you actually feel bad.  Try monitoring your moods for a while to get a baseline for what your moods actually are.  At least half of the people who have suffered from major depression, done this, and spoken with me about it have been surprised to find that they often feel better than their self-perception suggests when they assess their mood at random points throughout the day.
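As a toy illustration of the baseline idea (the numbers below are invented; a real log would come from your own tracking):

```python
from statistics import mean

# Hypothetical mood log: self-ratings from 1 (awful) to 10 (great),
# sampled at random times over two weeks. These values are made up.
mood_log = [6, 4, 7, 5, 6, 3, 7, 6, 5, 8, 6, 4, 7, 6, 5]

baseline = mean(mood_log)                                      # typical mood
low_fraction = sum(m <= 3 for m in mood_log) / len(mood_log)   # time spent very low

print(f"baseline mood: {baseline:.1f}/10")
print(f"fraction of samples at 3 or below: {low_fraction:.0%}")
```

Even a crude summary like this can counter the fear that the bad state is permanent: in this invented log the very-low samples are a small minority, which is exactly the kind of surprise the paragraph above describes.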

Regardless of what your default mood state or range is, once you know what it is, you are likely to feel less fear.  You can look at what your mood historically does over time, and feel more confidence that this is what it will do in the future.  When you are in the state of despair and wondering if it will last forever, odds are that it won’t.

Another extremely powerful technique for dealing with meta-suffering is accepting that you are suffering.  Meta-suffering arises because you really want to change your state and are not succeeding.  If you can just be with the state, not making yourself bad or wrong for being in it, then all you have to deal with is the base state of suffering, which will be less intense and last less long than if you tack on that extra meta layer.

The ironic thing is that just by thinking that thought, if you are prone to depression, you will probably notice yourself meta suffering and then feel guilt or shame about it.  If this happens, my advice is to take it to the next level – feel compassion and acceptance for your meta-meta-suffering.

As you make this a practice, and feel acceptance and compassion for your suffering, you will feel more freedom from the meta level, and have more resources to work with the underlying suffering or depression.

Another common way in which meta-suffering sabotages people with depression is for them to feel depression as soon as they start feeling good.  The story that some people have is that it is futile to hope they might feel so good in the future, and better not to get their hopes up and have them crushed.  I encourage the person with this meta-suffering story to assure the meta-suffering part that they have no obligation to feel good in the future.  Feeling good in the present is of value for however long it lasts, and that is worth appreciating.

Desiring more pleasant states is great.  Working to create those states is fabulous.

Feeling guilt, shame, depression, or other suffering because of not liking your current state or projected future state does not contribute to your feeling better, and is something that is pretty purely good to release.  


* Example of a site to track depression levels over time.   

Using existing Strong AIs as case studies

-6 ialdabaoth 16 October 2012 10:59PM

I would like to put forth the argument that we already have multiple human-programmed "Strong AIs" operating among us, that they already exhibit clearly "intelligent", rational, self-modifying goal-seeking behavior, and that we should systematically study these entities before engaging in any particularly detailed debates about "designing" AI with particular goals.

They're called "Bureaucracies".

Essentially, a modern bureaucracy - whether it is operating as the decision-making system for a capitalist corporation, a government, a non-profit charity, or a political party - is an artificial intelligence that uses human brains as its basic hardware and firmware, allowing it to "borrow" a lot of human computational algorithms to do its own processing.

The fact that bureaucratic decisions can be traced back to individual human decisions is irrelevant - even within a human or computer AI, a decision can theoretically be traced back to single neurons or subroutines - the fact is that bureaucracies have evolved to guide and exploit human decision-making towards their own ends, often to the detriment of the individual humans that comprise said bureaucracy.

Note that when I say "I would like to put forth the argument", I am at least partially admitting that I'm speaking from hunch, rather than already having a huge collection of empirical data to work from - part of the point of putting this forward is to acknowledge that I'm not yet very good at "avalanche of empirical evidence"-style argument. But I would *greatly* appreciate anyone who suspects that they might be able to demonstrate evidence for or against this idea, presenting said evidence so I can solidify my reasoning. 

As a "step 2": assuming the evidence weighs in towards my notion, what would it take to develop a systematic approach to studying bureaucracy from the perspective of AI or even xenosapience, such that bureaucracies could be either "programmed" or communicated with directly by the human agents that comprise them (and ideally by the larger pool of human stakeholders that are forced to interact with them?)
