Here's my claim: 

"Ideal Bureaucratic Structure" (or IBS for short as I imagine this claim is likely to meet serious groans, maybe even deep from within your bowels) has an unparalleled ability to control and direct individual and collective groups of general intelligent agents' behavior towards common goals. It may be particularly useful when considering "multi-stakeholder/multi-agent interactions leading to extinction events" and "multi-agent processes with a robust tendency to play out irrespective of which agents execute which steps in the process...Robust Agent-Agnostic Processes (RAAPs)."

Max Weber put it this way:

"The decisive reason for the advancement of bureaucratic organizations (read: IBS) has always been the purely technical superiority over all other administrative forms. A fully developed bureaucratic mechanism compares to other administrative forms in the same way machines compares to nonmechanical means for producing goods. A strictly bureaucratic administration -- especially an monocratic administration run by trained, individual Beamte (read: agent) -- produces an optimal efficiency for precision, speed, clarity, command of case knowledge, continuity, confidentiality, uniformity, and tight subordination. This is in addition to minimization of friction and the costs associated with materials and personnel. The opposite is true for all other forms of administration, such as collegial, honorary, or adjunct administration."

The Ideal Bureaucratic Structure (IBS) provides an idealized structure for the flow of information, decision points, and actions of a multi-agent system and for the types of agent positions that need to be available to process information and execute actions. 

Maybe this is a good place to 'fess up that I come to questions of AI existential safety through an AI governance lens, in which I am particularly concerned about "the problem of aligning the development and deployment of AI technologies with broadly agreeable human values" as it develops in Multi/Multi (multi-human/multi-AI) scenarios.

After diving into the history of bureaucratization and some of its prerequisites and consequences, towards the end of his chapter on bureaucracy, Weber says (and rationalists everywhere cheered):

"It is also apparent that general bureaucratic structures have only recently developed. The farther back in time we go, the more typical the absence of bureaucracy and Beamte is for the structure of domination (read: control) and governance. The bureaucracy has a rational character, and its regulations, purposes, means, and impersonal objectivity that control its demeanor. Therefore, the development and spreading of bureaucracy had, in a special sense, a 'revolutionary' effect everywhere (which needs to be discussed later), just like the advancement of rationalism in general was prone to a 'revolutionary' effect in all areas."

Cheers? Anyone? Weber claims rationalism has a revolutionary effect in all areas, and bureaucracy is what happened once rationalism spread to the organization of multi-agent systems: agents organized towards purposes, by agreed-upon means, with impersonal objectivity. I'm cheering Weber on anyway, and I hope my cheers are contagious. If not, one more quick ode from Weber to rationalism:

"But it is essential to recognize that at least in principal, behind every action of a true bureaucratic administration exists a system of rationally identified 'reasons,' which are either the application of norms or reasoning based on balancing purposes and means."

Bureaucracy as the hero of rationality! Who knew?! 

Unfortunately there is a catch. A big catch as it were:

Weber again: "And, so in light of this historical view, we need to remember that bureaucracy, taken as it is, is just an instrument of precision that can be put to service by purely political, economic, or any other dominating or controlling interest. Therefore the simultaneous development of democratization and bureaucratization should not be exaggerated, no matter how typical the phenomena may be." 

Yikes, okay, it seems like Weber understood the notion of the orthogonality thesis. But this doesn't capture Weber's full views on the topic. Weber dedicates additional time to "The Persistent Character of the Bureaucratic Apparatus," where he paints a more nuanced picture of the staying power of the Robust Agent-Agnostic Process (RAAP) that is the Ideal Bureaucratic Structure (IBS).

Back to Weber for an extended direct quote (it's worth it):

"A mature bureaucracy is an almost indestructible social structure.

Bureaucratization is the ultimate specific means to turn a mutually agreed upon community action rooted in subjective feeling into action rooted in a rational agreement by mutual consent. Thus bureaucratization serves as a means to establish ties rooted in a rational agreement by mutual consent within the structures of domination. Bureaucratization becomes the ultimate means of power for those who dominate the bureaucratic apparatus. This is so, given the same conditions, because a systematically organized and managed action rooted in a rational agreement by mutual consent is superior to any kind of reluctant 'mass' or community action.

Once an administration is fully bureaucratized, a virtually permanent structure of domination ties is created and the individual Beamte cannot escape the apparatus in which he is situated. In contrast to the professional Honoratioren who administrates on an honorary and part-time basis, the professional Beamte is chained to his work with his whole existence, both material and nonmaterial. This holds true for the majority of individual Beamte, since he is only a single cog in a restlessly operating machine and simply entrusted with isolated tasks. This machine is prompted to move or stand still only by the highest level in the bureaucratic hierarchy, not typically by the Beamte himself; thus, this mechanism prescribes the fixed procedures the Beamte takes in approaching his tasks. As a result, and above everything else, the individual Beamte is chained in a “syndicate” to every other functionary who is incorporated into this machine. This syndicate has a vital interest in keeping this machine operating in order so that this kind of dominion through ties rooted in rational agreement by mutual consent continues.

On the other hand, the governed people are not able to do without a bureaucratic control apparatus once it is established nor can they replace it. This is because the control of the bureaucracy is based on the methodical synthesis of specialized training: specialization in one area of the division of labor and fixation on single functions which are brilliantly mastered. If the bureaucratic apparatus ceases to do its work, or if its work is violently obstructed, chaos will erupt."

Do you see it? The whole "bureaucracy as a robust agent-agnostic process" thing? The Beamte, for example, is a position held by an agent, but it is agent-agnostic ("Thus, the execution of roles (“leader”, “follower”) is somewhat agnostic as to which agents execute them."). And the ideal bureaucratic structure is robust by design ("If you temporarily distract one of the walkers to wander off, the rest of the group will keep heading toward the restaurant, and the distracted member will take steps to rejoin the group.").

I've gone on long enough without filling in the outline of the Ideal Bureaucratic Structure that I've sketched. Here are some of the nuts and bolts:

Weber provides six specific features of the IBS (he calls it the Modern Bureaucracy); a toy code sketch follows the list:

  1. The principle of fixed competencies
  2. The principle of hierarchically organized positions
  3. Actions and rules are written and recorded
  4. In-depth specialist training for agents undertaking their positions
  5. The position is full-time and occupies all the professional energy of the agent in that position
  6. The duties of the position are based on general learnable rules and regulations, which are more or less firm and more or less comprehensive
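To make these six features concrete, here is a minimal Python sketch. It is my own illustration, not anything in Weber; all class and field names are invented, and each comment maps a field to the feature above that it is meant to encode.

```python
from dataclasses import dataclass, field

@dataclass
class Position:
    """One office (Amt) in the structure."""
    title: str
    competencies: frozenset                 # feature 1: fixed jurisdiction of tasks
    required_training: str                  # feature 4: in-depth specialist training
    superior: "Position | None" = None      # feature 2: hierarchical ordering
    full_time: bool = True                  # feature 5: all professional energy

@dataclass
class Bureaucracy:
    """Positions plus general learnable rules and a written record."""
    positions: list = field(default_factory=list)
    rules: list = field(default_factory=list)    # feature 6: learnable regulations
    record: list = field(default_factory=list)   # feature 3: actions are written down

    def act(self, position: Position, task: str) -> bool:
        """A position may act only within its fixed competencies."""
        allowed = task in position.competencies
        self.record.append(f"{position.title}: {task} -> {'done' if allowed else 'refused'}")
        return allowed
```

On this toy model, a clerk whose competencies are frozenset({"issue_permit"}) will refuse "set_policy", and the refusal itself lands in the written record.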

Weber goes on to argue that a particular type of agent, a Beamte, is needed to fulfill the various positions that specialization demands for processing information and executing actions. So what does the position or role of the Beamte demand?

  1. The position is seen as a calling and a profession
  2. The Beamte (the agent) aims to gain and enjoy high appreciation from people in power
  3. The Beamte is nominated by a higher authority
  4. The position is held for life
  5. The Beamte receives regular remuneration
  6. The Beamte is organized into a professional career track

Ok, here I quote myself and a co-author summarizing IBS and the dual creation of both process and agent roles (work currently under review at an academic journal):

"To recap, the modern bureaucracy (Beamtentum), according to Weber, comprises various organized sets of offices (Behorde), that contain a variety of bundled tasks as individual positions (Amt), that are fulfilled by human workers often thought of as bureaucrats (Beamte). These six overriding characteristics of bureaucracy elucidate the relationships between organizational structure, underlying tasks to be accomplished, and the expectations of humans fulfilling these roles.

 (AND HERE WE MAKE THE RELEVANT (PARTIAL) LEAP TO AI SAFETY)... 

From here, if we broaden our conceptualization of the Beamte to include not only human agents but also machine agents, we can examine how well both human and machine agents may fulfill the role of Beamte and what this could mean for the structure of offices (Behörde), bureaus (Büro), and the broader characteristics and functioning of modern bureaucracy (Beamtentum)."

Why "only" the partial leap?

When I began considering the role of AI in society, it came through the lens of trying to understand how the incorporation of AI systems into the decision making of public organizations influences the decision making process of those organizations and thus their outputs. My thinking went something like "Humans have been using judgment to make the decisions within organizations for a long time, what happens when this human judgment is replaced by machine judgment across certain tasks? What will this do to the outputs of public organizations delivering public services?" My intuition and my work with colleagues on this issue does suggest to me that it is an important set of questions for AI Governance.

AI Safety, AI Alignment, Bureaucracy

But, as I've thought about it more, I think there may be additional value in the notion of Ideal Bureaucratic Structure as a prescriptive and normative ideal for creating Robust Agent-Agnostic Processes. Such processes could ensure control and alignment in a slow-takeoff scenario in which multiple increasingly intelligent AI systems are developed and deployed in multi-human systems, or in scenarios where collections of increasingly intelligent AI systems are dominated or controlled by the bureaucratic structure. (It strikes me that this is akin in some interesting ways to the CAIS model, but I will save that elaboration for another time.) It seems to me that AI alignment is in large part about the domination or control of the behavior of the AI in a way that is aligned with human values and that allows the AI to act on behalf of humans and human values.

In this regard, it seems to me that building machine Beamte to fulfill the various desired societal positions of a functioning, democratically controlled ideal bureaucratic structure could, at least in theory, give us controllable collections of arbitrarily intelligent artificial intelligences. While functioning as embedded individual agents making decisions and executing actions, they would be better described as agents within a robust agent-agnostic process that is controlled, by design, by rationally agreed upon mutual cooperation.
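To make the idea of a machine Beamte more tangible, here is a minimal sketch of what such a position wrapper might look like. Everything in it is invented for illustration, a sketch of the concept rather than a proposed implementation: the point is that the position, not its occupant, carries the constraints.

```python
from typing import Callable, Optional

Policy = Callable[[str], str]  # any decision-maker: a human, an ML model, a rule engine

class MachineBeamte:
    """A position wrapper: whichever policy occupies the position, the same
    procedural constraints apply. That is the agent-agnostic property."""

    def __init__(self, title: str, competencies: set,
                 policy: Policy, superior: Optional["MachineBeamte"] = None):
        self.title = title
        self.competencies = competencies  # fixed jurisdiction (Weber's feature 1)
        self.policy = policy              # the occupying agent is swappable
        self.superior = superior          # escalation runs up the hierarchy

    def handle(self, task: str, request: str) -> str:
        if task in self.competencies:
            return self.policy(request)                  # act within fixed competencies
        if self.superior is not None:
            return self.superior.handle(task, request)   # out of scope: escalate upward
        raise PermissionError(f"{self.title}: no authority for '{task}'")
```

Swapping the policy, say replacing a human reviewer with a narrow model, leaves the structure's behavior unchanged, which is exactly the robustness-to-agent-substitution that RAAPs describe.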

Concluding Remarks

Of course, it is all much more complicated than this, but I think there is useful insight in the following:

  1. Ideal Bureaucracy Structure (IBS) as an attempt at rationalizing multi-agent self-interest towards achieving mutual goals
  2. IBS as a Robust Agent-Agnostic Process (RAAP)
  3. Integration of AI systems into human dominated RAAPs may alter those processes
  4. The creation of Machine Beamte as controllable agents that act in aligned ways with democratic preferences

This is obviously still in its early sketches, but I hope you find it instructive all the same. 

Cheers for my first lesswrong.com post, and cheers to the advancement of rationalism in general as it is prone to a 'revolutionary' effect in all areas.

Comments

I see the largest challenge here being that it doesn't deal with the fundamental reasons why AI misalignment is a concern. Bureaucracies are notorious homes to Goodhart effects, and they have as yet found no way to totally control them. Classic examples are things like managers optimizing for their own promotion, or for the performance of their department to the detriment of the larger organization and whatever its goals are.

Now, to be fair, bureaucracies do manage to achieve a limited level of alignment, and they can use various mechanisms that generate more vs. less alignment (examples of things that help: OKRs, KPIs, mission and value statements). They do sometimes manage to rein in what would otherwise be threats to political stability (ambitious people direct their ambition to moving up in the bureaucracy rather than taking over a country), but they also sometimes fail at this when you get smart actors who know how to game the system.

I'd argue that Robert Moses is a great example of this. He successfully moved up the bureaucracy in New York to, yes, get stuff done, but also did it in ways that many people would consider "unaligned with human values" and also achieved outcomes of mixed value (good for some people (affluent whites), bad for others (everyone else)). What we can say is he did stuff that was impressive that got and kept him in power.

Actually, I think this generalizes: if you want to design an alignment mechanism, you should expect that it could at least force Robert Moses to be aligned, a man who expertly gamed the system to achieve his goals the same way we expect transformative AI to be able to.

Thank you for the insights. I agree with your insight that "bureaucracies are notorious homes to Goodhart effects and they have as yet found no way to totally control them." I also agree with your intuition that "to be fair bureaucracies do manage to achieve a limited level of alignment, and they can use various mechanisms that generate more vs. less alignment."

I do, however, believe that an ideal type of bureaucratic structure helps with at least some forms of the alignment problem. If, for example, Drexler is right (and my conceptualization of the theory is right), CAIS expects a slow takeoff of increasingly intelligent narrow AIs that work together on different components of intelligence or on completing intelligent tasks. In this case, I think Weber's suggestions, both on how to create generally controllable intelligent agents (Beamte) and on constraining individual agents' authority to certain tasks, with agents nominated to higher tasks by those with more authority (weight, success, tenure, etc.), have something helpful to say about the design of narrow agents that might work together towards a common goal.
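As a sketch of the nomination idea, and nothing more than that (the scoring scheme and all numbers are invented), promotion to higher tasks could be gated by a track record that must clear a bar set by the nominating authority:

```python
def nominate(track_record: dict, nominator_authority: float, threshold: float = 0.8) -> list:
    """A higher authority nominates agents whose track record clears a bar
    scaled by the nominator's own authority. All values are illustrative."""
    bar = threshold * nominator_authority
    return [agent for agent, score in track_record.items() if score >= bar]

# A supervisor with full authority promotes only the agent with a proven record.
print(nominate({"planner": 0.9, "scheduler": 0.6}, nominator_authority=1.0))  # ['planner']
```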

My thoughts here are still in progress and I'm planning to spend time with these two recent posts in particular to help my understanding:

https://www.lesswrong.com/posts/Fji2nHBaB6SjdSscr/safer-sandboxing-via-collective-separation

https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models


One final thing I would add is that I think many of the problems with bureaucracies can often be characterized around limits of information and communication (and how agents are trained, how they are motivated, and what the most practical or useful levels of hierarchy or discretion are). I think the growth of increasingly intelligent narrow AIs could (under the right circumstances) drastically reduce information and communication problems.

Thanks again for your comment. The feedback is helpful. I hope to make additional posts in the near future to try and further develop these ideas.

Yeah, I guess I should say that I'm often worried about the big problem of superintelligent AI and not much thinking about how to control narrow and not generally capable AI. For weak AI, this kind of prosaic control mechanism might be reasonable. Christiano thinks this class of methods might work on stronger AI.

I think this approach may have something to add to Christiano's method, but I need to give it more thought. 

I don't think it is yet clear how this structure could help with the big problem of superintelligent AI. The only contributions I see clearly enough at this point are redundant to arguments made elsewhere. For example, the notion of a "machine Beamte" as one that can be controlled through (1) appropriate training and certification, (2) various motivations and incentives for aligning behavior with the knowledge from training, and (3) nomination by a higher authority for more influence. These are not novel considerations of course, but I think they do very much point to the same types of concerns about how to control agent behavior in an aligned way when the individual intelligent agents may have some components that are not completely aligned with the goal function of the principal (the organization in this context; keeping superintelligent AI controlled by humanity as another potential context).

Thanks for the follow up.

While I do think the rise of bureaucracies is inevitable, it's important to remember that there is a tradeoff between bureaucracy and innovation.

I'm not sure that the statement "A mature bureaucracy is an almost indestructible social structure" is false so much as sub-optimal. The easiest place to see this is in businesses. Businesses follow a fairly predictable cycle:

  1.  A new business is created that has access to some new innovative idea or technology that allows it to overcome entrenched rivals
  2. As the business grows, it develops a bureaucracy that allows its actions to more efficiently "mine" its advantage and fend off competitors.  This increased business efficiency comes at a tradeoff with innovation.  The company becomes better at the things it does and worse at the things it doesn't.
  3. Eventually the world changes and what was once a business advantage is now a disadvantage.  However the business is now too entrenched in its own bureaucracy to change.
  4. New innovative rivals appear and destroy what remains of the business.

Or, to quote Paul Graham: "Companies never become less bureaucratic, but they do get killed by startups that haven't become bureaucratic yet, which amounts to the same thing."

I suspect that a similar cycle plays out in the realm of public governance as well, albeit on a much larger time scale.  Consider the Chinese concept of the Mandate of Heaven.  As governments age, they gradually become less responsive to the needs of the people until they are ultimately overthrown.  Indeed, one of the primary advantages of multi-party democracy is that the ruling party can be periodically overthrown without burning down the entire country first.


The basic energy behind this process is rule 6 of your bureaucratic process: "The duties of the position are based on general learnable rules and regulations, which are more or less firm and more or less comprehensive."

Because bureaucracies follow a fixed set of rules (and because rules are much more likely to be added than repealed), the action of the bureaucracy becomes more stratified over time.  This stratification leads to paralysis because no individual agent is capable of change, even if they know what change is needed and want to implement it.  Creating a bureaucracy is creating a giant coordination problem that can only be solved by replacing the bureaucracy.


What does any of this mean for AI?

Will we use bureaucracies to govern AI?  Yes, of course we will.  I am doing some work with GPT-3, and OpenAI has already developed a set of rules governing its use and a series of procedures for determining if those rules are being followed.

Can we imagine a single "perfect bureaucracy" that will govern all of AI on behalf of humans?  No.  Just as businesses and governments need to periodically die in order to allow innovation, so too must the bureaucracies that govern AI.  Indeed, one sub-optimal singularity would be if a single bureaucracy of AIs became so powerful that it could never be overthrown.  This would hopefully leave humans much better off than they are today, but permanently locked in at whatever level of development the bureaucracy had reached prior to ossification.

Is there some post-bureaucracy governance model that can give us the predictability/controllability of bureaucracy without the tradeoff of lost innovation?  If you consider a marketplace with capitalistic competition a "structure", then sure.  If AI is somehow able to solve the coordination problem that leads to ossification of bureaucracy (perhaps that problem is a result of the limits on humans' cognitive abilities), then maybe?  I feel like the tradeoff between rigid predictable rules and innovation is more fundamental than just the coordination problem, but I could be wrong.

Thank you for the comment. There are several interesting points I want to comment on. Here are my thoughts in no particular order of importance:

  • I think what I see as your insight on rigidity versus flexibility (rigid predictable rules vs. innovation) is helpful and something that is not addressed well in my post. My own sense is that an ideal bureaucracy structure could be rationally constructed to balance the tradeoffs between rigidity and innovation. Here I would take Weber's rule 6, which you highlight, as an example. As represented in the post, it states: "The duties of the position are based on general learnable rules and regulations, which are more or less firm and more or less comprehensive." I take this to mean that rules and regulations need to be "learnable," not static. A machine Beamte (a generally intelligent AI) should be able to quickly update on new rules and regulations. The condition of "more or less firm and more or less comprehensive" seems more like a coherence condition than a static one.
  • This builds towards what I see as your concern about an ideal bureaucracy structure consisting of fixed rules, ossification, and a general inability to adapt successfully to changes in the type and character of complexity in the environment in which the bureaucracy is embedded. My sense is that these are not fundamental components of a rationally applied bureaucratic structure, but rather of the limited information and communication capabilities of the agents that hold the positions within the bureaucratic structure. My sense is that AIs could overcome these challenges given some flexibility in structure based on a weighted voting mechanism among the AIs (see the sketch after this list).
  • One note here is that, for me, an ideal bureaucracy structure doesn't need to perfectly replicate Weber's description. Instead it would appropriately take into account what I see as the underlying fact that complexity demands specialization and coordination, which implies hierarchy. An ideal bureaucracy structure would be one that requires multiple agents to specialize and coordinate to solve problems of any arbitrary level of complexity, which requires specifying both horizontal and vertical coordination. Weber's conceptualization as described in the post, I think, deserves more attention for the alignment problem, given that I think bureaucracies' limitations can mostly be understood in terms of human limitations in information processing and communication.
  • I think I share your concern with a single bureaucracy of AIs being suboptimal, unless the path to superintelligence is through iterated amplification of more narrow AIs that eventually leads to joint emergent superintelligence constrained in an underlying way by the bureaucratic structure, training, and task specialization. This is a case where (I think) the emergence of a superintelligent AI that in reality functions like a bureaucracy would not necessarily be suboptimal. It's not clear to me why it would need to be overthrown if the bureaucratic norms and training could be updated so that better rules and regulations could be imposed upon it.
  • I would suggest that market competition and bureaucratic structure are along a continuum of structures for effectively and efficiently processing information. One takes a decentralized approach based largely on prices to convey relevant value and information; the other takes a more centralized approach implied by loosely organized hierarchical structures that allow for reliable specialization. It seems to me that market mechanisms also have their own tradeoffs between innovation and controllability. In other words, I do not see that the market structure dominates the bureaucratic or centralized approach across these tradeoffs in particular.
  • There are other governance models that I think are helpful for the discussion as well; Weber's is one of the oldest in the club. One is Herbert Simon's Administrative Behavior (which is generalized to other types of contexts in his The Sciences of the Artificial). Another is Elinor Ostrom's Institutional Analysis and Development Framework. My hope is to build out posts in the near future taking these adjustments in structure into consideration and discussing the tradeoffs.
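Here is the kind of weighted voting mechanism I have in mind, as a toy sketch only; the weights and the simple majority rule are assumptions I'm making for illustration, not a worked-out proposal:

```python
def weighted_vote(ballots: dict, weights: dict) -> bool:
    """Approve a rule change if the weight-sum of 'yes' votes exceeds half the
    total weight. Weights might encode tenure, track record, or authority."""
    total = sum(weights.values())
    yes = sum(weights[agent] for agent, vote in ballots.items() if vote)
    return yes > total / 2

# Three machine Beamte vote on amending a regulation (all values invented).
ballots = {"beamte_a": True, "beamte_b": False, "beamte_c": True}
weights = {"beamte_a": 2.0, "beamte_b": 1.0, "beamte_c": 0.5}  # e.g., seniority-weighted
print(weighted_vote(ballots, weights))  # True: 2.5 of 3.5 total weight approves
```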


Thanks again for the comment. I hope my responses have been helpful. Additional feedback and discussion are certainly welcomed! 

My sense is that these are not fundamental components of a rationally applied bureaucratic structure, but rather of the limited information and communication capabilities of the agents that hold the positions within the bureaucratic structure. My sense is that AIs could overcome these challenges given some flexibility in structure based on a weighted voting mechanism among the AIs.


I think this is the essential question that needs to be answered: is the stratification of bureaucracies a result of the fixed limit on human cognitive capacity, or is it an inherent limitation of bureaucracy?

One way to answer such a question might be to look at the asymptotics of the situation.  Suppose that the number of "rules" governing an organization is proportional to the size of the organization.  The question is then whether the complexity of the coordination problem also increases only linearly.  If so, it is reasonable to suppose that humans (with a finite capacity) would face a coordination problem but AI would not.

Suppose instead that the complexity of the coordination problem increases with the square of organization size.  In this case, as the size of an organization grows, AI might find the coordination harder and harder, but still tractable.  

Finally, what if the AI must consider all possible interactions between all possible rules in order to resolve the coordination problem?  In this case, the complexity of "fixing" a stratified bureaucracy is exponential in the size of the bureaucracy, and beyond a certain (slowly rising) threshold the coordination problem is intractable.
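To make the three regimes concrete, here is a toy tally (my framing of the cases above): reviewing rules one at a time is linear, checking every pair is quadratic, and checking every possible combination of rules is exponential.

```python
from math import comb

# How coordination cost could scale with the number of rules, in three regimes.
for n_rules in (10, 20, 40):
    print(f"{n_rules:>3} rules: "
          f"one-by-one={n_rules}, "              # linear regime
          f"pairwise={comb(n_rules, 2)}, "       # quadratic regime
          f"all-subsets={2 ** n_rules}")         # exponential regime
```

At 40 rules the pairwise check is still only 780 comparisons, while the all-subsets check already exceeds a trillion, which is the intuition behind "intractable beyond a slowly rising threshold."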

My sense is that AIs could overcome these challenges given some flexibility in structure based on a weighted voting mechanism among the AIs.

If weighted voting is indeed a solution to the problem of bureaucratic stratification, we would expect this to be true of both human and AI organizations.  In that case, great effort should be put into discovering such structures, because they would be of use in the present and not only in our AI-dominated future.

It's not clear to me why it would need to be overthrown if the bureaucratic norms and training could be updated so that better rules and regulations could be imposed upon it.

Suppose the coordination problem is indeed intractable.  That is to say, once a bureaucracy has become sufficiently complex, it is impossible to reduce the complexity of the system without unpredictable and undesirable side effects.  In this case, the optimal solution may be the one chosen by capitalism (and revolutionaries): periodically replace the bureaucracy once it is no longer near the efficiency frontier.

I would suggest that market competition and bureaucratic structure are along a continuum of structures for effectively and efficiently processing information.

There is undoubtedly a continuum of solutions between "survival of the fittest" capitalistic competition and "rules abiding" bureaucratic management.  The discovery of new "points" on this continuum (for example bureaucracy with capitalist characteristics) is something that deserves in-depth study.  

To take one example, the Bezos Mandate aims to structure communication between teams at Amazon more like a marketplace and less like a bureaucracy.  Google's 20% time is another example of purposely reducing management overhead in order to foster innovation.

It would be awesome if one could "fine tune" the level of competitiveness and thereby choose any point on this continuum.  If this were possible, one might even be able to use control theory to dynamically change the trade-off over time in order to maximize utility.
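As a toy illustration of that control-theory idea, and purely an assumption on my part (the "dial", the utility gradient, and the gain are all invented), a proportional controller could nudge the competition-vs-bureaucracy setting in whatever direction utility is currently increasing:

```python
def tune_competitiveness(dial: float, utility_gradient: float, gain: float = 0.1) -> float:
    """One proportional-control step: move the competition-vs-bureaucracy dial
    in the direction utility is increasing, clamped to the interval [0, 1]."""
    return min(1.0, max(0.0, dial + gain * utility_gradient))

# Utility is rising with more competition, so the dial moves up a little.
print(tune_competitiveness(dial=0.5, utility_gradient=0.8))  # ~0.58
```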

I love this exploration, but there are two parts of governance that need to work well in order to be effective.  Bureaucracy is one answer to "how" norms and rules get distributed, monitored, and enforced.  More important (and more interesting to me) is "what" rules are in scope, and the specifics of the rules that are enforced by the bureaucratic mechanisms.

Thanks. I think your insight is correct that governance requires answers to the "how" and "what" questions, and that the bureaucratic structure is one answer to the "how" that leaves the "what" unanswered. I don't have a good technical answer, but I do like an interesting proposal by Hannes Alfven, in the book "The End of Man?" that he published under the pseudonym Olof Johannesson, called Complete Freedom Democracy. The short book is worth the read, but hard to find. The basic idea is a parliamentary system in which all humans, through something akin to a smartphone, rank-vote proposals. I'll write up the details some time!
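Since the book (as I remember it) doesn't pin down a tally rule, here is one hypothetical way the rank-voting could work, using a Borda count; the proposals and ballots are made up:

```python
def borda_winner(rankings: list) -> str:
    """Borda count: each ballot ranks all proposals; a proposal earns
    (n - position) points per ballot, and the highest total wins."""
    scores = {}
    for ballot in rankings:
        n = len(ballot)
        for place, proposal in enumerate(ballot):
            scores[proposal] = scores.get(proposal, 0) + (n - place)
    return max(scores, key=scores.get)

# Three citizens rank three hypothetical proposals from their devices.
print(borda_winner([["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]))  # "B"
```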

Weber again: "And, so in light of this historical view, we need to remember that bureaucracy, taken as it is, is just an instrument of precision that can be put to service by purely political, economic, or any other dominating or controlling interest. Therefore the simultaneous development of democratization and bureaucratization should not be exaggerated, no matter how typical the phenomena may be." Yikes, okay, it seems like Weber understood the notion the orthogonality thesis."

Isn't this interesting: Weber's point is similar to the orthogonality thesis. This makes me realize a wider implication: the orthogonality thesis is actually very similar to the general argument of "technological progress is good" vs. "no, it isn't necessarily."

Weber: democratization isn't a given from bureaucratization.
Orthogonality thesis: intelligence and morality are orthogonal.
Technological caution argument: more powerful technology isn't by default a good thing for us.

I'm especially interested in contrasting orthogonality with technological caution. I'd like to express them in a common form. Intelligence is capability. Technology generally is capability. Morality = what is good. More capability dropped into parts of a society isn't necessarily a good thing, whether that part of society is an AI, a human, a social system, or a socio-technical system.

This is a generalization of the orthogonality thesis and the technological caution argument, assuming that AI gets embedded in society (which should be assumed).

As you likely know by now, I think the argument that “Technological Progress = Human Progress” is clearly more complicated than is sometimes assumed. AI is very much already embedded in society and the existing infrastructure makes further deployment even easier. As you say, “more capability dropped into parts of a society isn’t necessarily a good thing.”

One of my favorite quotes on the relationship between technological advancement and human advancement is from Aldous Huxley:

“Today, after two world wars and three major revolutions, we know that there is no necessary correlation between advanced technology and advanced morality. Many primitives, whose control over their environment is rudimentary, contrive nonetheless to be happy, virtuous, and, within limits, creative. Conversely, the members of civilized societies, possessed of the technological resources to exercise considerable control over their environment, are often conspicuously unhappy, maladjusted, and uncreative; and though private morals are tolerably good, collective behavior is savage to the point of fiendishness. In the field of international relations the most conspicuous difference between men of the twentieth century and the ancient Assyrians is that the former have more efficient methods of committing atrocities and are able to destroy, tyrannize, and enslave on a larger scale.

The truth is that all an increase in man’s ability to control his environment can do for him is merely to modify the situation in which, by other than technological means, individuals and groups attempt to make specifically human progress in creativeness, morality, and happiness. Thus the city-dwelling factory worker may belong, biologically speaking, to a more progressive group than does the peasant; but it does not follow that he will find it any easier to be happy, good, and creative. The peasant is confronted by one set of obstacles and handicaps; the industrial worker, by another set. Technological progress does not abolish obstacles; it merely changes their nature. And this is true even in cases where technological progress directly affects the lives and persons of individuals.”

— The Divine Within: Selected Writings on Enlightenment by Aldous Huxley, Huston Smith https://a.co/a0BFqOM

Thanks Justin! This is an interesting perspective. I'd enjoy seeing a compilation of different perspectives on ensuring AI alignment. (Another recurrent example would be the cybersecurity perspective on AI safety.)

Bureaucratization is the ultimate specific means to turn a mutually agreed upon community action rooted in subjective feeling into action rooted in a rational agreement by mutual consent.

This sounds a lot like the general situation of creating moral or judicial systems for a society. (When it works well.)

The principle of fixed competencies; the principle of hierarchically organized positions

Interestingly, these may run counter to Agile-associated practices and some practices I would consider generally good. It seems good to cultivate specialties, but also to cultivate some breadth in competencies. And to nurture bottom-up flows! Hierarchy has its limitations.

Thanks for the comment, David! It also caused me to go back and read this post again, which sparked quite a few old flames in the brain.

I agree that a collection of different approaches to ensuring AI alignment would be interesting! This is something that I'm hoping (now planning!) to capture in part with my exploration of scenario modeling that's coming down the pipe. But a brief overview of the different analytical approaches to AI alignment would be helpful (if it doesn't already exist in an updated form that I'm unaware of).

I agree with your insight that Weber’s description here can be generalized to moral and judicial systems for society. I suspect if we went looking into Weber’s writing we might find similar analogies here as well.

I agree with your comment on the limitations of hierarchy for human bureaucracies. Fixed competencies and hierarchical flows benefit from bottom-up information flows and agile adaptation. However, I think this reinforces my point about machine Beamte and AGI controlled through this method. For the same reasons that agility and modification benefit human organizations, you might think we would want to restrict these things for machine agents, deliberately sacrificing the benefits of adaptation in favor of aligned interests and controllability.

Thanks for the feedback! I can imagine some more posts in this direction in the future.

Bureaucracy is just as gameable as any other system. Human bad actors are able to use bureaucracies to their own ends, I see no reason to believe that AI couldn't do the same.

Might be worth checking out the Immoral Mazes sequence and the Gervais Principle to see how that goes down.

Thanks for this. I tabbed the Immoral Mazes sequence. On a cursory view it seems very relevant. I'll be working my way through it. Thanks again.