
The fact that blame-minimization seemed somewhat plausible as a first idea already tells us that the question might be unanswerable:

All these variables to be maximized or minimized are relevant for the respective theories in which they are central because they determine some sort of quality, or at least some sort of measurement, that enables the respective entity either to be better or worse than actual alternatives (competition between companies or politicians), or at least to enable others to hold the entity responsible for being better or worse than actual or imaginable alternatives (and therefore perhaps be replaced by another one, etc.).

This automatically leads to a selection-mechanism that shapes the respective entities - though the similarity to natural selection is at least in some cases pretty limited.

Since a theory of such entities is supposed to describe them, and therefore the shapes they have or take, it must at least in some way deal with (facts that depend on) the way those entities are being shaped.

But if it is really, at least to some degree, true that bureaucracies tend to limit blame(worthiness), then this entails that they tend to limit their own responsibility (in some regards). What follows is that, at least to some degree, they lack the respective shaping mechanism: the selective pressure that comes from sometimes being blamed and therefore sometimes being replaced by an alternative.

What follows (and seems very plausible, at least to me) is that the dominant shaping mechanism of a bureaucracy is just the sum of decisions that led to its installment (again, this holds only to the degree that this lack of blameworthiness is given).

Such decisions (correct me insofar as I am wrong, of course) are based on specific laws and regulations that ought to be implemented.

Therefore, when we try to formulate a theory of bureaucracies, we deal with a whole lot of individual differences in the defining characteristics of the entities that ought to be described, due to the differences in those laws and regulations.

I therefore suspect it will remain very hard to formulate an adequate theory of bureaucracy, at least if we stay at the level of simplicity where the maximization or minimization of a single variable is our starting point.

Of course, "the degree to which rule x is adequately implemented is being maximized" might be a fitting description, but the blame avoidance makes it impossible to say whether this rule actually describes the entities we want to describe, and not just the intentions that went into their installation.

(sorry if my writing is kind of inefficient - I fear this might be the case - as I am not a native speaker)

Dave Jacob

I guess it is just my lack of understanding (???), but, as far as I understand it, my own submission is actually hardly different (at least in terms of how it gets around the counter-examples we knew so far) from the "Train a reporter that is useful to an auxiliary AI" proposal.

My idea was simply to make the reporter useful for (or rather: a necessarily clearly and honestly communicating part of) our original smart-vault AI (instead of any auxiliary AI), by enforcing a structure of the overall smart-vault AI where its predictor can only communicate what to do to its "acting-on-the-world" part by using this reporter.

Additionally, I would have enforced that there is not just one such reporter but a randomized sequence of them, so that by having several different reporters basically play "Chinese whispers", they have a harder time converging on the use of some kind of hidden code within their human-style communication.

I assume the issue with my proposal is that the only thing I explained about why those reporters would communicate in a way understandable to humans in the first place was that this would simply be enforced by only using reporters whose output consists of human concepts, plus that, between each training step of the Chinese-whispers game, reporters would be filtered out if they stopped using human concepts as their output. A rough sketch of what I mean follows.
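To make the structure concrete, here is a minimal sketch of what I had in mind (the names, interfaces, and the filtering check are illustrative assumptions, not code from my original submission or from the ELK report): the predictor can only pass information to the acting component through a randomized chain of reporters, and any reporter whose output stops looking like human concepts is filtered out of the pool between training steps.

```python
import random

# Toy sketch only: the component interfaces and the filtering criterion
# are assumptions for illustration, not the original submission's code.

class Reporter:
    """Maps the predictor's latent state (or a previous report) to a
    human-readable report."""
    def __init__(self, model):
        self.model = model  # any callable standing in for a learned mapping

    def report(self, message):
        return self.model(message)


def looks_like_human_concepts(text):
    # Placeholder for the filtering criterion: some check that the report
    # is still expressed in human concepts (e.g. a classifier or human
    # spot-checks). This check is assumed, not specified in the proposal.
    return isinstance(text, str) and len(text) > 0


def forward_through_reporter_chain(latent_state, reporter_pool, chain_length=3):
    """Pass the predictor's state through a randomized chain of reporters
    ("Chinese whispers") before the acting component ever sees it."""
    chain = random.sample(reporter_pool, k=chain_length)
    message = latent_state
    for reporter in chain:
        message = reporter.report(message)
        if not looks_like_human_concepts(message):
            # Reporters that drift away from human concepts are filtered
            # out of the pool between training steps.
            reporter_pool.remove(reporter)
            return None  # this pass is discarded
    return message  # the only channel into the "acting-on-the-world" part
```

The point of the randomized chain is that any hidden code would have to survive re-encoding by several independently trained reporters, which (I hoped) makes steganography harder to converge on.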

My counter-example also seems to me similar to those mentioned under the "Train a reporter that is useful to an auxiliary AI" proposal:

As mentioned above, the AI might simply use our language in a different way than it is actually intended to be used, by hiding codes within it, etc.

I am just posting this to get some feedback on where I went wrong, or on why my proposal is apparently simply not useful.

(Link to my original submission: https://docs.google.com/document/d/1oDpzZgUNM_NXYWY9I9zFNJg110ZPytFfN59dKFomAAQ/edit?usp=sharing)