A recent post by Bruce Schneier on control fraud.

Control fraud is the process of optimizing an organization for fraud: someone in a position of power uses that power to suborn the organization's own controls.
From the abstract of the paper Bruce references:
Individual “control frauds” cause greater losses than all other forms of property crime combined. They are financial super-predators. Control frauds are crimes led by the head of state or CEO that use the nation or company as a fraud vehicle. Waves of “control fraud” can cause economic collapses, damage and discredit key institutions vital to good political governance, and erode trust. The defining element of fraud is deceit – the criminal creates and then betrays trust. Fraud, therefore, is the strongest acid to eat away at trust. Endemic control fraud causes institutions and trust to become friable – to crumble – and produce economic stagnation.
Friendly AI is an important topic on this site, but what about creating friendly organizations such as companies and governments? The damage done by a government wireheaded for fraud can be enormous.

Can the same approaches used to build FAI be used to improve other types of systems?

9 comments:

I believe the traditional term is principal-agent problem. There's a lot of econ research on the topic. Not sure if FAI has any insights at all to offer here.

Thanks for pointing out the prior art.

Here is my thought on how this might relate to FAI.

If we build FAI, or an FAI modifies itself, "friendliness" will be determined within a specific context. Outside of that context, friendliness can't be guaranteed (or might be meaningless).

For example, an industrial robot may be considered safe when used within a cage designed to keep people out, but it might be very dangerous if used unprotected in an area people may pass through.

To expand the contexts in which an industrial robot is safe, we might have independent inspectors check the installation, or we might add proximity sensors to the robot.

For a safer government we might create different branches of government that can cross check each other.

Biological systems have genetic regulatory networks that can optimize an organism for survival in the face of a changing environment.

What these approaches have in common is a distribution of cross-checking controls.
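To make that pattern concrete, here is a minimal sketch in Python (all names and checks are hypothetical, invented for illustration): an action gate that only permits an operation when several independent controls all agree, so no single component can unilaterally authorize the action.

```python
from typing import Callable, List


class ActionGate:
    """Runs every registered control; a single failed check vetoes the action."""

    def __init__(self) -> None:
        self._checks: List[Callable[[], bool]] = []

    def add_check(self, check: Callable[[], bool]) -> None:
        self._checks.append(check)

    def permitted(self) -> bool:
        # Every independent, cross-checking control must pass.
        return all(check() for check in self._checks)


# Stand-ins for real sensor reads and inspection records in the
# industrial-robot example above.
def cage_door_closed() -> bool:
    return True


def no_person_in_proximity() -> bool:
    return True


def inspection_is_current() -> bool:
    return True


gate = ActionGate()
gate.add_check(cage_door_closed)
gate.add_check(no_person_in_proximity)
gate.add_check(inspection_is_current)

print("robot may move" if gate.permitted() else "robot stays halted")
```

The relevant property is that the checks are independent of one another and of the system being checked: suborning any single control still leaves the others able to veto, which is the feature shared by separate branches of government and by biological regulatory networks.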

I think that advancing our knowledge of how to build these types of regulatory networks is essential to creating practical FAI, and could also help improve other types of systems.

If the head of government (with the collusion of the head of state) is committing the fraud, how can it actually be illegal?

...because the law is not just whatever the head of government says it is?

In this case, we're talking about an invocation of Nixon's rule ("if the President does it, it's not illegal"). For a government, what matters isn't whether the law is something more than what the head of government says it is, but the expectations of the governed. A head of government may, to some extent, be able to dictate the law as they go along, but only at the risk of the governed overthrowing them.

Yeah, obviously my answer assumes that "legal" is an actual well-defined category; but since the original question seemed to as well, and in this case there was an actual simple answer within that framework that didn't require dissolution of the question, I figured it was OK. :)

Seems unlikely; you'd expect they'd change the rules to cover their tracks.

Again, that assumes that changing the relevant rules is within the powers their office grants them.

At the very least, that would require publicly stating what you're doing. Depending on the system of government, it might also require the legislature's cooperation.