One of boyd's examples is a pretty straightforward feedback loop, recognizable to anyone with even a passing familiarity with systems engineering:
Consider, for example, what’s happening with policing practices, especially as computational systems allow precincts to distribute their officers “fairly.” In many jurisdictions, more officers are placed into areas that are deemed “high risk.” This is deemed to be appropriate at a societal level. And yet, people don’t think about the incentive structures of policing, especially in communities where the law is expected to clear so many warrants and do so many arrests per month. When they’re stationed in algorithmically determined “high risk” communities, they arrest in those communities, thereby reinforcing the algorithms’ assumptions.
This system — putting more crime-detecting police officers (who have a nontrivial false-positive rate) in areas that are currently considered "high crime", and shifting them out of areas currently considered "low crime" — diverges under many sets of initial conditions and incentive structures. You don't even have to posit racism or classism to get these effects (although those may contribute to failing to recognize them as a problem); under the right (wrong) conditions, as t → ∞, the noise (that is, the error in the original believed distribution of crime) dominates the signal.
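This dynamic can be sketched as a simple stochastic simulation. The model below is illustrative, not taken from boyd's essay: it assumes a Pólya-urn-style update in which officers are allocated in proportion to cumulative arrest counts, and each officer independently makes an arrest with probability equal to the region's true crime rate. All parameter names and values are hypothetical.

```python
import random

random.seed(1)

N_REGIONS = 3
TRUE_RATE = [0.1, 0.1, 0.1]   # true crime rate is identical in every region
counts = [1, 1, 1]            # prior arrest counts seeding the initial belief
OFFICERS = 30                 # total officers available each day

for day in range(2000):
    total = sum(counts)
    for i in range(N_REGIONS):
        # Allocate officers in proportion to each region's arrest history.
        officers_here = round(OFFICERS * counts[i] / total)
        # Each officer independently makes an arrest with the true rate.
        arrests = sum(random.random() < TRUE_RATE[i]
                      for _ in range(officers_here))
        # Arrests feed back into the allocation statistic.
        counts[i] += arrests

shares = [c / sum(counts) for c in counts]
print(shares)
```

Even though every region has the same true crime rate, the long-run allocation is driven by early random fluctuations: regions that happen to record a few extra arrests at the start attract more officers, who record more arrests, and the loop locks in. The early noise, not the (uniform) signal, determines where the police end up.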
The ninth of Robert Peel's principles of ethical policing is surprisingly relevant: "To recognise always that the test of police efficiency is the absence of crime and disorder, and not the visible evidence of police action in dealing with them." [1]
There is a lot of mainstream interest in machine ethics now. Here are links to some popular articles on the topic.
By Zeynep Tufekci, a professor at the I School at UNC, on Facebook's algorithmic newsfeed curation and why Twitter should not implement the same.
By danah boyd, claiming that 'tech folks' are designing systems that implement an idea of fairness that comes from neoliberal ideology.
danah boyd (who spells her name without capitalization) runs Data & Society, a "think/do tank" that studies these issues. It recently received MacArthur Foundation funding to study the ethical and political impact of intelligent systems.
A few observations:
First, there is no mention of superintelligence or recursively self-modifying anything. These scholars are interested in how machines that are already comparatively powerful have, and in the near future will have, moral and political impact on the world.
Second, these groups are quite bad at thinking about ethics in a formal or mechanically implementable way. They mainly recapitulate the same tired tropes that have resonated through academia for literally decades. By contrast, mathematical formulation of ethical positions appears to be y'all's specialty.
Third, however indeterminate or presently unknowable the one true morality may be, progress toward implementable descriptions of various plausible moral positions could at least be an incremental step toward understanding how to achieve something better. In a possible slow-takeoff future, iterative testing and design of ethical machines with high computational power seems like low-hanging fruit that could only better inform longer-term futurist thought.
Personally, I try to work in this area and find the lack of serious formal work deeply disappointing. This post is a combined heads-up and request to step up your game. It's go time.
Sebastian Benthall
PhD Candidate
UC Berkeley School of Information