You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

A Proposal for Defeating Moloch in the Prison Industrial Complex

23 lululu 02 June 2015 10:03PM

Summary

I'd like to increase the well-being of those in the justice system while simultaneously reducing crime. I'm missing something here, but I'm not sure what. Based on comment feedback, I now think this may be a worse idea than I originally thought, though I'm still not 100% sure why.

Current State

While the prison system may not constitute an existential threat, more than 2,266,000 adults are incarcerated in the US alone at this moment. I expect that being in prison greatly decreases QALYs for those incarcerated; that further QALYs are lost to victims of crime, to family members of the incarcerated, and through the continuing effects of institutionalization and PTSD from sentences served in the current system; and that's to say nothing of the brainpower and man-hours lost to any productive use.


If you haven't read Meditations on Moloch, I highly recommend it. It's long, though, so here's the executive summary: Moloch is the personification of the forces of competition under perverse incentives - a "race to the bottom" type situation where all human values are discarded in an effort to survive. This can be solved with better coordination, but it is very hard to coordinate when perverse incentives also penalize the coordinators and reward defectors. The prison industrial complex is an example of these perverse incentives. No one thinks the current system is ideal, but incentives prevent positive change and increase absolute unhappiness.

 

  • Politicians compete for electability. Convicts can’t vote, prisons make campaign contributions and jobs, and appearing “tough on crime” appeals to a large portion of the voter base.
  • Jails compete for money: the more prisoners they house, the more they are paid and the longer they can continue to exist. This incentive is strong for public prisons and doubly strong for private prisons.
  • Police compete for bonuses and promotions, both of which are given as rewards to cops who bring in and convict more criminals.
  • Many of the inmates themselves are motivated to commit criminal acts by the small number of non-criminal opportunities for financial success available to them. After a conviction, that number of opportunities is further narrowed by background checks.

 

The incentives have fallen far out of line with human values. What can be done to bring them back into alignment with the common good?

My Proposal

Using a model that predicts recidivism at sixty days, one year, three years, and five years, predict the expected recidivism rate for the inmates of each individual prison, given average recidivism. Sixty days after release, if actual recidivism is below the predicted rate, the prison receives a sum of money equaling 25% of the predicted cost to the state of dealing with the predicted recidivism (including lawyer fees, court fees, and jailing costs). This is repeated at one year, three years, and five years.


The statistical models would be readjusted with current data every year, so if this scheme causes recidivism to drop across the board, jails would be competing against an ever-higher standard, racing to create the most innovative and groundbreaking counseling, job-skills, and restorative-justice methods so that they don't lose their edge against other prisons competing for the same money. As it becomes harder and harder to edge out the competition's advanced methods, and as the prison population shrinks, additional incentives could come from ending state contracts with the bottom 10% of prisons, or with any prison whose recidivism rate exceeds the prediction for multiple years in a row.
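Concretely, the payout rule can be sketched as follows. This is a minimal illustration, not part of the proposal: the function name, release counts, rates, and per-recidivist cost are all invented.

```python
# A minimal sketch of the proposed payout rule, with invented figures.
# Per the proposal, a prison that beats the predicted recidivism rate at a
# checkpoint receives 25% of the predicted cost of the predicted recidivism.

def recidivism_bonus(n_released, predicted_rate, actual_rate,
                     cost_per_recidivist, bonus_fraction=0.25):
    """Bonus paid at one checkpoint (60 days, then 1, 3, and 5 years)."""
    if actual_rate >= predicted_rate:
        return 0.0  # no bonus unless actual recidivism beats the prediction
    predicted_cases = predicted_rate * n_released
    return bonus_fraction * predicted_cases * cost_per_recidivist

# 500 releases, model predicts 30% recidivism, prison achieves 24%, and each
# repeat offender costs the state a (made-up) $40,000 in court and jail costs:
print(f"${recidivism_bonus(500, 0.30, 0.24, 40_000):,.0f}")  # $1,500,000
```

Tying the bonus to the *predicted* cost rather than the realized savings keeps the payout easy to compute in advance, which matters for the yearly model readjustment described above.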

 

Note that this proposal makes no policy recommendations or value judgments beyond changing the incentive structure. I have opinions on the sanity of certain laws and policies, and on the private prison system itself, but this specific proposal does not depend on them. Ideally, this will reduce some amount of partisan bickering.


Using this added success incentive, here are the modified motivations of each of the major actors.

 

  • Politicians compete for electability. Convicts still can't vote, prisons make campaign contributions, and appearing "tough on crime" still appeals to a large portion of the voter base. The politician can promise a reduction in crime without making any specific policy or program recommendations, thus shielding themselves from the criticism of being soft on crime that might come from endorsing restorative justice or psychological counselling, for instance. They get to claim success for programs that other people are in charge of designing and administering. Further, they are saving 75% of the money predicted to have been spent processing repeat offenders. Prisons love getting more money for doing the same amount of work, so campaign contributions would stay stable or go up for politicians who support reduced-recidivism bonuses.
  • Prisons compete for money. It costs the state a huge amount to house prisoners, and the net profit from housing a prisoner is small after paying for food, clothing, supervision, space, repairs, entertainment, etc. An additional 25% of that cost, with no additional expenditures, is very attractive. I predict that some amount of book-cooking will happen, but that the gains possible from cooking the books are small compared to the gains from actual improvements in the prison's programs. Small differences between prisons have the potential to make large differences in post-prison behavior. I expect having an on-staff CBT psychiatrist would make a big difference; an addiction specialist would as well. A new career field is born: expert consultants who travel from private prison to private prison and recommend the changes that would reduce recidivism at the lowest possible cost.
  • Police and judges retain the same incentives as before: bonuses, prestige, and promotions. This is good for the system, because if their incentives did not run counter to those of the prisons and jails, there would be a lot of pressure to cook the books by looking the other way on criminals until after the 60-day/1-year/5-year mark. I predict there will be a couple of scandals of cops found to be in league with prisons for a cut of the bonus, but that this method isn't very profitable: for one thing, an entire police force would have to be corrupt, and for another, criminals are mobile and can commit crimes in other precincts. Police are also motivated to work in safer areas, so the general program of rewarding reduced recidivism is to their advantage.

 

Roadmap

If a model for predicting recidivism can be shown to be highly accurate, the next step would be to create another model predicting how much the government could save by switching to a bonus system, and what reduction in crime could be expected.


Halfway houses in Pennsylvania are already receiving non-recidivism bonuses. Is a pilot project using this pricing structure feasible?

The Benefits of Closed-Mindedness

2 JosephY 03 June 2014 06:09PM

 

Every so often, I will have a discussion with someone who wants to share their new "big idea" with me. Some of them make sense. Others, less so. For example, it was recently proposed to me that everyone has a soul, and that it is the pattern of electricity in your brain - a pattern which lives on after you die. The rather scary thing is that this idea was suggested to me by a neuroscientist getting her Ph.D. Aside from wondering "what does that even mean?", one cannot help but notice the belief-as-attire in the idea.

 

And invariably, after objecting to these strange ideas, I will be told, "Don't be so closed-minded! There is so much that we don't know!"

 

Now, this is a strange form of belief as attire. It is the belief of the sophisticated person, who knows that since everything is a shade of gray, all is equal. It is very much rooted in dark side epistemology. In acknowledging their ignorance, they glory in the fundamental unknowability of the universe. "After all, if we don't know the truth, all explanations are equal! Who's to say that I am wrong? You can't disprove my theory!"

 

In general, I like to think of myself as open-minded. I support gay marriage, I am pro-choice, etc. And yet, doesn't everyone think they are open-minded? Do I discard legitimately promising ideas? Do I make too many false negative errors? I thought about it for some time and came to the conclusion: No, that idea was just plain silly.

 

Sometimes, when faced with a new idea, the instinct is to discard it out of hand. Sometimes we try not to believe new ideas, especially if they contradict long-held and deeply-rooted beliefs. And occasionally, the idea is correct, and you really do need to do a mental overhaul. However, that is often not the case.

 

A Google search for "benefits of homeopathy" turns up a few million results. However, I do not entertain homeopathy as a legitimate means of curing ailments. I have been told repeatedly of the existence of God. However, from the point where I understood the notion of "beliefs as anticipation-controllers", I have held a strictly naturalistic worldview. I am dismissive of theories that do not fit this worldview.

 

I am skeptical in the extreme of implausible ideas. That is, after all, what closed-mindedness is; the measure of open-mindedness is merely which ideas seem implausible to me. I tend to believe that if scientific education were better and more widespread, people would become more skeptical of ideas that don't make sense. Of course, there is always the danger of ending up skeptical of strange-but-true ideas, such as cryonics.

 

So then the real benefit of closed-mindedness is this: it saves you the time of having to entertain silly notions. But remember the danger in too much of a good thing! Some wacky ideas are true. A simple test is to list as many problems with the idea as you can think of in one minute. If you've listed three or more seemingly intractable problems, and the one explaining it to you cannot solve them, then being closed-minded is probably a good idea. If, however, you can only think up a couple of problems, or the one can dispel your doubts, then it may be time to look into the idea further.

Friendly AI ideas needed: how would you ban porn?

6 Stuart_Armstrong 17 March 2014 06:00PM

To construct a friendly AI, you need to be able to make vague concepts crystal clear, cutting reality at the joints even when those joints are obscure and fractal - and then implement a system that makes that cut.

There are lots of suggestions on how to do this, and a lot of work in the area. But having been over the same turf again and again, it's possible we've got a bit stuck in a rut. So to generate new suggestions, I'm proposing that we look at a vaguely analogous but distinctly different question: how would you ban porn?

Suppose you're put in charge of some government and/or legal system, and you need to ban pornography and see that the ban is implemented. Pornography is the problem, not eroticism. So a lonely lower-class guy wanking off to "Fuck Slaves of the Caribbean XIV" in a Pussycat Theatre is completely off. But a middle-class couple experiencing a delicious frisson when they see a nude version of "Pirates of Penzance" at the Met is perfectly fine - commendable, even.

The distinction between the two cases is certainly not easy to spell out, and many are reduced to saying the equivalent of "I know it when I see it" when defining pornography. In terms of AI, this is equivalent to "value loading": refining the AI's values through interactions with human decision makers, who answer questions about edge cases and examples and serve as "learned judges" for the AI's concepts. But suppose that approach were not available to you - what methods would you implement to distinguish between pornography and eroticism, and ban one but not the other? Could you make the distinction sufficiently clear that a scriptwriter would know exactly what to cut or add to a movie in order to move it from one category to the other? What if the nude "Pirates of Penzance" was at a Pussycat Theatre and "Fuck Slaves of the Caribbean XIV" was at the Met?

To get maximal creativity, it's best to ignore the ultimate aim of the exercise (finding inspiration for methods that could be adapted to AI) and just focus on the problem itself. Is it even possible to get a reasonable solution to this question - a question much simpler than designing an FAI?

A medium for more rational discussion

10 adamzerner 24 February 2014 05:20PM

It would be cool if online discussions allowed you to 1) declare your claims, 2) declare how your claims depend on each other (i.e. make a dependency tree), 3) discuss the claims, and 4) update the status of each claim by saying whether or not you agree with it, using something like the text shorthand for uncertainty to say how confident you are in your agreement or disagreement.

I think that mapping out these things visually would allow for more productive conversation. And it would also allow newcomers to the discussion to quickly and easily get up to date, rather than having to sift through tons of comments. On this note, there should also probably be something like an answer wiki for each claim to summarize the arguments and say what the consensus is.

I get the feeling that it should be flexible though. That probably means that it should be accompanied by the normal commenting system. Sometimes you don't actually know what your claims are, but need to "talk it out" in order to figure out what they are. Sometimes you don't really know how they depend on each other. And sometimes you have something tangential to say (on that note, there should probably be an area for tangential comments, or at least a way to flag them as tangential).
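The claim-and-votes structure described above can be sketched in a few lines. This is only an illustration of the idea, not a spec: the `Claim` class, its fields, and the confidence-weighted scoring rule are all assumptions.

```python
# Hypothetical sketch of the claim-dependency idea: each claim records which
# claims it depends on and tallies signed, confidence-weighted votes, so a UI
# could color it green (agreement) or red (disagreement).

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    depends_on: list = field(default_factory=list)  # parent Claim objects
    votes: list = field(default_factory=list)       # (+1 or -1, confidence)

    def vote(self, agree, confidence):
        self.votes.append((1 if agree else -1, confidence))

    def consensus(self):
        """Confidence-weighted net agreement in [-1, 1]; 0 if no votes."""
        if not self.votes:
            return 0.0
        total = sum(sign * conf for sign, conf in self.votes)
        weight = sum(conf for _, conf in self.votes)
        return total / weight

root = Claim("Mapping claims visually makes discussion more productive")
child = Claim("Newcomers catch up faster with a claim tree", depends_on=[root])
child.vote(True, 0.9)
child.vote(True, 0.6)
child.vote(False, 0.5)
print(round(child.consensus(), 2))  # (0.9 + 0.6 - 0.5) / 2.0 = 0.5
```

A dependency tree built this way also makes the "flag as fallacious" idea from the edits below cheap to add: a flag is just another kind of vote attached to a claim.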

As far as who would be interested in this: obviously the Less Wrong community would be, and I think there are definitely some other online communities that would too (Hacker News, some subreddits...).

Also, this may be speculation, but I would hope that it would develop a reputation as the most effective way to have a productive discussion - so much so that people would start saying, "go outline your argument on [name]". Maybe there'd even be pressure for politicians to do this. If so, then I think this could push society to be more rational.

What do you guys think?

 

EDIT: If anyone is actually interested in building this, you definitely have my permission (don't worry about "stealing the idea"). I want to build it, but 1) I don't think I'm a good enough programmer yet, and 2) I'm busy with my startup.

EDIT: Another idea: if you think that a statement commits an established fallacy, then you should be able to flag it (like this). And if enough other people agree, then the statement is underlined or highlighted or something. The advantage to this is that it makes the discussion less "bulky". A simple version of this would be flagging things as less than DH6. But there are obviously a bunch of other things worth flagging that Eliezer has talked about in the sequences that are pretty non-controversial.

EDIT: Here is a rough mockup of how it would look. Notes: 

- The claims should show how many votes of agreement/disagreement they got. Probably using text shorthand for uncertainty.

- The claims should be colored green if there is a lot of agreement, and red if there is a lot of disagreement.

- See edit above. Commenting in the discussion should be like this. And you should be able to flag statements as fallacious in a similar way. If there is enough agreement about the flag, the statement should be underlined in red or something.

Partly-baked ideas

3 lukeprog 02 April 2012 09:21PM

I.J. Good, from the opening of 1962's The Scientist Speculates (a collection of partly-baked ideas):

A partly-baked idea or PBI is either a speculation, a question of some novelty, a suggestion for a novel experiment, a stimulating analogy, or (rarely) a classification. It has a bakedness of p that is less than unity, or even negative. The bakedness of an idea should be judged by its potential value, the chance that it can be completely baked, its originality, interest, stimulation, conciseness, lucidity, and liveliness. It is often better to be stimulating and wrong than boring and right.

A very rough guide to the maximum length that a PBI should have is given by the formula

10^(9px/2) words

where x, the importance of the topic, is between 0 and 1. For example, the maximum length for a negatively-baked idea is less than one word. An idea can compensate in importance what it lacks in bakedness, and conversely. The formula is applicable to each sentence and to each paragraph, as well as to the whole of a contribution. For the non-specialist, the formula makes sense even when px = 1, but in this anthology px rarely exceeds 7/9.

A possible justification for the exponential or antilogarithmic form is that if an idea is developed to a certain length d, then the size of the expository tree increases roughly exponentially with d, if the multifurcation of the tree is the same at every level.

(Note that I changed the formatting a bit for readability.)
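Good's length formula is easy to evaluate directly. A quick sketch (the function name is mine, and the sample values of p and x are arbitrary):

```python
# Good's maximum-length formula for a partly-baked idea: 10^(9px/2) words,
# where p is bakedness (possibly negative) and x is importance in [0, 1].

def max_pbi_length(p, x):
    return 10 ** (9 * p * x / 2)

# A negatively-baked idea earns less than one word, as the quote says:
print(round(max_pbi_length(-0.1, 0.5), 2))  # ~0.6 words
# At the anthology's rough ceiling of px = 7/9, 10^3.5:
print(round(max_pbi_length(7 / 9, 1.0)))    # ~3,162 words
```

This makes the quoted claims concrete: any negative p gives an exponent below zero and hence a budget under one word, while px = 1 would allow about 31,600 words.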

Religious dogma as group identity

7 uzalud 28 December 2011 10:12AM

I was reading the "Professing and Cheering" article, and it reminded me of some of my own ideas about the role of religious dogmas as group-identity badges. Here's the gist of it:

Religious and other dogmas need not make sense. Indeed, they may work better if they are not logical. Logical and useful ideas pop up independently and spread easily, and widely accepted ideas are not very good badges: you need a unique idea to identify your group. It also helps if the dogma is somewhat costly, because costly ideas are hard to fake and hard to deny. People must invest in these bad ideas, so they become less likely to leave the group and confront the sunk cost. It is also harder to deny allegiance to the group afterwards, because no one in their right mind would accept an idea that bad for any other reason.

If you have a naive interpretation of the dogma, which regards it as an objective statement about the world, you will tend to question it. But when you contest the dogma, people won't judge your argument on its merits: they will see it as an in-group power struggle. Either you want to install your own dogma, which makes you a pretender, or you've accepted a competing dogma, which makes you a traitor. Even if they accept that you just don't want to yield to the authority behind the dogma, that makes you a rebel. Dogmas are simply off-limits to criticism.

Publicly displaying a dismissive attitude toward your questioning is also important. Taking the question into consideration is itself a form of treason, as it is interpreted as entertaining the option of joining you against the authority. So it's best to dismiss the heresy quickly and loudly, without thinking about it.

Do you know of some other texts which shed some light on this idea?

 

11 Less Wrong Articles I Probably Will Never Have Time to Write

23 lukeprog 23 October 2011 02:44AM

There are many Less Wrong posts I'd like to write, but I'm starting to admit there are some of them I'll probably never get around to. I need to be doing other things. If anybody wants to write up the post ideas below, go for it! You may also want to announce you're working on one or more of them in the comments, to avoid duplicate work.

In no particular order...

  1. The Value of Information. Less Wrong still doesn't have a tutorial on how to do value of information calculations. If 5+ examples from common circumstances are included, I think this could be useful to many people. One classic example is that most people don't spend even 10 hours figuring out how they should spend several years of their life while getting a degree. [Update: Vaniver wrote this one.]
  2. Gamify Boring Tasks. The potato chip lady is a classic example of how to do this. I've got several examples from my own life, and perhaps other authors have their own. "Make it a game" is something my mother might advise for getting through boring tasks, and I didn't take this advice seriously until lots of scientific literature (the 'flow' literature) gave me the same advice. This is one tiny piece of How to Beat Procrastination that could be zoomed in on in its own post.
  3. Biases in Charity. We've all heard about scope insensitivity, but several other biases affect our charitable giving. This post could basically be a summary of this article, plus a few others from the same book. [Update: done by Kaj.]
  4. Motivational Externalism. One of the classic debates in metaethics/moral psychology is between motivational externalism and motivational internalism. This debate seems to be in the process of being resolved by neuroscience, in favor of motivational externalism. I have spoken with LWers who do not know this. Much of the case is laid out here, though there are more details to be gleaned from neuroeconomics.
  5. The Dr. Evil Problem. Less Wrong has spent much discussion on the sleeping beauty problem (due largely to Adam Elga). A similar problem in decision theory / probability theory that may be worth discussing is Elga's "Dr. Evil Problem," discussed here and here.
  6. Hedonomics. Hedonomics is a particular way of combining decision research and happiness research, and has implications for scientific self-help. A beginning review is here. The field could be summarized for Less Wrong.
  7. Thinking Too Little or Thinking Too Much. Thinking errors can result both from thinking too little (heuristics and biases) and from thinking too much (overzealous decision analysis). It would be useful to have some heuristics available to recognize which type of thinking error is likely under which circumstances, as discussed by Ariely & Norton (2010).
  8. How to be a Happy Consumer. There is a ton of research on how to spend money in ways that actually make you happy, recently reviewed here. This could be summarized for Less Wrong.
  9. Informal Fallacies as Errors in Bayesian Reasoning. Just as science errs or succeeds as it agrees with probability theory, informal fallacies are justified only in so far as they agree with Bayes. A recent summary of this is here. [Update: done by Kaj.]
  10. Close-Call Counterfactuals. This is one of the biases I don't think has been discussed on Less Wrong yet. Summary. [Update: done by Kaj.]
  11. Make Better Decisions with UnBBayes. UnBBayes is a fairly mature, actively developed cross-platform decision network software. It would be useful to have a tutorial on how Less Wrongers can use it to make better (important) decisions, like these video tutorials but with better real-life examples.
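As a taste of what the tutorial in item 1 might cover: the expected value of perfect information (EVPI) is the gain from being able to decide after learning the truth versus deciding on current expectations. The sketch below uses invented probabilities and payoffs, purely for illustration.

```python
# Hypothetical value-of-information sketch: EVPI is the expected payoff of
# choosing with perfect knowledge of the scenario, minus the expected payoff
# of the best action chosen in advance. All numbers are invented.

def evpi(scenarios):
    """scenarios: list of (probability, {action: payoff}) pairs."""
    actions = scenarios[0][1].keys()
    # Best you can do committing now: pick one action, score it in expectation.
    best_now = max(sum(p * payoffs[a] for p, payoffs in scenarios)
                   for a in actions)
    # Best you could do if you learned the scenario before choosing.
    best_informed = sum(p * max(payoffs.values()) for p, payoffs in scenarios)
    return best_informed - best_now

# Two degree choices whose payoff depends on an uncertain job market:
scenarios = [
    (0.6, {"degree_A": 100, "degree_B": 60}),  # market favors A
    (0.4, {"degree_A": 20,  "degree_B": 80}),  # market favors B
]
print(evpi(scenarios))  # informed: 92; committed now: 68; EVPI = 24
```

If researching the decision for a few more hours could capture even part of that gap, the research pays for itself - which is the point of the "10 hours on a degree choice" example above.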