Comment author: benkuhn 03 June 2015 10:48:52PM 2 points [-]

To increase p'-p, prisons need to incarcerate prisoners who are less prone to recidivism than predicted. Given that past criminality is an excellent predictor of future criminality, this leads to a perverse incentive toward incarcerating those who were unfairly convicted (wrongly convicted innocents or over-convicted lesser offenders).

If past criminality is a predictor of future criminality, then it should be included in the state's predictive model of recidivism, which would fix the predictions. The actual perverse incentive here is for the prisons to reverse-engineer the predictive model, figure out where it's consistently wrong, and then lobby to incarcerate (relatively) more of those people. Given that (a) data science is not the core competency of prison operators; (b) prisons will make it obvious when they find vulnerabilities in the model; and (c) the model can be re-trained faster than the prison lobbying cycle, it doesn't seem like this perverse incentive is actually that bad.

Comment author: ChaosMote 04 June 2015 04:34:28AM 1 point [-]

Your argument assumes that the algorithm and the prisons have access to the same data. This need not be the case - in particular, if a prison bribes a judge to over-convict, the algorithm will be (incorrectly) relying on said conviction as data, skewing the predicted recidivism measure.

That said, the perverse incentive you mentioned is absolutely in play as well.

Comment author: ChaosMote 03 June 2015 01:44:15AM 20 points [-]

Great suggestion! That said, in light of your first paragraph, I'd like to point out a couple of issues. I came up with most of these by asking the questions "What exactly are you trying to encourage? What exactly are you incentivising? What differences are there between the two, and what would make those differences significant?"

You are trying to encourage prisons to rehabilitate their inmates. If, for a given prisoner, we use p to represent their propensity towards recidivism and a to represent their actual recidivism, rehabilitation is represented by p-a. Of course, we can't actually measure these values, so we use proxies: anticipated recidivism according to your algorithm and re-conviction rate (we'll call these p' and a', respectively).

With this incentive scheme, our prisons have three incentives: increasing p'-p, increasing p-a, and increasing a-a'. The first and last can lead to some problematic incentives.
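To make the decomposition concrete, here is a quick sanity check with invented numbers (the values of p, a, p', and a' below are purely hypothetical): the prison's measurable reward, p'-a', splits exactly into the three terms above, and only the middle term reflects genuine rehabilitation.

```python
# Hypothetical example values for one prisoner.
p_pred = 0.60   # p': recidivism predicted by the algorithm
p_true = 0.50   # p:  true propensity toward recidivism
a_true = 0.35   # a:  actual recidivism after release
a_conv = 0.30   # a': measured re-conviction rate

# The prison is rewarded on the only measurable quantity, p' - a'.
reward = p_pred - a_conv

# It decomposes into the three incentive terms:
#   (p'-p): inflate predictions, (p-a): rehabilitate, (a-a'): avoid re-conviction.
decomposed = (p_pred - p_true) + (p_true - a_true) + (a_true - a_conv)
assert abs(reward - decomposed) < 1e-9

print(round(reward, 2))                  # total measured "improvement"
print(round(p_true - a_true, 2))         # portion that is real rehabilitation
```

Under these made-up numbers, half of the measured reward (0.15 of 0.30) comes from the two gameable terms rather than from actual rehabilitation.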

To increase p'-p, prisons need to incarcerate prisoners who are less prone to recidivism than predicted. Given that past criminality is an excellent predictor of future criminality, this leads to a perverse incentive toward incarcerating those who were unfairly convicted (wrongly convicted innocents or over-convicted lesser offenders). If said prisons can influence the judges supplying their inmates, this may lead to judges being bribed to aggressively convict edge cases or even outright innocents, and to convict lesser offenders of crimes more strongly correlated with recidivism. (Counterpoint: We already have this problem, so this perverse incentive might not be making things much worse than they already are.)

To increase a-a', prisons need to reduce the probability of re-conviction relative to recidivism. At the comically amoral end, this can lead to prisons teaching inmates "how not to get caught." Even if that doesn't happen, I can see prisons handing out their lawyer's business cards to released inmates. "We are invested in making you a contributing member of society. If you are ever in trouble, let us know - we might be able to help you get back on track." (Counterpoint: Some of these tactics are likely to be too expensive to be worthwhile, even ignoring morality issues.)

Also, since you are incentivising improvement but not disincentivizing regression, prisons that are below-average are encouraged to try high-volatility reforms even if they would yield negative expected improvement. For example, if a reform has a 20% chance of making things much better but an 80% chance of making things equally worse, it is still a good business decision (since the latter consequence does not carry any costs).
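As a rough illustration with invented numbers (the probabilities, point swings, and per-point bounty below are all assumptions, not part of any real scheme): under a reward-only payment rule, a gamble that is negative in expectation for society can still be positive in expectation for the prison.

```python
# Hypothetical risky reform under a reward-only incentive scheme.
p_good, delta_good = 0.20, 10.0    # 20% chance: score improves by 10 points
p_bad,  delta_bad  = 0.80, -10.0   # 80% chance: score worsens by 10 points

# Expected change in actual outcomes (what society cares about):
expected_improvement = p_good * delta_good + p_bad * delta_bad
print(expected_improvement)   # -6.0: the reform is bad in expectation

# Expected payout to the prison, assuming regression costs it nothing:
payout_per_point = 1.0        # assumed bounty per point of improvement
expected_payout = (p_good * max(delta_good, 0.0)
                   + p_bad * max(delta_bad, 0.0)) * payout_per_point
print(expected_payout)        # 2.0: still profitable to gamble
```

The `max(..., 0.0)` is the whole story: truncating the downside at zero turns the payment into a free option on volatility, so the prison is paid to take risks its inmates bear.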

Comment author: [deleted] 01 June 2015 07:08:40AM 2 points [-]

3-6 months? People don't go on piling up savings indefinitely? How else do you retire? I mean... there is a state pension in the country I live in, but I would not count on it not going bust in 30 years, so I always assumed I will have what I save and then maybe the state pays a bonus.

In response to comment by [deleted] on Stupid Questions June 2015
Comment author: ChaosMote 01 June 2015 08:48:20PM 0 points [-]

You are of course entirely correct in saying that this is far too little to retire on. However, it is possible to save without being able to liquidate said savings; for example, by paying down debts. The Emergency Fund advice is that you should make a point to have enough liquid savings tucked away to tide you over in a financial emergency before you direct your discretionary income anywhere else.

Comment author: Jiro 01 June 2015 03:56:17PM 0 points [-]

How do you determine whether a seat belt cutter/window breaker is a good one? Should you test it on an old rag or something?

Comment author: ChaosMote 01 June 2015 08:31:02PM 0 points [-]

I'm afraid I don't know. You might have better luck making this question a top-level post.

Comment author: ahbwramc 31 May 2015 04:06:28AM 10 points [-]

What contingencies should I be planning for in day to day life? HPMOR was big on the whole "be prepared" theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I'd bet there's some low-hanging fruit that I'm missing out on in terms of preparedness. Any suggestions? They don't have to be big things - people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that the average person is likely to face one at some point in their life, and being prepared for it can have a very high payoff in that case. But there's also a failure mode that people fall into of focusing only on preparing for sexy-but-extremely-low-probability events (I recall a reddit thread that discussed how to survive in case an airplane that you're on breaks up, which...struck me as not the best use of one's planning time). So I'd be just as interested in mundane, everyday tips.

(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)

Comment author: ChaosMote 31 May 2015 05:08:58AM *  10 points [-]

I am by no means an expert, but here are a couple of options that come to mind. I came up with most of these by thinking "what kind of emergency are you reasonably likely to run into at some point, and what can you do to mitigate them?"

  • Learn some measure of first aid, or at least the Heimlich maneuver and CPR.

  • Keep a seat belt cutter and window breaker in your glove compartment. And on the subject, there are a bunch of other things that you may want to keep in your car as well.

  • Have an emergency kit at home, and have a plan for dealing with natural disasters (fire, storms, etc). If you live with anyone, make sure that everyone is on the same page about this.

  • On the financial side, have an emergency fund. This might not impress your friends, but given how likely financial emergencies (e.g. unexpectedly losing a job) are relative to other emergencies, this is a good thing to plan for nonetheless. I think the standard advice is to have something on the order of 3-6 months of income tucked away for a rainy day.

Comment author: RobbBB 28 May 2015 10:27:03PM 2 points [-]

I think these concerns are good if we expect the director(s) (/ the process of determining LessWrong's agenda) to not be especially good. If we do expect the director(s) to be good, then they should be able to take your concerns into account -- include plenty of community feedback, deliberately err on the side of making goals inclusive, etc. -- and still produce better results, I think.

If you (as an individual or as a community) don't have coherent goals, then exclusionary behavior will still emerge by accident; and it's harder to learn from emergent mistakes ('each individual in our group did things that would be good in some contexts, or good from their perspective, but the aggregate behavior ended up having bad effects in some vague fashion') than from more 'agenty' mistakes ('we tried to work together to achieve an explicitly specified goal, and the goal didn't end up achieved').

If you do have written-out goals, then you can more easily discuss whether those goals are the right ones -- you can even make one of your goals 'spend a lot of time questioning these goals, and experiment with pursuing alternative goals' -- and you can, if you want, deliberately optimize for inclusiveness (or for some deeper problem closer to people's True Rejections). That creates some accountability when you aren't sufficiently inclusive, makes it easier to operationalize exactly what we mean by 'let's be more inclusive', and makes it clearer to outside observers that at least we want to be doing the right thing.

(This is all just an example of why I think having explicit common goals at all is a good idea; I don't know how much we do want to become more inclusive on various axes.)

Comment author: ChaosMote 29 May 2015 01:44:46AM 0 points [-]

You make a good point, and I am very tempted to agree with you. You are certainly correct in that even a completely non-centralized community with no stated goals can be exclusionary. And I can see "community goals" serving a positive role, guiding collective behavior towards communal improvement, whether that comes in the form of non-exclusiveness or other values.

With that said, I find myself strangely disquieted by the idea of Less Wrong being actively directed, especially by a singular individual. I'm not sure what my intuition is stuck on, but I do feel that it might be important. My best interpretation right now is that having an actively directed community may lend itself to catastrophic failure (in the same way that having a dictatorship lends itself to catastrophic failure).

If there is a single person or group of people directing the community, I can imagine them making decisions which anger the rest of the community, making people take sides or split from the group. I've seen that happen in forums where the moderators did something controversial, leading to considerable (albeit usually localized) disruption. If the community is directed democratically, I again see people being partisan and taking sides, leading to (potentially vicious) internal politics; and politics is both a mind killer and a major driver of divisiveness (which is typically bad for the community).

Now, to be entirely fair, these are somewhat "worst case" scenarios, and I don't know how likely they are. However, I am having trouble thinking of any successful online communities which have taken this route. That may just be a failure of imagination, or it could be that something like this hasn't been tried yet, but it is somewhat alarming. That is largely why I urge caution in this instance.

Comment author: [deleted] 27 May 2015 09:25:50PM 6 points [-]

I loved that post. I commented on it originally, but I'll comment here too.

People probably use bicameral reasoning for a lot of things, but I doubt that too many people actually use it for thinking about politics.

I remember sitting around a campfire with my mom and grandpa in like kindergarten and listening to them discuss politics. My grandpa was talking about all these issues, and my mom was admitting to not being well-informed or having strong opinions about any of them. She said, “As long as democrats approve of abortion, nothing will convince me to vote for them.” My grandpa sighed and told her she was trapped in a religious bubble, that there was a whole world out there she was ignorant of.

But I remember thinking, “Wow! Mom is actually smart.” If democrats are murdering tons and tons of babies every year, and we only hope they go to heaven but God doesn’t actually say, who cares about money or guns or school or any of that other stuff? Even global warming was insignificant, since we believed the earth was going to end anyway. Maybe global warming would just be His way of destroying it. So based on abortion alone, I considered myself Republican until I deconverted from Christianity.

Maybe a week after deconverting, I thought, “well, I guess I should think about voting democrat now.” This was based solely on environmental concerns. There was no longer any guarantee that the earth would end anytime soon, and I’d quite rather it didn’t. All the other issues were fun to think about if I could find the time to inform myself, but I wasn’t terribly concerned about them.

So ultimately, maybe a lack of scope insensitivity could be the root cause of the strong correlation between political party and religious affiliation. Everyone blames herd mentality, which is a huge part of it, but even people who bother thinking for themselves are likely to arrive at the same conclusion as their peers... which is a nice counter for people who think that accurate beliefs aren't too important as long as people's beliefs make them happy.

Comment author: ChaosMote 28 May 2015 05:34:51AM 0 points [-]

While your family's situation is explained by a lack of scope insensitivity, I'd like to put forward an alternative. I think the behavior you described also fits with rationalization. If your family had already made up their mind about supporting the Republican party, they could easily justify it to themselves (and to you) by citing a particular close-to-the-heart issue as an iron-clad reason.

Rationalization also explains why "even people who bother thinking for themselves are likely to arrive at the same conclusion as their peers" - it just means that said people are engaging in motivated cognition to come up with reasonable-sounding arguments to support the same conclusions as their peers.

Comment author: ChaosMote 28 May 2015 05:19:24AM 3 points [-]

Interesting point! It seems obvious in hindsight that if you reward people for making predictions that correspond to reality, they can benefit both by fitting their predictions to reality and by fitting reality to their predictions. Certainly, it is an issue that comes up even in real life in the context of sports betting. That said, this particular spin on things hadn't occurred to me, so thanks for sharing!

Comment author: ChaosMote 26 May 2015 01:43:31PM 8 points [-]

I think the issue you are seeing is that Less Wrong is fundamentally an online community / forum, not a movement or even a self-help group. "Having direction" is not a typical feature of such a medium, nor would I say that it would necessarily be a positive feature.

Think about it this way. The majority of the few (N < 10) times I've seen explicit criticism of Less Wrong, one of the main points cited was that Less Wrong had a direction, and that said direction was annoying. This usually referred to Less Wrong focusing on the FAI question and X-risk, though I believe I've seen the EA component of Less Wrong challenged as well. By its nature, having direction is exclusionary - people who disagree with you stop feeling welcome in the community.

With that said, I strongly caution against trying to change Less Wrong to impart direction to the community as a whole (e.g. by having an official "C.E.O."). Organizing a sub-movement within Less Wrong for that sort of thing, on the other hand, carries much less risk of alienating people. I think that would be the most healthy direction to take it, plus it allows you to grow organically (since people can easily join/leave your movement and you don't need to get the entire community mobilized to get started).

Comment author: ChaosMote 26 May 2015 01:16:10PM 0 points [-]

I consider philosophy to be a study of human intuitions. Philosophy examines different ways to think about a variety of deep issues (morality, existence, etc.) and tries to resolve results that "feel wrong".

On the other hand, I have very rarely heard it phrased this way. Often, philosophy is said to be reasoning directly about said issues (morality, existence, etc.), albeit with the help of human intuitions. This actually seems to be an underlying assumption of most philosophy discussions I've heard. I actually find that mildly disconcerting, given that I would expect it to confuse everyone involved with substantial frequency.

If anyone knows of a good argument for the assumption above, I would really like to hear it. I've only seen it assumed, never argued.
