Comment author: 10 February 2017 03:34:16AM 0 points [-]

I intuitively feel a 50-50 chance implies a uniform distribution

Well, imagine a bet on a fair coin flip. That's a 50-50 chance, right? And yet there is no uniform distribution in sight.

Comment author: 10 February 2017 07:20:22PM 0 points [-]

So if we can distinguish between

"I know the probabilities involved and they are 50% for X and 50% for Y" and "I don't know".

Could we further distinguish between

a uniform distribution on the 0 to 1 range and "I don't know"?

Let's say a biased coin with unknown probability p of landing heads is tossed, where p is uniform on (0,1), and "I don't know" means you can't predict better than random guessing. So saying "p is 50%" doesn't matter, because it doesn't beat random.

But what if we tossed the coin twice, and I had you guess twice, before the tosses? If you get at least one guess correct, you get to keep your life. Assuming you want to play to keep your life, how would you play? The coin still has p uniform on (0,1), but it seems like "I don't know" doesn't mean the same thing anymore, because you can play in a way that better predicts the outcome of keeping your life.

You would guess (H,T) or (T,H), but avoid guessing randomly, because random guessing sometimes produces (H,H), which is really bad. If p is uniform on (0,1), then "probability of heads is 90%" is just as likely as "probability of heads is 10%", and heads at 10% is terrible for (H,H), so bad that heads at 90% doesn't make up for it.

If p is 90% or 10%, guessing (H,T) or (T,H) would result in the same small probability of dying: 9%. But (H,H) would result in a 1% chance of dying if p is 90%, and an 81% chance if p is 10%. Saying "I don't know" in this scenario doesn't feel the same as "I don't know" in the first scenario. I am probably confused.
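The intuition above is easy to check numerically. A quick Monte Carlo sketch (all names here are mine, just for illustration): average the death probability of each guessing strategy over p drawn uniformly from (0,1). Analytically, (H,T) dies with probability E[p(1-p)] = 1/6, (H,H) with E[(1-p)^2] = 1/3, and random guessing with 1/4.

```python
import random

def simulate(strategy, trials=200_000, seed=0):
    """Estimate P(death) = P(both guesses wrong) under uniform unknown bias p."""
    rng = random.Random(seed)
    deaths = 0
    for _ in range(trials):
        p = rng.random()                      # unknown bias, uniform on (0, 1)
        toss1 = 'H' if rng.random() < p else 'T'
        toss2 = 'H' if rng.random() < p else 'T'
        g1, g2 = strategy(rng)
        if g1 != toss1 and g2 != toss2:       # survive iff at least one guess is right
            deaths += 1
    return deaths / trials

ht      = lambda rng: ('H', 'T')              # hedged pair
hh      = lambda rng: ('H', 'H')              # all-in on heads
uniform = lambda rng: (rng.choice('HT'), rng.choice('HT'))  # random guessing

print(simulate(ht))       # ~1/6: E[p(1-p)] over uniform p
print(simulate(hh))       # ~1/3: E[(1-p)^2] over uniform p
print(simulate(uniform))  # ~1/4: average over all four guess pairs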

In response to The Social Substrate
Comment author: 09 February 2017 09:13:41PM *  2 points [-]

The main character's reaction is sort of unhealthy/fake - better would have been to clarify that you overheard them bantering earlier.

I did not feel that way at all; the reaction is simple and appropriate. Imagine how clunky and awkward it would be for the main character to explain that in fact you overheard the banter, and that you don't want the mom to think that you think it's OK for rude things to be said about her son in front of her. That would come off as weird.

it's not so much that wearing a gold watch isn't about knowing the time, it's that the owner's actual desires got distorted by the lens of common knowledge. Knowing that someone would be paying attention to them to infer their desires, they filtered their desires to focus on the ones they thought would make them look good. This also can easily come off as inauthentic, and it seems fairly clear why to me: if you're filtering your desires to make yourself look good, then that's a signal that you need to fake your desires or else you won't look good.

The gold watch wearer does not come off as inauthentic to me. If I had more information, like that the person was not well off, then it would. Just because the gold watch wearer wants to look good in front of me doesn't make it inauthentic, nor does it mean the wearer is faking it. There isn't much difference between the fashion-and-hygiene example and the gold watch example. Putting in the effort to look good by being well dressed and clean (presuming that is in fact true; people might think you fail at both) is the same as using money to wear a gold watch to convey wealth. All three attempt to convey some information about the person, and nothing is inauthentic if it's true. How else do you let people know you have money?

Comment author: 24 January 2017 05:55:41PM *  1 point [-]

In competitive gaming, this explains what David Sirlin calls "scrubs" - players who play by their own made-up rules rather than the true ones, and thus find themselves unprepared to play against people without the same constraints. It isn't that the scrub is a fundamentally bad or incompetent player - it's just that they've chosen the wrong paradigm, one that greatly limits their ability when they come into contact with the real world.

I suspect that in software development, trying to develop a good, bug-free program makes you a "scrub". A more reliable path to victory is to quickly make something that can be sold to the customer, and fix it later when necessary. While your competitor develops a bug-free solution, you already own the market. Furthermore, you can spend the money you made to create an improved version 2.0 of your program, so now you have both money and quality. (But maybe even this makes you a "scrub", and you should be developing another application and taking over yet another market instead.)

Comment author: 24 January 2017 07:28:19PM 1 point [-]

This is a really good example of when the organization does get it right on the big picture, even though it seems like they didn't pick the right paradigm. An observation of mine is that organizations often seem dysfunctional to many participants because those participants aren't part of the profit center or privy to the overall strategy. A company can be fully aware of dysfunction or inefficiency within, and find it acceptable, because fixing it or making someone happy isn't worth the resources.

Comment author: 24 January 2017 06:21:30PM 1 point [-]

Suggestion: sticky the welcome thread. Stickying the welcome thread to the sidebar would encourage participation/comments/content. Perhaps in the future, add emphasis on communication norms to the thread, specifically that negative reception and/or lack of reception is more obvious on LessWrong, so have thick skin and don't take it personally. I'd imagine that quality control will be what it has always been: critical comments.

Comment author: 24 January 2017 02:09:52PM 1 point [-]

Can you point at the part which you find objectionable?

Comment author: 24 January 2017 05:14:14PM 0 points [-]

Admin intervention is way too much.

Comment author: 24 January 2017 01:55:18AM 0 points [-]

Yep, it works. Don't take cigarettes away from schizophrenics!

Comment author: 24 January 2017 01:35:33PM 0 points [-]

But tobacco is still bad! E-Cigs are better.

Comment author: 23 January 2017 09:44:30PM 0 points [-]

What are your own thoughts about the problem of monopolies? Are they even a problem at all? The standard answer is that they either would not occur or would be a beneficial thing.

Comment author: 19 January 2017 05:01:26PM *  2 points [-]

Thanks for the link, that's an interesting and useful article. Updated my probabilities in a few areas. Also an amazingly civil, rational comments section given the nature of the material.

It didn't help me very much on police shootings though - a lot of way-out-of-date data that points both ways depending on area. No racial bias in shooting recorded in New York, a seriously significant effect in Tennessee. Given what I know about American racism-by-state, that's almost disappointingly obvious. (Again though, out of date - relevant laws in Memphis have changed since.)

Okay, strap in, this is a long one. First I'm going to cheerfully steal a chunk directly from one of those comments on SSC for your consideration:

Suppose that you notice that on average, green shows up twice as often as red, but you can’t see a pattern to it. If you want to maximise your winnings, should you on average bet on green twice as often as red to match the frequencies you’re seeing? No, you should strictly bet on green every time.

Similarly, if a random black person is statistically more likely to be a criminal than a white person, then a police officer’s or prosecutor’s career incentive is to focus on them.

Of course this wouldn’t fly as a practical policy. Green and red lights may be independent from your choices, but humans are not. If you completely removed police overwatch from non-black populations, this would encourage crime in those populations. Even if you didn’t care, there’s no way you’d defend yourself from accusations of blatant racism and unfairness.

Still, the incentive is there. And it’s based on math – racial prejudice not required.

Because I believe people tend to follow incentives, my current best guess is that police do over-profile (target the higher risk groups more than the actual risk differences would suggest), and they are going to, and the only question is to what extent this can be mitigated.

I'd be interested to see what you think of that.
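As an aside, the green-light claim in that quoted chunk is easy to verify numerically. A quick sketch (the function name and the 2/3 frequency are just my illustration of the quote's setup): always betting green beats "frequency matching" your bets to the observed 2:1 ratio.

```python
import random

def winnings(bet_green_prob, rounds=100_000, seed=1):
    """Fraction of rounds won when green lights up with probability 2/3
    and we bet green with the given probability each round."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(rounds):
        light = 'green' if rng.random() < 2/3 else 'red'
        bet = 'green' if rng.random() < bet_green_prob else 'red'
        wins += (bet == light)
    return wins / rounds

print(winnings(1.0))   # always green: ~2/3
print(winnings(2/3))   # frequency matching: ~(2/3)^2 + (1/3)^2 = 5/9
```

Matching the frequencies wins about 56% of the time; strictly betting green wins about 67%.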

Now, speaking for myself -

I find no flaw with your buggy-robot analogy. That would indeed result in more shootings of innocent black people without any need for racial prejudice influencing the decision to shoot, simply because of disproportionate exposure given higher crime rates.

My contention, however, is that racial prejudice is a factor in real-world police shootings/violence. So the question I ask as a half-decent wannabe rationalist is how my belief should constrain my expectation - how do I expect my world to look different from a non-racist buggy-robot world? How do I test it? Honestly I don't have a particularly satisfactory answer. Some attempts follow, feel free to skim if you're not arsed reading them.

1. I would expect the disproportionate impact of police shootings/violence upon innocent black people (innocent meaning unarmed/non-dangerous here, not necessarily innocent of any crime) to be measurably higher even when adjusted for the higher crime-rate/residential grouping effect, as it was in the Memphis study. (Though not to such a degree - it's important to state that I do believe this problem is getting better.) However, the way a shooting event or other incident of violence is recorded depends very heavily on the word of the officer involved (by default, the event is assumed to be an "assault on law enforcement" and the officer is referred to as "the victim" in the report on a fatal shooting). If racial prejudice influenced his/her decision to shoot, it could also - with or without any deliberate lying - influence his/her assessment of whether the individual shot was behaving as a threat. (You'd be amazed what some police officers will call "assault on an officer" or "resisting arrest" with a straight face - that's a problem even without touching any racial-disparity issues.) So that's also in my model, and the fact that the two effects counter each other means they're of little use to me as a measurable anticipation-constraint.

2. I'd expect a higher impact on black people of what I'd call "WTF shootings" - shootings where the victim could not have been deemed a threat by a reasonable observer. Unresisting arrestees, fleeing suspects holding nothing in their hands, kids holding toy guns shot before being given any chance to comply with verbal directions - or being given no verbal directions. Not tragic-but-understandable mistake type shootings - "itchy trigger finger" shootings that baffle reasonable explanation and appear to proceed directly from some kind of gut feeling on the part of the officer.

Interesting to think of this in relation to what seems like an odd number of reports of police shooting securely-tethered pet dogs that barked at them. I've seen an actual cop try to explain this phenomenon by saying that police often have terrifying, dangerous encounters with vicious guard dogs owned by drug dealers and the like, and develop a fear that leads them to react with instinctive aggression to a barking dog without taking the time to evaluate whether or not it's a threat. Interesting, that. I have no data for it though, so just an idea.

Anyway, I'd expect these "WTF shootings" to hit black people harder, but one can only apply the "reasonable observer" test if the incident is recorded on camera or there are a decent number of witnesses - and in this case I'm willing to admit that the political heat around this issue might lead to WTF shootings of black people being *over-represented* or identified where they don't exist. So, measurement problem here again.

3. Police shootings disproportionately affecting black people even when you only count the shooting events that occur in locations that mitigate the grouping effect. Put simply, your robot that shoots innocent people is disproportionately likely to hit a black person because it's in a black neighborhood interacting with black people for a disproportionate number of hours of the day. But real policemen are assigned to patrol specific areas - the racial mix of the people they interact with is governed by the demographics of their "beat", not overall crime stats. There are whole all-white towns in America. Pretend for a second that the crime rate among black Americans is four times that of white Americans. Now imagine a police officer in a neighborhood - or city, even - with a 3% black population. Adjust for the higher crime rate and their interactions with black citizens go up to roughly 12%, which neatly matches the actual demographics (if I remember the figures). That could be an adequate sort of "controlled environment" where interactions mirror actual population demographics. If black people are disproportionately shot within that neighborhood, I'd say that's a measurable indication that racism is playing a part. If they aren't, it indicates the opposite.
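For what it's worth, the adjustment in that thought experiment can be made precise. A one-line sketch (the function is mine; 3% and 4x are the hypothetical figures from the text): normalizing the crime-weighted share gives about 11%, close to the rough 12% from multiplying directly.

```python
def interaction_share(pop_share, relative_rate):
    """Share of crime-driven police interactions for a group, given its
    population share and its per-capita crime rate relative to everyone else."""
    weighted = pop_share * relative_rate
    return weighted / (weighted + (1 - pop_share) * 1.0)

# hypothetical: 3% of the population, 4x the per-capita crime rate
print(round(interaction_share(0.03, 4), 3))  # 0.11, i.e. ~11% of interactions
```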

But after all that, I just don't know where to find current objective data or how to look at it, and at the end of the day I'm looking for something that is capable of covering its own tracks. I'm not quite at "no sabotage is evidence of Fifth Column" yet but I'm brushing dangerously close to a universally applicable argument. Racism not evident in data? Data could be skewed by racism! Sounds dodgy but common sense says it is possible and has happened before and I can't discount it. But given my difficulty making my beliefs pay rent, I've revised my certainty down a bit just by writing all this out.

But not a lot down and here's why - the other side - the part two of this ridiculously long comment.

A certain percentage of Americans are racists. Lots are a teeny bit racist (arguably we all are), but a few are massively, viciously racist. This isn't distributed equally over the states or within them - there are clusters. If American police are a fair sample of the American population, then many of them are a wee bit racist and a few are massively racist, and there are clusters in certain areas. (By the way, I wouldn't be overly surprised if American cops were less racist on average than Americans as a whole. That still leaves a goodly few "bad apples".) How could that not impact their treatment of minority-group individuals? What negates the effect of that bias in a given situation? I'm willing to accept the impact could be neutralized to a large extent by complex structures and redundancies within, say, the court system - but in the personal, individual encounters - split-second decisions whether or not to shoot, whether or not to resist the impulse to kick someone in the head while they're on the ground - what check is there? You can say that the buggy robot is a simpler explanation, but to me it's just a shorter one. The more complex idea, as I see it, is that somehow there's far less racism among police officers than among the genpop, or that the racism that does exist is somehow prevented from impacting its targets in situations where no apparent check is provided. The absence of racism as a motive force in any instances of police misbehavior or misjudgment would need explaining to me.

I'm done. Sorry about the novel, it's been a slow day at work.

TFL;DFR - the evidence is complex, patchy and difficult to interpret but doesn't appear to be stronger for my position than the converse; however cops are people, some people are racist, therefore some cops are racist, and cops have a lot of discretion as to how and when it's appropriate to use physical force which means some whacking great racists make decisions about whether or not to shoot or brutalize black people, and I don't see how that can't equal disproportionate impact, at least in certain states or areas.

My reasoning on some parts is probably lousy with holes, so if you've slogged through this far, have at it with a hatchet, and if you haven't, I don't blame you.

Comment author: 19 January 2017 08:08:56PM *  2 points [-]

My contention, however, is that racial prejudice is a factor in real-world police shootings/violence.

I'm not disagreeing with you but I just want to add to the conversation that I think the SSC comment is closest to the issue when he/she said:

Still, the incentive is there. And it’s based on math – racial prejudice not required.

Because I believe people tend to follow incentives, my current best guess is that police do over-profile (target the higher risk groups more than the actual risk differences would suggest), and they are going to, and the only question is to what extent this can be mitigated.

Let's say you and another guard are manning a castle gate, and there is a serial killer outside in the village of 100 people. A peasant knocks and says, "Let me in." You reply, "I am sorry, I value my life more than yours; I cannot let you in, even if you are probably not the killer." The other guard says, "I despise all peasants; I would never let you in." This repeats again and again. Both you and the other guard have caused a disproportionate amount of impact on innocent peasants, and your actions are indistinguishable, yet you are not prejudiced. If you change the mind of the other guard so that he no longer hates peasants, the predicament of the poor peasants does not change: you both still refuse entry. That doesn't mean reducing prejudice can't help. Imagine a third guard who is also a peasant-hating misanthrope, but takes his hate to another level, so that when a peasant knocks, the third guard says to the others, "Hey, this guy is a peasant, let's just kill him." You and the second guard relieve the third guard of duty, and that really did help the situation of the peasants - you saved them from violent prejudice - but the problem of innocent villagers stuck outside the wall remains. Getting rid of the third guard helps, but doesn't solve everything.

Comment author: 19 January 2017 10:02:51AM 6 points [-]

2) Imagine that we replace the cops with intelligent robots who have zero feelings of racism or anything. Let's assume that the robots, despite being perfectly fair, also contain a software bug which triggers at random moments and causes them to kill a perfectly randomly selected person in sight. This itself is enough to cause a disproportionate share of the innocent victims to be black. Simply because people often live grouped by ethnicity, and some groups have a disproportionately higher crime rate, even the fair robot, which chooses completely at random which crime to investigate, would spend disproportionately more time surrounded by people of this ethnicity, which in turn would make them more likely to become victims of its software bug. Is it fair? No. Is it racist? Also no. That would be a false dilemma.
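The exposure effect in this thought experiment can be sketched in a few lines. A toy model (all numbers and names are hypothetical illustrations, not data): patrol time per neighborhood is proportional to crime volume, the bug fires at random during patrol, and the victim shares come out skewed even though the bug itself is colorblind.

```python
import random

def bug_victims(pop, crime_rate, bug_kills=10_000, seed=2):
    """Buggy-robot model: bug exposure per neighborhood is proportional to
    that neighborhood's crime volume (population share x per-capita rate);
    each bug event kills a random bystander there. Returns victim shares."""
    rng = random.Random(seed)
    groups = list(pop)
    crime = {g: pop[g] * crime_rate[g] for g in groups}   # crime volume per group
    total = sum(crime.values())
    victims = {g: 0 for g in groups}
    for _ in range(bug_kills):
        # sample which neighborhood the robot is in when the bug fires
        r, acc = rng.random() * total, 0.0
        for g in groups:
            acc += crime[g]
            if r < acc:
                victims[g] += 1
                break
    return {g: victims[g] / bug_kills for g in groups}

# hypothetical numbers: group A is 20% of the population with 4x the crime rate
shares = bug_victims(pop={'A': 0.2, 'B': 0.8}, crime_rate={'A': 4, 'B': 1})
print(shares)  # A's victim share is ~0.5, far above its 20% population share
```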

PS: Also please don't feed Eugine. You can easily guess which one is his latest account.

Comment author: 19 January 2017 06:22:33PM *  1 point [-]

Is it fair? No. Is it racist? Also no.

Agree, and I think this is a really important and overlooked implication that two tribes will talk past each other on. Unfair discrimination persists even with rational, non-racist, greedy capitalists.

A less charged example would be life insurance policies. Almost everyone would agree that mortality tables are acceptable; almost everyone can also imagine themselves getting older, and can imagine themselves as above average within their group. The insurer will rationally charge the older group a higher premium. Atypical, healthier older people within this group experience unfair discrimination, and the insurer is rationally non-prejudiced.
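To make the insurance example concrete, here is a minimal sketch (the numbers are made up for illustration): the insurer prices at the group's average mortality, so a healthier-than-average member of the group pays more than their individual risk warrants, with no prejudice anywhere in the calculation.

```python
def fair_premium(death_prob, payout):
    """Actuarially fair premium for a one-year term policy: expected payout."""
    return death_prob * payout

payout = 100_000
group_avg_risk = 0.020      # hypothetical average mortality in the older group
atypical_risk  = 0.008      # a healthier-than-average member of that group

charged = fair_premium(group_avg_risk, payout)    # what everyone in the group pays
warranted = fair_premium(atypical_risk, payout)   # what this individual's risk warrants
print(charged, warranted)   # 2000.0 800.0 - overcharged, yet no prejudice involved
```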

So when one tribe says that markets will punish racists, that doesn't fix unfair discrimination. And when the other tribe says that there is unfair discrimination, that doesn't mean there is rampant racism. I personally feel a lot of compassion toward atypical individuals within a disadvantaged group, but how could we improve?

Comment author: 18 January 2017 08:10:24PM *  0 points [-]

No, you don't. A perfect rationalist is not a sociopath, because a perfect rationalist understands what they are, and through scientific inquiry can constantly update and align themselves with reality. If every single person were a perfect rationalist, the world would be a utopia, in the sense that extreme poverty would instantly be eliminated. You're assuming that a perfect rationalist cannot see through the illusion of self and identity, and update their beliefs by understanding neuroscience and evolutionary biology. Quite the opposite: they would be seen as philanthropic, altruistic and selfless.

The reason you think so is the Straw Vulcan, your own attachment to your self and identity, and your own projections onto the world. I have talked about your behavior previously in one of my posts. Do you agree? I also gave you suggestions on how to improve, for example by meditating. http://lesswrong.com/lw/5h9/meditation_insight_and_rationality_part_1_of_3/

In another example: since you and many in society seem to have a fetish for sociopaths, yes, you'll be a sociopath, but not for yourself - for the world. By recognizing that your neural activity includes your environment and that they are not separate, that all of us evolved from stardust, and by practicing, for example, meditation or utilizing psychotropic substances, your "identity"/"I"/"self" becomes more aligned, and thus so does what your actions are directed toward. That's called Effective Altruism. (Emotions aside, selflessness speaks louder in actions!)

Edit: You changed your post after I replied to you.

[1] ETA: Before I get nitpicked to death, I mean the symptoms often associated with high-functioning sociopathy, not the clinical definition which I'm aware is actually different from what most people associate with the term.

Still applies. Doesn't matter.

Comment author: 18 January 2017 08:33:41PM 0 points [-]

If I remember correctly, username2 is a shared account, so the person you are talking to now might not be the one you previously conversed with. Just thought you should know, because I don't want you to mistake the account for a static person.
