The common narrative among experts is that when applying a crime risk model, we must choose between two kinds of unfairness, accepting one of two evils. The MIT Technology Review article that you just read, which does a great job illustrating the predicament, puts it this way: we gave you two definitions of fairness, keep the error rates comparable between groups, and treat people who are flagged in the same way. Both of these definitions are totally defensible, but satisfying both at the same time is impossible. As we've discussed for the COMPAS crime risk model, this disparity in error rates is the result of going with the second option, treating people who are flagged in the same way.

So, should we just accept that we face this exceedingly difficult choice and quibble about which option is less unfair? In this video, I'll argue that no, we shouldn't just work to determine the less problematic ethical choice between two unfavorable options. Rather, the conundrum itself spotlights certain consequences of racial inequality, and so it compels us to actively combat that inequality and to use the framework of predictive policing to do so. After all, one of predictive policing's original purposes was to address inequity; it arose in the first place largely to help decrease the effects of human bias by introducing data-driven, evidence-based objectivity. So now that we've done some analysis, the takeaway can't be that this is an unsolvable problem. No, the takeaway is a renewed clarity, an enhanced visibility of the underlying nature of today's racial inequality and its ramifications, and a new drive to do something about it.

So, let's take a step back and get some perspective on how the cycle of disadvantage works, how models can magnify and exacerbate that cycle, and then where race comes in. Law enforcement punishes defendants for having more challenging socio-economic circumstances. Risk models like COMPAS that inform bail, sentencing, and parole decisions take as input a defendant's personal background. Here's a small sample of the variables that the developers of COMPAS's model had to work with: whether any family members or friends have ever been arrested, whether parents had drug or alcohol problems, whether friends are gang members, whether there is much crime in the defendant's neighborhood, education level, employment status, and whether the defendant had barely enough money to get by.

This means that if your circumstances present certain challenges, not only would that potentially limit your access to resources and opportunities, but it would also explicitly place limits on you by way of the risk model. First, limited opportunities increase the risk of crime in the first place. Then, when you're on trial, those aspects of your background are explicitly considered in order to decide how long you stay in jail. Your challenging circumstances are used against you, since it's been observed that many people fail to overcome those circumstances. Challenging circumstances are predictive of crime, so crime risk models predict better by incorporating them. This means more flagging of those with challenging circumstances. And the fact is, more flagging of those people also means more false flagging of those people. So to the same degree that this further disadvantages the disadvantaged, it also disadvantages the Black population, which has been granted fewer opportunities and resources in the first place. Talk about adding insult to injury. Black Americans are more likely to be born into more challenging circumstances.
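To make that mechanic concrete, here's a small, hypothetical simulation, not the COMPAS model. The group labels, prevalence numbers, score formula, and threshold are all made-up assumptions, chosen only to illustrate how a single, race-blind threshold can still yield unequal false positive rates when one group more often faces challenging circumstances.

```python
# Minimal illustrative sketch (assumed numbers, not COMPAS): one shared risk
# threshold, but group B is more often born into challenging circumstances,
# which both raises true risk and raises the model's score.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.choice(["A", "B"], size=n)
p_challenging = np.where(group == "B", 0.6, 0.3)   # assumed prevalences
challenging = rng.random(n) < p_challenging

# The score leans on circumstances (plus noise); the true outcome does too.
risk_score = 0.3 + 0.4 * challenging + rng.normal(0, 0.1, n)
reoffends = rng.random(n) < (0.2 + 0.3 * challenging)

flagged = risk_score > 0.55          # one "colorblind" threshold for everyone

for g in ["A", "B"]:
    innocent = (group == g) & ~reoffends
    fpr = flagged[innocent].mean()   # share of non-reoffenders who were flagged
    print(f"Group {g}: false positive rate = {fpr:.2f}")
```

Because group B's non-reoffenders more often carry the circumstances the score leans on, more of them land above the threshold, which is exactly the false-flagging disparity described above.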
And the resulting higher proportion who are subsequently arrested then face yet another heightened risk because of those circumstances: that they'll be falsely flagged and needlessly jailed. This second magnification of risk doesn't only correlate with or intrinsically stem from challenging life circumstances; it intentionally and directly results from those circumstances, by design. Crime risk models are engineered to flag defendants based in part on having those life circumstances. As the founder of the company behind COMPAS has stated, if factors correlated with race, such as poverty and joblessness, are omitted from your risk assessment, accuracy goes down.

So the cycle of disadvantage into which government policies have historically thrust the Black population now includes a secondary cycle piled on top of the first. And there is a third cycle: crime predicts crime, and indeed the model captures this, with prior arrests also included as a factor. So a disadvantaged group will likely be arrested at a higher rate, and that larger arrested portion is then subjected to these additional cyclic effects. Now, these cycles of self-fulfilling prophecy aren't new. Even before predictive models, the common practice of human decision makers considering a subject's conviction history and life circumstances would have contributed to the same kind of cyclic perpetuation for the African American population. The difference is that now the effects are measured: the ramifications, the inequitable false positive rates, have been explicitly quantified and widely publicized.

So here's the problem with the common narrative that our mathematical hands are tied and that we must select between only two unsavory options, each compromising fairness in one way or another. This framing of the situation serves to placate, to subdue the concerns. There's no one to blame; this is just a fact of mathematics. You can't have it both ways, so knuckle down and make a tough ethical decision. Pick your poison. Does the COMPAS model deserve to be called biased? Is a model that achieves fairness in one sense and yet inadvertently generates this racial difference in false positive rates unfair? The fact is, the words biased and unfair are entirely subjective. So instead of debating whether these words apply or don't apply, let's actually combat racial inequality. Let's turn our debating towards agreeing on exactly which measures would best serve to actively do so. Rather than only studying which option doesn't worsen racial injustice as much, let's enhance predictive policing to actively decrease racial injustice.

So I do have some specific measures I'd like to suggest, and I'll list a few of them right now: in part to express my personal opinion of what could help, and in part as food for thought to help you form your own ideas, to give specific examples of the kinds of responses to the status quo that predictive policing has unearthed, so that hopefully you'll think of more measures yourself.

One, adjust for compromises to ground truth. Before crime risk models are trained, the training data itself must be calibrated by an informed estimate of the degree to which law enforcement arrests and convicts Black defendants proportionately more often, thereby artificially inflating their criminal records. It would be challenging to agree on these estimates, but how can we justify the use of crime risk models without any such adjustment? One way such an adjustment might look in practice is sketched below.
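Here's a minimal sketch of that calibration, assuming, purely for illustration, that arrest labels for defendants from over-policed communities are inflated by some estimated factor. The factor, the toy data, and the use of scikit-learn's logistic regression as a stand-in model are all hypothetical; a real estimate would have to come from careful empirical study.

```python
# Minimal sketch of measure one: down-weight arrest labels believed to be
# inflated by uneven enforcement before training a risk model.
# The 1.25 inflation factor and all data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ground_truth_weights(arrested, over_policed, inflation=1.25):
    """Return sample weights that discount arrests of the over-policed group."""
    weights = np.ones(len(arrested), dtype=float)
    inflated = arrested & over_policed
    weights[inflated] = 1.0 / inflation   # count each such arrest as 0.8 instead of 1
    return weights

# Toy data: features, arrest labels, and a flag for the over-policed group.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
arrested = rng.random(1000) < 0.3
over_policed = rng.random(1000) < 0.4

w = ground_truth_weights(arrested, over_policed)
model = LogisticRegression().fit(X, arrested, sample_weight=w)
```

Down-weighting is only one possible mechanism; relabeling or resampling the training data would be alternatives with the same intent.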
Number two, publicize crime risk models for inspection. The COMPAS model is proprietary, kept in a secretive black box and unavailable for audit or inspection. I'll address this lack of transparency as its own topic in the next video.

Three, predictively rehabilitate. With the same vigor and investment poured into applying machine learning to law enforcement, also apply machine learning to target and optimize rehabilitation for convicts, suspects, and at-risk populations. For example, uplift modeling could target those most likely to benefit from special programs.

Number four, calibrate the risk threshold to equalize the false positive rates. Although crime risk models themselves are colorblind in that race is not a direct input, this calibration would change that. When airing such an idea, it's important not to shy away from how it would work: it means applying a higher, more lenient risk threshold for Black defendants (a small sketch of how such thresholds could be computed appears at the end of this section). The reason to consider this is simply the systemic inequality that has gotten us here. So just as affirmative action initiatives serve to level an uneven distribution of opportunity and advantage, so too can we nudge a risk score that's based on life circumstances towards a less inequitable balance. This won't be an easy policy to approve or an easy argument to win, but my position would be that doing so wouldn't introduce unfairness for white defendants; rather, it would make the overall circumstances less unfair for Black defendants.

Number five, educate decision makers who use risk scores. When model scores are handed over to humans to inform their decisions, the packaging and delivery of those scores requires some very particular human engineering, and nowhere more so than with predictive policing and crime risk models. Let's rigorously train judges, parole boards, and officers to understand the pertinent caveats when they're given the calculated probability that a Black suspect, defendant, or convict will be rearrested, and empower these decision makers to incorporate those considerations into their decisions. The score has been affected by compromised ground truth in the training data. It has been indirectly influenced by the defendant's race by way of proxies, including life circumstances. As a result, the Black population is ravaged by a much higher incidence of false flags. Tell them all that.

So, that's a start, and beyond these measures that I've suggested, I encourage you to think of more that may help compensate for past and present racial injustices and the cycles of disenfranchisement that ensue.

Machine bias extends well beyond law enforcement. Now that we've become familiar with how false positive rates differ between groups, the ramifications when that happens, and the kinds of measures we can consider undertaking in response, let me remind you that this all applies to many other consequential decisions as well, including loan approvals, insurance pricing, HR decisions such as hiring and promoting, housing approvals, and medical triage. The problem of unequal false positive rates can arise for all of these, and therefore so too can the layering of new cycles of disadvantage. For example, those who most need a student loan are more likely to also have life circumstances that a predictive model will count against them, so the same magnification of racial divide applies here as well. Racial injustice has largely determined these life circumstances, and then a secondary cycle moves opportunity even more out of reach. The public spotlight on machine bias has brought new visibility to these cycles of disadvantage.
At the same time, now that we're driving decisions with predictive models, we've gained an unprecedented opportunity to advance social justice by positioning this technology and recalibrating how it's used so that it actively improves equality.
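Finally, here's the promised sketch for measure four. It assumes you already have risk scores, outcome labels, and group membership for a historical cohort, and it picks a separate threshold per group so that roughly the same share of non-reoffenders in each group gets flagged. The target rate, the variable names, and the toy data are all assumptions for illustration only.

```python
# Minimal sketch of measure four: per-group thresholds that equalize the
# false positive rate (share of non-reoffenders who get flagged).
import numpy as np

def thresholds_for_equal_fpr(scores, reoffended, groups, target_fpr=0.15):
    """Return a threshold per group giving roughly the same false positive rate."""
    thresholds = {}
    for g in np.unique(groups):
        # Scores of people in group g who did NOT reoffend.
        benign_scores = scores[(groups == g) & ~reoffended]
        # Flagging everyone above this quantile false-flags ~target_fpr of them.
        thresholds[g] = np.quantile(benign_scores, 1.0 - target_fpr)
    return thresholds

# Toy usage with made-up scores and outcomes:
rng = np.random.default_rng(2)
scores = rng.random(10_000)
reoffended = rng.random(10_000) < 0.3
groups = rng.choice(["A", "B"], size=10_000)
print(thresholds_for_equal_fpr(scores, reoffended, groups))
```

Note that the group with higher scores among its non-reoffenders ends up with the higher, more lenient threshold, which is exactly the trade-off this measure asks us to confront openly.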