A drone programmed or trained to do the same thing would be no better and no worse, in terms of fairness and public safety, than the human.

Cathy: So it's just one example, but it's, I think, a very important example to demonstrate the fact that companies are using these algorithms for HR or what have you. And that's often the framework: the setup is that some small company builds the algorithm and then licenses it to some large company or government agency, in the case of predictive policing, recidivism, or, for that matter, teacher evaluation. "Do you have addiction problems?"

She persuasively points out that computer models can encode human bias into tools, and that this sometimes exacerbates those biases. So that's why I started a company that, theoretically anyway, could be invited in to audit an algorithm. That person is then more likely to commit another crime, and so the model looks like it got it right. I am constantly surprised that people continue to choose to use Facebook daily in all its awfulness. One salient point she touches on here is the relationship between emergent technology and the practice of stop-and-frisk. This generally does more harm than good, including to the intended beneficiaries. Broaden that out to look at all the people that we're affecting, including maybe the environment. That's an externality. I saw the headline “Math is racist” at CNN, and correctly guessed there’d be something more reasonable at this blog.

Hugo: Yeah. It's kind of a sensitivity analysis.

I also think that she dramatically oversimplifies the negative impacts of WMDs across many domains, choosing very specific examples and neglecting most of the positive impacts that these new technologies might bring. But the book is short on the kind of details I personally crave and long on blanket statements and generalizations, the same kind of generalizations she denounces companies for making. They always say yes, and then you say, "What do you mean?"
And that's how I like to think of data science work in general as well. Other questions are like, "Are you a member of a gang?" It's a question that has to be asked at a much higher level, with much more access. They are also scalable, thereby amplifying any inherent biases to affect increasingly larger populations. Some will go off to one side and some to another, but most will probably cluster in the middle. It's important, and it's secret, and it's destructive. A lot of the ominous implications made in the book have to do with what MIGHT happen in the future, if certain systems become more common. Are you good at your job? And honestly, I think we need to consider both very carefully. So I'm wondering, looking back on that, if you were to rewrite it or do it again, what do you think is worth talking about now that you couldn't see then?

In this example, the PredPol system fails because it focuses on the wrong inputs, the “type and location of each crime and when it occurred” (86), and manipulates them to create a “pernicious feedback loop.” And this idea of a feedback loop in data science work, in algorithms and modeling, is one of the key ingredients of what you call a Weapon of Math Destruction, which I really look forward to getting back to.

Hugo: Well, I for one am really excited about reading this paper, and we'll include a link in the show notes to it as well.

That desperation is potentially very damaging to democracy. Clay Shirky, writing in The New York Times Book Review, said "O’Neil does a masterly job explaining the pervasiveness and risks of the algorithms that regulate our lives," while pointing out that "the section on solutions is weaker than the illustration of the problem."
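The feedback loop discussed above can be made concrete with a minimal simulation. All of the numbers and the allocation rule here are invented for illustration (this is not PredPol's actual model): two districts have identical underlying crime rates, but patrols are sent wherever *recorded* crime is highest, and patrols there log more incidents, so a small initial disparity keeps growing.

```python
# A minimal sketch of a "pernicious feedback loop" with hypothetical numbers:
# police patrol the district with the most recorded crime, and patrolling a
# district is the only way its crimes get recorded.

true_rate = [0.10, 0.10]   # identical underlying crime rates in both districts
recorded = [12.0, 10.0]    # district 0 starts out slightly over-policed
patrols_per_year = 100

for year in range(10):
    # allocate all patrols to the district with the most recorded crime
    hot = 0 if recorded[0] >= recorded[1] else 1
    # more patrols there means more incidents get observed and logged
    recorded[hot] += patrols_per_year * true_rate[hot]

print(recorded)  # [112.0, 10.0] -- a 2-incident head start snowballs to 102
```

The point of the sketch is that the growing gap in recorded crime says nothing about actual crime, which was identical in both districts throughout; the model is only measuring where it looked.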
They sued and won; the judge found that their due-process rights had been violated. And I'm sort of sitting around waiting for that to happen in every other example that I have mentioned, but also in lots and lots of other examples that are similar, where you have this secret, important decision made about you. Among the many examples of powerful formulas that O’Neil cites in her book, political polling doesn’t come up, even though this election cycle has made polling’s power more talked about than ever before. Those are the three characteristics. Often this tampering is done in the name of egalitarianism, to make things superficially appear more equal by actually making them less fair. I mean, right now it's just a complete shit show, but even in the best of times it responds better to stories of cruelty and death than it does to silent mistakes that nevertheless cost people real opportunities. And there's all sorts of evidence now that judges either ignore them entirely, or ignore them in certain cases but listen to them in others. Those who objected were regarded as nostalgic Luddites.

Hugo: You're welcome.

Unfortunately, that is not an aspect that is usually attributed to Big Data. One example of humans causing the problems of WMDs is in Chapter 3. This book is an extended essay in which the author is trying to make a point about how algorithms can be damaging to our communities. Related to this is the point that many algorithms at their core are nothing but statistically vamped-up prejudices. Furthermore, she also focuses on the key point that while many people benefit from these models, the problems embedded in them lead to many people suffering from their consequences (31). The seemingly contradictory words “fear” and “trust” leap out at me: how many other things do we both fear and trust, except perhaps fate or God? Without good evidence of either consequence, O’Neil poses a weak objection.
In all of these examples, O’Neil argues that the model is “overly simple, sacrificing accuracy and insight for efficiency” (chapter 1). And Cathy O'Neil does discuss this, especially in the conclusion, but for me the focus of the book wasn't on target. And that's the best-case scenario, when you live in a society that actually cares. This is not just a question of having learned and numerically implemented one or two well-established models, but of having had the experience that one can come up with loads of models for a given phenomenon which are mathematically consistent and at first sight might seem plausible, but later just turn out to be wrong in important aspects. So if you found out that your score would go from bad to good based on one small change, then you would know that this is a bogus algorithm. In the first segment of the book, O’Neil attempts to illuminate the shortcomings of what she calls “Weapons of Math Destruction”: large-scale, opaque models that do significant societal harm. O’Neil even notes that in a world of thousands of security cameras that send out our images for analysis, “police won’t have to discriminate as much” (101). As well as questioning the two-party system in the US, she’s also looked at how mathematics has been used in the housing and banking sectors to affect our lives, via her blog mathbabe, for more than a decade.
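The "one small change flips the score" test mentioned above is essentially a sensitivity-analysis audit, and it can be sketched in a few lines. The scoring function, the zip-code "risk" proxy, and the cutoff below are all invented for illustration; no real credit model is being reproduced here.

```python
# A toy audit in the spirit described above: perturb one input slightly and
# check whether the decision flips. Everything here is hypothetical.

def score(income, zip_risk):
    # invented opaque scorer that leans heavily on a zip-code proxy variable
    return 0.3 * (income / 100_000) + 0.7 * (1.0 - zip_risk)

def flips_on_small_change(income, zip_risk, cutoff=0.42):
    base_ok = score(income, zip_risk) >= cutoff
    # same applicant, one small change: a marginally "safer" zip code
    perturbed_ok = score(income, zip_risk - 0.05) >= cutoff
    return base_ok != perturbed_ok   # True means the decision flipped

# a tiny change to a proxy variable flips the applicant from "bad" to "good"
print(flips_on_small_change(income=40_000, zip_risk=0.6))  # True
```

An auditor invited in, as described earlier, could run exactly this kind of probe without seeing the model's internals: if trivial perturbations of inputs the applicant doesn't control swing the outcome, that is evidence the score is measuring proxies rather than the person.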