Credit: Raimond Spekking / CC BY-SA 4.0 (via Wikimedia Commons)

Government agencies increasingly rely on computer programs known as algorithms to make life-altering decisions about their citizens, and the consequences can be fatal.

David DesRoches

Consider what happened to Semaj Crosby, who went missing from her home outside Chicago in April 2017. Police organized a search for the 17-month-old, and community members joined in. But she’d apparently been right under their noses the whole time. She was found dead in her own home.

Illinois’ child welfare agency, the Department of Children and Family Services (DCFS), knew Semaj’s home life was troubled, according to an agency review. Her family had been the subject of “multiple child protection investigations” over the years – as many as 10, according to the Chicago Tribune.

On top of that, Illinois was using a privately owned algorithm meant to help the state prevent child deaths by predicting them in advance. Not only did the technology fail to predict Semaj’s death, it missed other deaths as well. Making matters worse, the software inaccurately flagged some 4,100 children as having an almost certain chance of death or injury. That flood of false alarms overwhelmed caseworkers and diverted their attention from children who actually needed it. Children like Semaj.

Illinois dropped the companies behind the algorithm, Eckerd Connects and its partner Mindshare Technology. The head of DCFS at the time, Beverly Walker, told the Tribune that the algorithm “didn’t seem to be predicting much.” But that didn’t stop Connecticut from continuing a contract with the same companies. The Nutmeg State didn’t cancel its no-bid 2016 contract until the end of 2019 – more than two years after Semaj’s death and other high-profile failures.

Poorly implemented algorithms have real-world impacts on real people. A recent analysis by the Media Freedom & Information Access Clinic at Yale Law School reveals that the problem runs deep and wide, and that it is exacerbated by arcane trade-secret protections, by state employees’ indifference to the public’s right to know, and by their ignorance of the algorithms they use. That ignorance stems in part from the fact that the algorithms are owned by private companies that refuse to reveal the formulas used to make predictions, such as a child’s risk of death or injury.

So, the question then becomes: What other algorithms is the state using? To find out, the Yale students submitted Freedom of Information Act (FOIA) requests to three different state agencies – the Department of Children and Families, the Department of Education, and the Department of Administrative Services – to see what sort of algorithms they use, how much they cost, whether they’re effective, and if there’s any oversight or review process. What did they get in return? Cue the crickets.

Beyond flagrant violations of the Connecticut FOIA, the report revealed a deeper and more complicated problem that has flown under the radar for too long: state agencies use technologies they don’t understand, often leaving marginalized communities further behind. This happens because biased people create the data these algorithms use to predict the future.

When used in child welfare cases, algorithms consider things like interactions with police or the welfare system. However, many of these data are proxies for race or poverty. For example, people are more likely to call police on a Black family and give a white family the benefit of the doubt. That interaction with police then becomes data an algorithm considers when determining risk.

Again, the data are biased because they come from biased people, and sometimes the data are even racist. A computer doesn’t know the difference between a racist complaint and a legitimate one. Both are data, and in a computer’s eyes, equally useful. Essentially, it’s “automating inequality,” as author Virginia Eubanks puts it.
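To make that concrete, here is a minimal, hypothetical sketch, not Eckerd’s or any vendor’s actual model, with invented features and weights. It shows how a risk score can launder bias: the scorer only sees counts, so a police contact generated by an unfounded, biased complaint raises the score exactly as much as one generated by a legitimate concern.

```python
# Hypothetical illustration only: a toy risk scorer with made-up features
# and weights. The point is that the model cannot tell *why* a police
# contact happened; biased and legitimate reports increment the same count.

from dataclasses import dataclass

@dataclass
class FamilyRecord:
    police_contacts: int        # includes calls made in bad faith
    welfare_interactions: int   # prior contact with the welfare system

def risk_score(record: FamilyRecord) -> float:
    # Invented weights; real systems are more complex, but the blindness
    # to the origin of each data point is the same.
    return 0.6 * record.police_contacts + 0.4 * record.welfare_interactions

# Two families whose histories look identical on paper, even though one
# family's police contacts came from unfounded, biased complaints.
flagged_unfairly = FamilyRecord(police_contacts=3, welfare_interactions=1)
flagged_for_cause = FamilyRecord(police_contacts=3, welfare_interactions=1)

print(risk_score(flagged_unfairly))   # 2.2
print(risk_score(flagged_for_cause))  # 2.2, identical scores
```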

Sadly, in this state we don’t know how algorithms are affecting our most vulnerable and marginalized communities. Are algorithms placing students in the wrong schools through the magnet system? We don’t know. Is anyone reviewing the algorithms for bias? We don’t know. Which algorithms is the state even using? We don’t know much.

It’s not entirely the state’s fault. State managers simply want to do their jobs more effectively, which everyone agrees is a good idea. But this is a case where we need to get off the train, pause, take a breath, and think deeply about the algorithms we use, how we use them, and how we understand them.

As the Yale students pointed out, access to the algorithm is only part of the solution. Access is only meaningful if enough people understand what the hell they’re looking at. The state should educate its own employees and the public about algorithms: how to understand them and, most importantly, how they’re being used, why they’re being used, and what the outcomes are.

We certainly can’t avoid using algorithms; they are here to stay. But we can at least use them with more sensitivity and intention, so we don’t worsen the problems that centuries of bias created.

David DesRoches is an Associate Professor at Quinnipiac University. He also hosts a podcast called Baffled, which explores problems in the journalism profession.