
In 2008, Tammy Dobbs, a woman with cerebral palsy, moved to Arkansas. She applied to the state to receive help from a caretaker, and after an assessment, Dobbs was allotted the maximum 56 hours of home-care visits per week. During her annual reassessment in 2016, she received a shocking verdict: despite her situation remaining the same, her care would be cut to 32 hours per week.

Dobbs described her shock and outrage as the “program she relied on for years fell out from below her.” What changed? The decision-maker. Arkansas had begun using an algorithm to determine health-care allocations. For Tammy, as for many others who relied on that government service, the algorithm was life-changing.

From Netflix’s newest recommendations to Domino’s pizza bots, algorithms are integrated into most of our day-to-day experiences. Governments, too, are capitalizing on these technologies to become faster and more efficient. Proponents of this expansion argue that algorithms make better decisions because they can weigh a wider array of factors at greater speed.

But what happens when an algorithm makes a life-altering decision? What happens if you don’t even know that an algorithm is being used? As Tammy discovered, something about this is intuitively unfair, and it points to a fundamental issue in algorithmic accountability: transparency.

As state and federal governments increasingly utilize algorithms, people should know where algorithms are used and be able to evaluate their efficacy. A large body of evidence suggests many algorithms contain implicit bias and may discriminate against people based on race, gender, or other factors. Particularly when it comes to government allocation of resources, it is critical that algorithms don’t become, as the Equal Employment Opportunity Commission warns, a “high-tech pathway to discrimination.” The first step, then, is to ensure that information is available on how algorithms are being used in state and federal agencies.

To piece together part of the puzzle, the Yale Law School Media Freedom and Information Access (MFIA) clinic submitted a Freedom of Information (FOI) request to the Connecticut Department of Administrative Services (DAS) for information about its procurement, use, and assessment of algorithms used in the hiring process for state employees and contractors.

Given general concerns about algorithmic bias in hiring, the clinic hoped to evaluate the algorithm. Unfortunately, after months of waiting, DAS shut down the request, claiming that the algorithm was off-limits. The agency said the public isn’t entitled to such information because of trade secret protections given to the algorithm’s creator.

Despite numerous studies on the dangers of algorithmic bias, the clinic’s attempt at access in this case was met with a decisive no. Sadly, this experience is not unique; trade secret exemptions are often used to keep the functioning of algorithms in the dark.

Trade secrets offer valuable protections and encourage innovation by ensuring that creators retain a financial interest in their products. As a result, many state FOI laws, including Connecticut’s, along with the federal Freedom of Information Act (FOIA), protect trade secrets from disclosure. However, this broad-brush approach is not appropriate when algorithms are involved.

Agencies use the trade secret exemption to justify withholding information about how their algorithms work. Police departments, for instance, have repeatedly invoked it to deny FOI requests for information about facial recognition technology. As a result, citizens concerned by growing evidence of implicit bias in facial recognition software cannot verify whether their criminal justice systems are discriminating against minority groups.

Of course, trade secret exemptions can serve a valuable purpose. Many governments rely on private companies to create the algorithms they use. If those algorithms could simply be made public, what incentive would companies have to build them?

Indeed, recent innovations show that algorithms can be designed to combat bias. MIT researchers have developed an exploration-based hiring algorithm that can improve both the quality and the diversity of hires. Some might argue that if removing trade secret protections would disincentivize advances like these, a lack of transparency is the necessary trade-off for efficiency and innovation.

This misses a major point, however. Innovations aimed at eliminating bias are possible precisely because troubling evidence of bias has been available to study. If algorithms are closed off from public scrutiny, identifying these errors becomes nearly impossible. In fact, without that scrutiny, private developers may have little incentive to identify and fix the issues at all.

Full algorithmic transparency may not be feasible, but accountability should be, at least where government algorithms are concerned. Many states are already taking steps to ensure it.

A recent Idaho law requires transparency for the use of risk assessment tools in the criminal justice system. The law explicitly states that “no builder or user of a pretrial assessment tool may assert trade secret … protections…” Likewise, the Algorithmic Justice and Online Platform Transparency Act, recently introduced in Congress, aims to “pull back the curtain on the secret algorithms.”

Impact assessments offer an avenue for accountability that acknowledges the value of trade secret protections while still subjecting algorithms to scrutiny. These assessments evaluate existing or proposed automated decision-making systems by examining an algorithm’s impact on fairness, bias, and justice. Laws and regulations that mandate impact assessments and public disclosure of the results, like Canada’s Directive on Automated Decision-Making, can help the public understand and audit the use of algorithms while protecting trade secret interests. In Congress, the reintroduced Algorithmic Accountability Act moves in this direction by requiring companies to assess the impacts of the algorithms they use and sell.

As the Yale clinic’s experience with DAS makes clear, Connecticut must also take steps to ensure that citizens know how algorithms are being used and have a meaningful opportunity to evaluate these tools. Algorithmic judgments are widely used by Connecticut agencies. It is vital that legislative reforms enabling transparency are adopted before a decision becomes, as it was for Tammy, life-changing.

Sruthi Venkatachalam is a Yale Law School student and a member of the Media Freedom and Information Access Clinic.