The promise and peril of algorithms in local government

November 20, 2018

Algorithms are more than the invisible force deciding what posts show up in your social media feeds. Increasingly, they’re also a factor in how cities streamline services, prioritize work, and even predict when restaurant inspections, road work, or building permits will be needed.

Algorithms have great potential to make the everyday tasks of government more efficient and effective. But they also raise a host of questions for local leaders to answer — not just on matters of policy but on ethics. Algorithms built on faulty data or biased assumptions can produce unintended consequences.

To help think through those questions, the Center for Government Excellence at Johns Hopkins University, or GovEx, recently released an algorithms toolkit for local government leaders. Produced with partners from the city of San Francisco, the Civic Analytics Network at Harvard University, and Data Community DC, the toolkit gives city leaders a framework for understanding where the risks lie when it comes to using algorithms to improve services, and some strategies for mitigating those risks.

Bloomberg Cities spoke with Andrew Nicklin and Miriam McKinney of GovEx, who led the GovEx contributions to the toolkit and its rollout in cities, to learn more about the promise and peril of algorithms.

Bloomberg Cities: Let’s start with the basics. What’s an algorithm?

Andrew Nicklin is Director of Data Practices at GovEx.

Andrew Nicklin: The basic dictionary definition of an algorithm is a sequence of steps that help you get from one thing to another. You can view our laws as a set of algorithms, or rules, for the way we should operate in society. So, if you commit a crime, you might be fined or imprisoned. That’s a simplified example.
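To make that concrete, here is a toy sketch in Python of that simplified fine-or-imprisonment rule, written as an explicit sequence of steps. The numeric severity scale is invented for illustration:

```python
# A toy "algorithm": a fixed sequence of steps mapping an offense to an
# outcome. The numeric severity scale here is invented for illustration.
def penalty(offense_severity):
    # Step 1: check how severe the offense is.
    # Step 2: apply the matching rule.
    if offense_severity >= 7:
        return "imprisonment"
    if offense_severity >= 3:
        return "fine"
    return "warning"

print(penalty(5))  # -> "fine"
```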

Or if you think about social media, algorithms influence what you see. Facebook, Twitter, and even Instagram prioritize what posts show up in your feed based upon what they think is most important to you.

Algorithms can also help us simplify, distill, and derive insights from large amounts of data that we couldn’t process without assistance.

Why do algorithms matter to local governments?

Nicklin: There are a few forces at work that are going to make the use of algorithms in government inevitable — and therefore, it’s inevitable that we’re going to have to deal with the challenges they present.

One of them is the ever-increasing volume of data that is around us every day. To cope with all this data, we need algorithms to understand it.

The second is that government leaders at every level are continuously under pressure to do more with less. That means looking for tools that can help them improve their efficiency, become more effective, and focus on the people who need help the most.

The third factor is the private sector. As technology and service providers seek to stay competitive in the marketplace, they're going to employ algorithms. Government is such a huge purchaser of technology and services that — whether it intends to or not — it's going to acquire algorithmically driven technologies and services.

Before we get to the challenges algorithms present, can we discuss the benefits? How can algorithms be helpful to local governments?

Miriam McKinney is an Analyst at GovEx.

Miriam McKinney: Algorithms can do so many helpful things. In fact, that's why I first entered the field of data science. I saw how powerful data science and algorithms could be, and sought out opportunities for them to create real, palpable social change in communities.

Cities might want to use them to speed up tasks that once took people weeks, thanks to their high computing power and efficiency. City officials might want to use them to make predictions and plan for the future, such as predicting when crime will spike in their city so that they can invest more money in the police department during those months. Or perhaps they might want to cut costs by decreasing the amount of time it takes to complete formerly manual computational tasks, overcome resource constraints, or simply improve the accuracy of projections by injecting advanced mathematical concepts into their calculations. These are all ways that algorithms can be beneficial to local governments.

Algorithms can also serve as documentation: when you write code, you'll be able to look back years later and say, "Oh, this was our process for that." Those algorithms will reflect the processes of those times. And then, of course, governments want to stay competitive, and artificial intelligence is a competitive, lucrative market in 2018. AI and its applications are referenced day in and day out in pop culture, so why wouldn't cities want to get involved in something with the potential to be so influential?

What are some examples of local governments using algorithms?

Nicklin: There are tons of examples. Allegheny County, Pa., is using an algorithm to help predict whether a Child Protective Services hotline call will need an in-person investigation. With limited resources, they need to focus on the cases that represent the greatest risk to children’s safety and wellbeing. After working extensively with the community, they implemented an algorithm in partnership with a consortium of researchers from three different universities.

Another example is Chicago, where they’re trying to get health inspectors out to restaurants a little bit faster to avoid food poisoning issues. Or New Orleans, which used Census data to predict which blocks have the highest risk of fire and went door-to-door to distribute free smoke detectors — saving lives and reducing property damage.

Then there's COMPAS, which is a risk assessment product lots of court systems across the U.S. are using. For example, it helps figure out whether people who are moving through the justice system should be released on bail or held in custody until trial. There is, frankly, a lot of evidence suggesting that its results are biased. ProPublica had a big story on this.

And therein lies one of the big risks with algorithms — the potential for them to perpetuate or even amplify bias. Why is that such a problem?

McKinney: All data has some sort of bias, because all people have bias. We each have our own opinions, practices, thoughts, and beliefs. Data points are then reflective of those opinions and biases.

For example, perhaps in 1965, a data collector in the South went around his or her neighborhood collecting housing data. Who do you think lived in that neighborhood at that time? Who do you think didn't? And why do you think that is? Given what we know about the inequity and unfairness of housing practices in that decade and region of the country, we know that those data are likely to be highly biased and not representative, simply because of how they were collected and who did the collecting.

Or, think about an algorithm that decides whether or not to release first-time offenders on bail. The algorithm was trained or created with national arrest data. Given what we know about the history and practices of policing in the United States, we should assume that the crime data is biased and highly skewed toward offenders of color. Therefore, the algorithm would likely reflect that bias and penalize offenders of color. If the data were more representative, perhaps the algorithm would not do that.

When you build an algorithm with data that is not representative, or is inaccurate, or has deep historical biases baked into it, the algorithm is simply going to reflect those biases back at you.
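A minimal sketch of that feedback effect, using invented numbers rather than any real dataset: a naive "model" trained on arrest records skewed by over-policing simply echoes the skew back as predictions.

```python
# A minimal sketch with invented data: if one group is over-represented in
# historical arrest records, a model trained on those records reproduces
# that imbalance in its predictions.

# Hypothetical records as (group, outcome) pairs; group "B" is over-policed,
# so it shows up with far more recorded offenses, regardless of behavior.
records = ([("A", 0)] * 800 + [("A", 1)] * 200 +
           [("B", 0)] * 400 + [("B", 1)] * 600)

def train(data):
    """A naive 'model': score each group by its historical offense rate."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [y for g, y in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(records)
print(model)  # {'A': 0.2, 'B': 0.6} -- the data's bias, reflected back
```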

Nicklin: Another example comes out of the book "Automating Inequality" by Virginia Eubanks. In Indiana some years ago, there was a whole revision of access to the social safety net. They implemented automation that let machines decide whether people were eligible for healthcare benefits, food stamps, and so on.

They built a system that was centered around fraud detection and preventing abuse, and its algorithm essentially rejected any application at the slightest detection of error. Eubanks estimates that in just three years, a million people were denied access to services because of that change in philosophy and the resulting technology. Had the entire philosophy instead been to enable people to get access, with some mechanism built in to correct innocent mistakes, a lot of people would've maintained access to services.

How does the new toolkit help city leaders navigate these issues?

Nicklin: The toolkit can’t really distinguish good intent from bad intent. But what it can do is create a conversation so you at least understand the broader dynamics at play. There are six characteristics of algorithms that we identify in the toolkit, and each one of them is broken down into several distinct questions. For example: Do you know where the data that is used to train this algorithm is coming from? If you don’t, then that’s a higher-risk situation than if you do. What that does is create a profile of what you need to be worried about in your use of the algorithm — and maybe where you can worry less.
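As an illustration of that profiling step, a sketch of how yes/no answers might map to a risk profile. The question areas and scoring below are hypothetical, not the toolkit's actual rubric:

```python
# A hypothetical sketch of building a risk profile from yes/no answers.
# The question set and scoring are illustrative, not GovEx's rubric.
QUESTIONS = {
    "data_provenance": "Do you know where the training data comes from?",
    "bias_review": "Has the data been checked for historical bias?",
    "explainability": "Can individual decisions be explained?",
    "human_override": "Can a person override an individual decision?",
}

def risk_profile(answers):
    """Mark any question answered 'no' (or unanswered) as higher risk."""
    return {area: ("higher risk" if not answers.get(area) else "lower risk")
            for area in QUESTIONS}

answers = {"data_provenance": True, "human_override": True}
for area, level in risk_profile(answers).items():
    print(f"{area}: {level}")
```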

And then there's a second part of the toolkit that looks at those risks and says, "OK. If you're high risk for this, then these are the kinds of things you should think about doing in order to mitigate those risks." Examples might include drawing in additional data that is less biased or more complete. Or sometimes risk mitigation might mean creating some sort of governance body, similar to an academic institutional review board, and conducting periodic reviews to assess whether an algorithm is causing harm.

McKinney: We want people to thoughtfully sit down with the toolkit, maybe print it out or have it on their laptop, and just have a conversation with colleagues and partners around those risks and talk about how to mitigate them. It’s a crucial conversation that we need to be having.

And it's not a one-time conversation. It actually needs to be an ongoing dialogue that continues, essentially, over the lifetime of an algorithm. Because, particularly in machine-learning scenarios, where an algorithm "learns" or builds upon itself multiple times, algorithms actually evolve over time as they receive more data. Therefore, the outcomes they are influencing can shift as well.
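A bare-bones sketch of that drift, with made-up numbers: a model that updates itself with each new data point behaves differently later in its lifetime, even though its code never changed.

```python
# A bare-bones sketch of drift: a model that updates with every new data
# point shifts its behavior over time, even though its code never changes.
class RunningThreshold:
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value

    @property
    def threshold(self):
        return self.total / self.count

model = RunningThreshold()
for v in (10, 12, 11):         # early data
    model.update(v)
print(model.threshold)          # 11.0
for v in (30, 32, 31):         # later, different data arrives
    model.update(v)
print(model.threshold)          # 21.0 -- same code, shifted behavior
```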

Who needs to be part of that dialogue?

Nicklin: We built this for the agents of change in government. Not necessarily data scientists, but maybe a program manager who wants to apply an algorithm to increase efficiency or create some sort of transformation.

And the idea then is that you, as the leader of that transformation, would use the toolkit to guide a conversation and bring in some of the other experts like a data scientist, like a social scientist, and anybody else you need to help get you answers to the questions the toolkit prompts.

What about these private-sector players selling solutions to government, where the algorithms may be proprietary?

Nicklin: That’s a really tough question. In the toolkit, we address whether a city has built the algorithm or acquired it. And it’s OK if a city has acquired it. The real question is: How much influence does a government have over changing it or having it adapted by a vendor to meet their needs?

In many cases, what we’re seeing is that governments have very little control over that. And often, the companies don’t talk about algorithms in their marketing. It’s just: “We can help optimize things for you. We can help automate your permitting approval so that it goes faster.” And so, it becomes sort of embedded into the DNA of a product in a way that you wouldn’t necessarily think, “Oh, this is an algorithm, this is something that I actually need to pay attention to because it may have unintended consequences.”

I don’t think we have a clear path for this yet. But what I can say is that the toolkit does have some notion of this in it, and it is just a question of how and where people choose to apply it.

You’ve made the toolkit open-source. Why?

Nicklin: As other governments start to apply this, they are going to learn from those experiences. In the ideal universe, we gather the insights, the successes, and the failures of all the people who applied it, and we help the toolkit evolve as a result.

We are also working on a theory that the toolkit, with some adaptation, could also work in the private sector, and in academic spaces. While GovEx intends to maintain it for government use, there’s no reason why, with some work, it couldn’t also be applied to projects that happen in startups or in other sectors. Plus, at GovEx we have a philosophy of working out in the open and in a collaborative way as much as we can.

Learn more about the algorithm toolkit and download it here.