Cities are ramping up to make the most of generative AI
November 8, 2023
Generative artificial intelligence promises to transform the way we work, and city leaders are taking note. According to a recent survey by Bloomberg Philanthropies in partnership with the Centre for Public Impact, the vast majority of mayors (96 percent) are interested in how they can use generative AI tools like ChatGPT—which rely on machine learning to identify patterns in data and create, or generate, new content after being fed prompts—to improve local government. Of those cities surveyed, 69 percent report that they are already exploring or testing the technology. Specifically, they’re interested in how it can help them more quickly and successfully address emerging challenges with traffic and transportation, infrastructure, public safety, climate, education, and more.
Yet even as a majority of city leaders surveyed are exploring generative AI’s potential, only a small fraction of them (2 percent) are actively deploying the technology. They indicated there are a number of issues getting in the way of broader implementation, including a lack of technical expertise, budgetary constraints, and ethical considerations like security, privacy, and transparency.
It was with the aim of getting ahead of these obstacles that close to 100 mayors came together last month in Washington, D.C., for the 2023 Mayors Innovation Studio (MIS). Part of the Bloomberg CityLab summit, this year’s MIS was designed to provide mayors hands-on experience with generative AI and the opportunity to work together and with leading experts to chart a course for how to most effectively and responsibly leverage this technology in city halls. Some of the many strategies they discussed include:
- Designate a leader who is free to explore uses and ask vital questions.
- Learn through testing the technology—and dreaming big.
- Share early guidance and guardrails without immediately imposing policies that discourage exploration.
- Understand how generative AI is already being used in city hall.
- Create a (safe) space for experimentation.
Designate a leader who is free to explore uses and ask vital questions.
The data show that mayors are eager to learn more about generative AI and how it can improve their work on behalf of residents. But they also have concerns and need help, especially with technical expertise. One key for cities, then, is to consider assigning someone to become an in-house point person on generative AI. This means identifying who on the team will make it their business to stay up to date on the technology—and explore its applications.
“You don't need someone with deep technical skills,” explained Beth Blauer, the associate vice provost for public sector innovation at Johns Hopkins University, where she oversees the Bloomberg Center for Government Excellence and the Bloomberg Center for Public Innovation. “You need someone who is curious, who is able to experiment with the technology, to lead the team, to ask the right questions, and to really connect the practice of using generative AI with the actual application of problem solving.”
Learn through testing the technology—and dreaming big.
Demystifying generative AI is central to ensuring a comprehensive and widespread understanding of its possibilities. As Harvard Business School Professor Mitchell Weiss noted, “A precondition for deciding how we should and shouldn’t use these tools is knowing how we could and couldn’t use these tools.” For that reason, cities should avoid siloing this work in an IT department. Instead, city leaders and their teams should engage with the technology directly and, perhaps, visit with local college faculty or businesses focused on generative AI applications. That way, leaders, their teams, and a cross-section of employees can develop a tangible sense of where the technology is going.
Additionally, because this technology is always changing, cities should build continual testing into their plans in order to best take advantage of any advances. “Don't freeze your image in your mind about what generative AI is based on what you see today, because it's evolving [and] probably getting better,” Weiss said.
That’s something that is core to Buenos Aires’ efforts with its chatbot Boti, which residents can text using WhatsApp to access services like bike sharing. Melisa Breda, the city’s undersecretary of evidence-based public policies, says they run constant tests because “the tools of the day go through evolution, they change.”
Share early guidance and guardrails without immediately imposing policies that discourage exploration.
Mayors and other city leaders are rightly wary of wading too deep, or too quickly, into a new technology, only to have experiments go wrong. After all, some of the data sets used to train generative AI systems may not represent reality fully and can produce biased results. Even modest missteps risk harming residents and eroding their trust and engagement.
“We could lose the trust of our residents if they feel that, when they interact with us, they’re not going to get genuine reactions from us—that they’re just getting this machine,” said Santiago Garces, the chief information officer in Boston.
Boston’s toolkit for AI use lays out three simple guidelines: first, don’t include sensitive or confidential information in prompts; next, disclose when the tool has been used so residents are aware; and finally, review AI outputs for accuracy and sensitivity. These straightforward guardrails leave plenty of room for AI use, from writing job descriptions to a still-exploratory effort to have the technology translate with distinctive diction so as to more effectively engage residents who speak languages other than English.
Understand how generative AI is already being used in city hall.
With almost every city already considering, testing, or implementing generative AI, there’s a lot that city leaders can learn from each other about the process. Providing a space for the exchange of these ideas—along with access to data and technology experts—is the focus of Bloomberg Philanthropies’ new global learning community, City AI Connect. But local leaders should also look within their own city halls for ideas and inspiration.
One approach is to conduct an inventory of current AI use across the organization. Cities can combine this with other, more real-time channels of communication. In Boston, the city went so far as to create a Slack channel to discuss AI’s potential and snags and to keep tabs on progress. One thing they learned is that residents, community members, and other civic actors are making strides of their own.
“It’s already there, people already have access to it,” Blauer said. “And so you fundamentally need to know what's out there, and how it's being used.”
Create a (safe) space for experimentation.
In Buenos Aires, the city decided to focus initially on low-risk uses of generative AI to ensure the chatbots and other tools it rolled out did not breach trust or touch on polarizing topics. This was important, given that generative AI tools can present incorrect information as fact. In other words, they can make things up.
Buenos Aires also put additional safeguards in place to prevent the technology from being prompted with—or displaying—sensitive content. “There's a first layer of security that makes sure that neither the input nor the output contains information that we don't want to deliver,” Breda said.
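Buenos Aires has not published the details of that security layer, but the idea of screening both the prompt and the response before anything reaches a resident can be sketched in a few lines. The topic list and check logic below are purely illustrative assumptions, not the city’s actual rules:

```python
# Minimal sketch of a two-sided guardrail: check both the user's input
# and the model's output against policy before anything is delivered.
# BLOCKED_TOPICS and the substring check are illustrative placeholders;
# a real system would use trained classifiers or a moderation service.

BLOCKED_TOPICS = ["password", "credit card", "home address"]

def violates_policy(text: str) -> bool:
    """Return True if the text touches a topic the chatbot won't handle."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_reply(user_input: str, generate) -> str:
    """Wrap a text generator with input and output checks."""
    if violates_policy(user_input):      # first layer: screen the prompt
        return "Sorry, I can't help with that topic here."
    reply = generate(user_input)         # call the underlying model
    if violates_policy(reply):           # second layer: screen the response
        return "Sorry, I can't share that information."
    return reply
```

The point of the pattern is that neither side is trusted alone: a harmless prompt can still elicit a problematic answer, so the response is checked independently of the input.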
Cities that lack the resources to implement such precise safeguards can consider starting their use of generative AI with low-risk tasks, such as drafting letters or materials to engage the public more rapidly and responsively. Even then, it’s critical that a staff member review AI-generated materials for accuracy and appropriateness before they are shared with the public.
Most importantly though, mayors should consider identifying clear-cut opportunities to experiment safely, rather than hold off on exploring the technology at all.
“Don't pick the hardest, most sensitive thing first,” suggested Cara LaPointe, a futurist who co-directed the Johns Hopkins Institute for Assured Autonomy. “Pick the things that are low risk, but, potentially, really high impact.”