Does the public comment system have an AI problem?


Last year, when an air quality agency in Southern California proposed a new rule to encourage consumers to buy heat pumps instead of gas heaters, the agency was flooded with 20,000 comments opposing the idea—many more than usual. “Due to the volume and nature of these submissions, South Coast AQMD had concerns about their authenticity,” says Rainbow Yeung, an agency spokesperson. The agency’s executive director got an email thanking him for his “opposition” to a rule that his own team had drafted.

To check the validity of the comments, the agency reached out to a small sample of commenters—172 people—to confirm that they’d actually sent the emails. Almost no one responded. But of the five people who did, three of them said that they didn’t know anything about the comments that had been submitted in their own names. In a separate investigation, a campaigner from the Sierra Club also started contacting people on the list; the four people he reached also said that they hadn’t sent emails.

The L.A. Times recently reported that CiviClick, a company that bills itself as a provider of “AI-powered advocacy tools,” had led the campaign to send opposition comments. The client was a public affairs consultant with ties to the gas industry.

CiviClick denies that it sent any email without consent or that it used AI to fabricate automated messages. The air quality management district is still investigating the situation; the executive director said in a recent meeting that the team was exploring more “aggressive” ways of sampling commenters, since it couldn’t draw definitive conclusions from the limited initial response.

Regardless of what happened in this case, the episode points to a broader question: If AI can now easily impersonate humans—and if comments can be submitted without someone’s knowledge—how can government agencies actually know when a public comment was written by a citizen rather than a bot?

Fake comments aren’t new. In 2017, the FCC received 22 million comments during the debate on net neutrality rules—and around 18 million of them were later found to be fake. Millions came from a single college student; half a million came from Russian email addresses. After an investigation, New York Attorney General Letitia James fined “lead generator” companies that had collectively impersonated millions of real people when they submitted comments.

AI, in theory, could make it easier to write and submit fake comments that sound real. CiviClick says that it simply uses AI to help real people personalize their comments. The platform asks users questions related to the issue—for example, how an increase in taxes would affect their budget—and then tailors an email. (The company also uses AI to predict how likely someone would be to respond to a campaign.)

CiviClick founder and CEO Chazz Clevinger says he could not speak to the specifics of the Southern California campaign but insists it meaningfully captured the authentic views of people across the region. “A homeowner in Riverside County who had recently installed a gas furnace wrote a different message than a renter in Los Angeles who was concerned about landlord compliance costs,” he tells Fast Company. “A contractor in San Bernardino County who builds new homes wrote a different message than a retiree in Orange County worried about electricity grid strain during heat waves.” He argues that the tool is simply helping people “articulate their genuine concerns,” and that they’re no less legitimate than messages written from scratch.

The Sierra Club campaigner has a different take. Even if someone consents to have AI tweak a comment, it could be problematic. “Regulators give priority to customized comments, which require time and effort to send, versus form letters or petitions which do not,” says Dylan Plummer, campaign adviser for the Sierra Club’s Clean Heat campaign. “Using AI to generate custom comments creates the illusion of engaged individuals willing to spend the time to draft a thoughtful statement on an issue, when in fact, they are engaged at the same level as someone who signed a traditional form letter or petition.”

The bigger challenge, Plummer says, is whether some public comments are attributed to people who never had anything to do with them. In another case in California, he started calling people who had submitted comments on a proposed rule at the Bay Area Air District. Another nonprofit, the Energy and Policy Institute, filed a public records request to get copies of the emails that were sent in using a different software platform called Speak4. (Speak4 declined to talk; in a San Francisco Chronicle article, the company’s client, the Bay Area Council, said that neither it nor Speak4 submitted letters without consent.)

Of the seven people that Plummer spoke with, all seven said that they had no knowledge of the email. “Some even said that they didn’t know what the Bay Area Air District was,” he says. “One woman I spoke to said, ‘Why would I ever oppose regulations to protect clean air?’”

It’s very difficult to prove whether comments are actually fake after the fact. “I had to call dozens and dozens of numbers that I was able to access through internet sleuthing,” Plummer says. Most people didn’t want to talk. “When I’m talking, I’m like, ‘Hi, my name is Dylan, and I’m investigating a potential case of identity theft.’ And their first response is, ‘Oh, this guy’s totally a scammer,’ and hang up.”

In another case in North Carolina, county commissioners received hundreds of emails in support of a new gas pipeline. But when they started to respond to some of the emails, their constituents said that they hadn’t sent them. The mass email campaign backfired. “If they’re this sloppy with their advocacy work, what does that say about our concerns about their maintenance, which is the critical thing,” one commissioner told E&E News. The board voted unanimously for a resolution that raised concerns about the project and recommended that federal officials deny a permit.

Williams, the company that wanted to build the pipeline, suggested that people might have forgotten that they sent an email. CiviClick, which facilitated the emails for the company, said the same thing about the campaign in Southern California. (It’s worth noting, however, that the air quality agency contacted supposed commenters shortly after the comments were submitted.) Clevinger also suggested that there could be “deliberate mischaracterization or misuse of our tools” by groups like the Sierra Club that “have a vested interest in discrediting its authenticity.”

When agencies do receive a flood of fake emails, it’s not clear how much that necessarily affects decision making. “What matters is not the identity of the commenter,” says Steven Balla, a political science professor who studies public commenting. “What matters is the content of the comment.” Agencies are charged with considering the technical, legal, and economic information that’s submitted to them during the comment process, he says. But they’re not adding up how many comments they got on each side, and it’s the ideas that matter more than the name of the person who submitted them.

Fake or AI-generated comments “smell icky,” he says. “But I haven’t yet been moved that, wow, this is totally changing the way policy decisions are made.” In the case of net neutrality, he argues, the millions of comments didn’t ultimately sway what the first Trump administration wanted to do.

“What I know about misinformation more generally is that misinformation generally has minimal effects on what people believe or what they do,” says Jonathan Brennan, director of the Center on Technology Policy at NYU. “I’d be far more concerned about the secondary effects of a general loss in trust—government officials saying, well, we can’t really trust any public comments, maybe they’re all fake, maybe they’re not, so we’re just going to give them less weight.” A local school board, for example, might theoretically listen more to people who show up to comment in person, making it harder for others to share their opinion if they don’t have time to attend.

Agencies can use technology to sort through digital comments and summarize duplicates, Balla says. That’s different from older mass comments that showed up on postcards. “Back in the old days in the 90s, I was talking to an agency that got at that time maybe 100,000 comments,” he says. “Those were still paper based. They literally had some warehouse space out in Rockville, Maryland, where they were basically putting the pieces of paper into piles. That was a lot of work. Now you get 100,000 comments, and 99,000 of them are going to be nearly identical. And you can figure that out in seconds.”
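Balla’s point about near-identical submissions can be made concrete with a toy example. The sketch below is purely illustrative—it is not the tooling any agency actually uses—but it shows how a basic text-similarity check can cluster form-letter-style comments in a few lines of Python; real systems would more likely rely on hashing or text embeddings at scale.

```python
# Illustrative sketch (hypothetical, not any agency's actual system):
# group near-identical public comments so duplicates can be counted together.
from difflib import SequenceMatcher

def group_near_duplicates(comments, threshold=0.9):
    """Assign each comment to the first group whose representative
    text it closely matches; otherwise start a new group."""
    groups = []  # each entry: (representative_text, list_of_members)
    for text in comments:
        for rep, members in groups:
            if SequenceMatcher(None, rep, text).ratio() >= threshold:
                members.append(text)
                break
        else:
            groups.append((text, [text]))
    return groups

comments = [
    "Please reject this rule; it will raise my heating bills.",
    "Please reject this rule, it will raise my heating bills!",
    "I support the rule because cleaner air matters to my family.",
]
for rep, members in group_near_duplicates(comments):
    print(len(members), "comment(s) like:", rep[:50])
```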

Still, if AI can easily generate a series of unique comments, the process could get harder. The Sierra Club’s Plummer suggests that something needs to change. “Astroturfing and the creation of front groups—polluting industry working to create the illusion of widespread support for a position—is nothing new,” he says. “Our big concern, though, is that these new technologies with AI proliferating is going to put these tactics on steroids and make them even more insidious and difficult to root out. And it is, in my opinion, a direct threat to democratic processes and decision making.”

At the South Coast Air Quality Management District, the board narrowly voted down the proposed rule, which would have curbed pollution, and directed the agency to send it back to a committee for further discussion. Though CiviClick touted its work in influencing the decision, it’s hard to say what impact the comments had. The rule could be revisited later, though no timeline has been set.

Now, the Sierra Club is asking California’s attorney general and LA’s district attorney to launch a fraud investigation. State senator Christopher Cabaldon also recently introduced a new bill, called “People Not Bots,” which would clarify that AI tools don’t qualify as people and shouldn’t be offering fake public input.

And at the air quality agency in Southern California, staff are exploring ways to make comment submission more secure, including portals that could offer new ways to verify that a submission is coming from a human—though that is an increasingly difficult task. “Maintaining the integrity of our public process is a top priority,” says Yeung.
