Misinformation is scaling. We need to get better at countering it


Most days, an email lands in my inbox with the promise to amplify my growth—my newsletter subscribers, the reach of my podcasts, the number of client leads, etc. I’ve gotten used to random people pitching me on their services, and some of the messages expertly prey on my insecurities as a business owner (“you’re leaving so much on the table,” et al.). I never answer any of them, but I sometimes wonder which ones might actually be legit.

A few months back, I opened up the Assistant sidebar in my AI-powered browser when I was browsing one of these emails and asked if it looked suspicious (I think “this look sus?” was the actual prompt). It replied that yes, the message, which pitched finding funding for The Media Copilot, was missing key information that an established organization would include, plus it was sent by someone with an email address from a nonexistent domain and no LinkedIn profile.

I thought about my experience as I read in Time about an MIT team that maintains an online portal chronicling harmful AI incidents. The TL;DR is that the use of AI to cause harm, whether deliberate or accidental, has risen significantly over the past few years. The incidents range from simple mistakes to deliberate violations, and the categories growing fastest involve misinformation and malicious actors. That sadly makes sense: those looking to mislead, misinform, or outright scam people have never had better tools for doing so.

One of the roles of the news media is to provide a check on misinformation, and most high-profile incidents connected with AI—like when those Biden robocalls were making the rounds—are debunked quickly. But incidents that rise to that level are the exception, not the rule. Deepfakes may never fool enough people to swing an election, but the data suggests lower-profile incidents are accumulating rapidly. At the same time, the number of journalism jobs is shrinking, and the reporters who remain have only so much bandwidth.

Skepticism isn’t strategy

As misinformation from AI scales up, it’s creating a world where everyone is increasingly skeptical of what they read, see, and hear. Last year, a paper from the National Bureau of Economic Research found that exposure to AI-driven misinformation led to less trust in media in general. But skepticism alone isn’t productive. Where journalists can help the most isn’t in trying to debunk every deepfake or scam (clearly a losing battle), but in educating their audience on how to properly channel that skepticism. 

As with my email assistant, the tools of verification—which can very quickly check sources, analyze claims, and discover supporting evidence—are now conveniently available to everyone. That’s not to say everyone should immediately trust what an AI chatbot says about a particular story. But AI is a tool, and when used as a journalistic lens, it can be a powerful one.

The key is treating AI as an assistant to skepticism, not an authority. To return to the email example, my back-and-forth with the browser accomplished in seconds what would have taken me minutes: looking up subjects, flagging inconsistencies, and suggesting follow-up questions. All of this aligns with the principles of good journalism, and by passing on some practical guidance, journalists can empower readers not just to spot bad info, but also to avoid reflexively dismissing the good info that's out there.

How to avoid the cynical trap

So what does a good “AI verification layer” look like? It starts with understanding that skepticism is a starting point, not the goal. Using it effectively means leveraging AI to both interrogate the information and avoid reinforcing your own suspicions in an unproductive way. Here are three habits, based on journalistic principles, that can be applied to any AI tool.

  1. Ask the same question twice: Many incidents where AI has caused harm started innocently enough, but eventually the user was led down some kind of rabbit hole, sometimes ending tragically. A helpful habit that might avoid this in some cases is to ask the same question a second time, rephrased or with different framing. Check how the answers compare, following up on any significant inconsistencies.
  2. Force specificity: All good interviewers apply this one in targeted fashion. When a person makes a broad claim or declaration, ask the AI to make it more specific. What supports that claim? Who was involved, what is the underlying evidence, when did it happen? Treat any vague answers as a red flag.
  3. Spot-check sources: If a claim is based on a link on the internet, it should not take long to verify it. When you can't verify something in a minute or two, that should make you think twice, though keep in mind there can be reasons some true claims are difficult to verify (anonymous sources, for example).
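For the technically inclined, the "ask twice" habit can even be mechanized. Here is a minimal sketch in Python, with the caveat that `ask_model` is a hypothetical stand-in for whatever chatbot API you actually use (the canned answers below are purely illustrative); the comparison uses the standard library's `difflib` to flag answers that diverge significantly.

```python
import difflib

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real chatbot call; swap in your
    # provider's API client. Canned answers here for illustration only.
    canned = {
        "Is this funding offer legitimate?":
            "The sender's domain does not exist, which is a red flag.",
        "Would you trust this funding pitch?":
            "Key details are missing and the domain looks unregistered.",
    }
    return canned.get(prompt, "")

def ask_twice(question: str, rephrased: str, threshold: float = 0.5) -> dict:
    """Ask the same question two ways and flag significant inconsistencies."""
    first = ask_model(question)
    second = ask_model(rephrased)
    # Crude lexical similarity between the two answers, in [0.0, 1.0].
    similarity = difflib.SequenceMatcher(
        None, first.lower(), second.lower()
    ).ratio()
    return {
        "answers": (first, second),
        "similarity": round(similarity, 2),
        "consistent": similarity >= threshold,
    }

result = ask_twice(
    "Is this funding offer legitimate?",
    "Would you trust this funding pitch?",
)
print(result["similarity"], result["consistent"])
```

A low similarity score doesn't prove either answer wrong, of course; it's simply the cue to follow up on the inconsistency, exactly as the habit above prescribes.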

The world is increasingly fuzzy. Between AI hallucinations, deliberate disinformation, and the prevalence of meme culture, it’s understandable that everyone’s adopted a lot more skepticism of what they see. Without principles and habits to guide you to good information, though, that skepticism will too often slide into cynicism. Journalists might not be able to verify all the things we want them to, but their principles can help a new generation of news consumers tell the good from the bad—at scale.
