AI CEOs are promising all-powerful superintelligence. Government insiders have thoughts 


Tech giants are making grand promises for the AI age. The technology, we are told, might discover a new generation of medical interventions, and possibly answer some of the most difficult questions facing physics and mathematics. Large language models could soon rival human intellectual abilities, they claim, and artificial superintelligence might even best us. This is exciting, but also scary, they say, since the rise of AGI, or artificial general intelligence, could pose an uncontrollable threat to the human species. 

U.S. government officials working with AI, including those charged with implementing and regulating the technology inside federal agencies, are taking a different tack. They acknowledge that the government still lags the private sector in adopting LLM technology, and that there is good reason for agencies to speed up adoption.

Still, many question the hyperbolic terminology AI companies use to promote the technology. And they warn that the biggest dangers AI presents are not those of an AGI that rivals human abilities, but more immediate concerns: unreliability, and the risk that LLMs are eventually used to undercut democratic values and civil rights.

Fast Company spoke with seven people who’ve worked at the intersection of government and technology on the hype behind AI—and what excites and worries them about the technology. Here’s what they said. 

Charles Sun, former federal IT official

Sun, a former employee at the Department of Homeland Security, believes AI is, yes, overhyped—especially, he says, when people claim that AI is “bigger than the internet.” He describes the technology simply as “large-scale pattern recognition powered by statistical modeling,” noting “AI’s current wave is impressive but not miraculous.”

Sun argues that the tech is “an accelerator of human cognition, not a replacement for it. I prefer to say that AI will out-process us, not outthink us. Systems can already surpass human capacity in data scale and speed, but intelligence is not a linear metric. We created the algorithms, and we define the rules of their operation.

“AI in government should be treated as a critical-infrastructure component, not a novelty,” he continues. “The danger isn’t that AI becomes ‘too intelligent,’ but that it becomes too influential without accountability. The real threat is unexamined adoption, not runaway intelligence.”

Former White House AI official 

“I was worried at the beginning of this . . . when we decided that instead of focusing on mundane everyday use cases for workers, we decided at a national security front that we need to wholesale replace much of our critical infrastructure to support and be used by AI,” says the person, who spoke on background. “That creates a massive single point of failure for us that depends largely on compute and data centers never failing, and models being impervious to attacks—neither of which I think anyone, no matter how technical they are or not, would place their faith in.”

The former official says they’re not worried about AGI, at least for now: “Next token prediction is not nearly enough for us to model complex behaviors and pattern recognition that we would qualify as general intelligence.”  

David Nesting, former White House AI and cybersecurity adviser

“AI is … fantastic at getting insights out of large amounts of data. Those who have AI will be better capable of using data to make better decisions, and to do so in seconds rather than days or weeks. There’s so much data about us out there that hasn’t really hurt us because nobody’s ever really had the tools to exploit it all, but that’s changing quickly,” Nesting says. “I’m worried about the government turning AI against its own people, and I’m worried about AI being used to deprive people of their rights in ways that they can’t easily understand or appeal.”

Nesting adds: “I’m also worried about the government setting requirements for AI models intended to eliminate ‘bias,’ but without a clear definition of what ‘bias’ means. Instead, we get AI models biased toward some ‘official’ ideological viewpoint. We’ve already seen this in China: Ask DeepSeek about Tiananmen Square. Will American AI models be expected to maintain an official viewpoint on the January 6th riots?

“I think we’re going to be arguing about what AGI means long after it’s effectively here,” he continues. “Computers have been doing certain tasks better than people for nearly a century. AI is just expanding that set of tasks more quickly. 

“I think the more alarming milestone will be the point at which AI can be exploited by people to increase their own power and harm others. You don’t need AGI for that, and in some ways we’re already there,” Nesting says. “Americans today are increasingly and unknowingly interacting online with fake accounts run by AI that are indistinguishable from real people—even whole communities of people—confirming every fear and anxiety they have, and validating their outrage and hatred.”

Abigail Haddad, former member of the AI Corps at DHS 

The biggest problem right now, Haddad argues, is that AI is actually being underused in government. An immense amount of work went into making these tools available inside federal agencies, she notes, but what’s available in government still trails what’s available commercially. There are concerns about LLMs training on data, but the tools agencies can use run on cloud systems that meet federal cybersecurity standards.

“People who care about public services and state capacity should be irate at how much is still happening manually and in Excel,” she says. 

Tony Arcadi, former chief information officer of the Treasury Department 

“Computers are already smarter than us. It’s a very nebulous term. What does that really consist of? At least my computer is smarter than me when it comes to complex mathematical calculations,” Arcadi says. “The sudden emergence of AGI or the singularity, there’s this thing called Roko’s basilisk, where the AI will go back in time and—I don’t remember the exact thing—but kill people who interfered with this development. I don’t really go for all of that.”

He adds: “The big challenge that I see leveraging AI in government is less around, if you will, the fear factor of the AI gone rogue, but more around the resiliency, reliability, and dependability of AI, which, today, is not great.” 

Eric Hysen, former chief information officer at DHS

When asked a few months ago whether AI might become so powerful that the process of governing could be offloaded to software, Hysen shared the following: “I think there is something fundamentally human that Americans expect about their government. . . . Government decision-making, at some level, is fundamentally different than the way private companies make decisions, even if they are of very similar complexity.”

Some decisions, he added, “we’re always going to want to be fundamentally made by a human being, even if it’s AI-assisted in a lot of ways. Longer term, it’s going to look like heavy use of AI that, for a lot of key things, will still ultimately feed to human decision makers.”

Arati Prabhakar, former science and technology adviser to President Biden

Prabhakar, who led the Office of Science and Technology Policy under President Joe Biden, is concerned that the conversation about AGI is being used to influence policy around the technology more broadly. She’s also skeptical that the technology is as powerful as its promoters predict. “I really feel like I’m in a freshman dorm room at 2 in the morning when I start hearing those conversations,” she says.

“Your brain is using 20 or 25 watts to do all the things that it does. That includes all kinds of things that are way beyond LLMs. [It’s] about 25 watts compared to the mega data centers that it takes to train and then to use AI models. That’s just one hint that we are so far from anything approximating human intelligence,” she argues. “Most troubling is it puts the focus on the technology rather than the human choices that are being made in companies by policymakers about what to build, where to use it, and what kind of guardrails really will make it effective.”

This story was supported by the Tarbell Center for AI Journalism.

