5 reasons AI-assisted coding could break your business


AI is helping teams build software and tools faster than ever—but that doesn’t mean we’re building smarter. I’ve seen entire prototypes spin up in a day, thanks to AI coding assistants. But when you ask how they were built, or whether they’re secure, you get a lot of blank stares.

That’s the gap emerging now: between what’s possible with AI and what’s actually ready to scale.

What looks like progress can quickly become a liability, especially when no one’s quite sure how the thing was built in the first place.

Before you go all-in on AI-assisted coding, check these five fault lines:

1. You can’t govern what you can’t see.

Perhaps the most overlooked risk of AI-assisted coding isn’t technical, it’s operational. In the rush to deploy AI tools, many companies have unintentionally created a layer of “shadow engineering.” Developers use these tools without official policies or visibility, leaving leaders in the dark about what’s being built and how.

As Mark Curphey, cofounder of Crash Override, told me: “AI is accelerating everything. But without insight into what’s being built, by whom, or where it’s going, you’re scaling chaos with no controls.”

That’s why visibility can’t be an afterthought; it’s what makes both governance and acceleration possible. Platforms like Crash Override are designed to surface how AI is being used across the engineering org, offering a real-time view into what’s being generated, where it’s going, and whether it’s introducing risk or value.

And that visibility doesn’t exist in isolation. Tools like Jellyfish help connect development work to business goals, while Codacy monitors code quality. But none of these tools can do their job well if you don’t know what’s happening under the hood.

Visibility isn’t about surveillance, it’s about building on a solid foundation.

2. Productivity is up. So is your risk exposure.

A 2025 study by Apiiro, an application security firm, found that developers using GenAI tools are shipping 3 to 4 times more code. But they’re also generating 10 times more security risks.

These weren’t just syntax errors. The increase included hidden access risks, insecure code patterns, exposed credentials, and deep architectural flaws—issues far more complex and costly to resolve over time.
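To make those categories concrete, here’s a minimal sketch of two of the most common flaws named above: string-built SQL queries and credentials embedded in source. The table schema, function, and variable names are invented for illustration, not taken from the Apiiro study.

```python
import os
import sqlite3

# An AI assistant asked to "fetch a user by name" often emits something like:
#
#   query = "SELECT * FROM users WHERE name = '" + name + "'"  # SQL injection
#   API_KEY = "sk-live-abc123"                                 # exposed credential
#
# The safer pattern parameterizes queries and keeps secrets out of source:

def fetch_user(conn, name):
    # Parameterized query: the driver escapes `name`, blocking injection.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()

# Secrets come from the environment, never the codebase.
API_KEY = os.environ.get("SERVICE_API_KEY", "")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

print(fetch_user(conn, "alice"))         # (1, 'alice')
print(fetch_user(conn, "x' OR '1'='1"))  # None: the injection attempt matches no rows
```

The string-concatenation version would return every row for that second input; the parameterized version treats it as a literal name and safely returns nothing.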

3. AI-generated code is a potential legal risk.

Because AI coding tools are trained on vast libraries of public code, they can generate snippets governed by restrictive open-source licenses. That raises important compliance questions, especially with licenses like GPL or AGPL, which could, in theory, require companies to open-source any software built on top of that output.

But it’s worth clarifying: No company has been sued (yet) for using AI-generated code. The lawsuits we’ve seen (like the GitHub Copilot class action) have targeted the AI toolmakers, not the teams using their output. And the majority of the claims in that case were ultimately thrown out.

Still, this is a fast-evolving area with real implications. AuditBoard’s 2025 study found that 82% of enterprise organizations were already deploying AI tools, but only 25% reported having any sort of official governance in place.

That disconnect may not be a courtroom issue today, but it’s a visibility and audit issue that leaders can’t afford to ignore.

4. Speed is great, until only one person knows how it works.

The “bus factor” has long described a worst-case scenario: What happens if the one person who knows how your software works suddenly disappears?

“Powered by AI, an average developer becomes 100 times more productive. A superstar becomes 1,000 times,” Curphey noted. “Now imagine two of them are pushing all of that code into production. If they disappear, the company’s in serious trouble.”

But the goal isn’t zero risk—it’s coverage. Just like test cases help ensure software is resilient, teams need to ensure knowledge and ownership are distributed. That includes understanding who’s building what, where the AI is involved, and how those systems will be maintained over time.

Ironically, GenAI can help with this. It can surface patterns, identify gaps, and map ownership in ways traditional tooling can’t. More than just a productivity boost, it can be a tool for reducing fragility across your team and your codebase.
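The “bus factor” idea above can be estimated directly from version-control history. A minimal sketch, assuming you’ve already parsed `(file, author)` pairs from something like `git log --name-only` (the function and data here are hypothetical, not part of any tool mentioned in this article):

```python
from collections import Counter

def bus_factor(file_authors, threshold=0.5):
    """Smallest number of authors who together account for `threshold`
    of all recorded file changes. A low number means knowledge is
    concentrated in a few heads."""
    counts = Counter(author for _, author in file_authors)
    total = sum(counts.values())
    covered, n = 0, 0
    for _, c in counts.most_common():
        covered += c
        n += 1
        if covered / total >= threshold:
            return n
    return n

# Toy history: one developer touches nearly everything.
changes = [("app.py", "alice"), ("app.py", "alice"),
           ("db.py", "alice"), ("ui.py", "bob")]
print(bus_factor(changes))  # 1: a single departure puts most of the codebase at risk
```

A result of 1 or 2 on a production repository is exactly the fragility Curphey describes; spreading ownership raises the number.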

5. It’s easy to end up with “software slop.”

Good, scalable AI-assisted code starts with the prompt.

AI will generate exactly what you ask for. But if you don’t fully understand the technical constraints, or the risks you’re overlooking, it might give you code that looks good but has critical flaws in security or performance under the hood.

You certainly don’t have to be a developer to use these tools well. But you do need to know what you don’t know, and how to account for it. As Curphey notes in a company blog post, “If you wouldn’t accept that level of vagueness from a junior engineer, why would you accept it from yourself when prompting?”

Otherwise, you’re moving fast and creating a kind of digital brain rot: systems that degrade over time because no one really understands how they were built.

FROM VIBE CHECK TO REALITY CHECK

The takeaway: AI may accelerate output, but it also accelerates risk. Without rigorous review and governance, you may be shipping code that functions, but isn’t structurally sound.

So while AI is changing how software gets built, we need to be sure we’re building on a solid foundation. It’s no longer enough to move fast or ship often. As leaders, we need to understand how AI is being used inside our teams, and whether the things getting built are actually stable, scalable, and secure.

Because if you don’t know what your team is using AI to build today, you may not like what you’re shipping tomorrow.

Lisa Larson-Kelley is founder and CEO of Quantious.
