Snowflake thinks AI coding agents are solving the wrong problem

AI coding agents are suddenly everywhere, the latest thing Silicon Valley cannot stop talking about. From venture-backed startups to splashy big tech keynotes, the promise sounds the same: just describe what you want, and the AI will build it for you. It is a seductive idea, especially in a world where software projects are notorious for moving slowly. But inside large companies, that vision is already starting to unravel.

What looks impressive in a demo often falls apart in the real world. As soon as AI-generated code runs into actual enterprise data, the problems show up. Schemas clash, governance breaks down, and a supposed breakthrough can quickly turn into a liability.

“Coding agents tend to break down when they’re introduced to complex enterprise constraints like regulated data, fine-grained access controls, and audit requirements,” Sridhar Ramaswamy, CEO of Snowflake, tells Fast Company.

He says most coding agents are built for speed and independence in open environments, not for reliability inside tightly governed systems. As a result, they often assume they can access anything, break down when controls are strict, and cannot clearly explain why they ran a certain query or touched a specific dataset.

This gap between what AI can write and what it actually understands is becoming one of the most expensive problems in enterprise AI. Gartner predicts that 40% of agentic AI projects will be canceled by 2027 because they lack proper governance, and only 5% of custom enterprise AI tools will ever make it into production.

Ramaswamy says the core issue in enterprise AI is writing functional code in a way that is secure, transparent, and compliant from the start. He argues that companies need to put trust, accuracy, and accountability ahead of unchecked automation, and that most coding agents today sit outside existing data governance systems instead of being built into them.

Snowflake’s answer is Cortex Code, a data-native AI coding agent designed to work directly inside governed enterprise data, not as a layer sitting on top of it. It arrives alongside a newly announced $200 million partnership with OpenAI. Together, they reflect a contrarian bet that the real battle for enterprise AI will be won at the data layer.

AI Coding Agents Don’t Understand Enterprise Context

Most AI coding agents are great at writing code on their own, but they struggle once that code has to run inside a real company. Large organizations live with constant constraints, from security rules and uptime demands to shared business logic that evolves over time. Agents trained mostly on public code and synthetic examples rarely absorb those realities, and the disconnect shows up almost right away.

Enterprise data also lives across data warehouses, third-party platforms, and legacy systems, and it carries layers of organizational meaning with it. Most coding agents treat that data like any other dataset, instead of the most tightly regulated asset a company has. The fallout shows up fast in production. Some enterprises say they spend weeks cleaning up AI-generated code that ignores internal data standards.

“In production, agents most often fail due to poor data integration, lax identity and security permissions, and hallucination for complex code workflows,” says Arun Chandrasekaran, vice president and analyst at Gartner. “Vendors often underestimate the gap because they assume that enterprises have centralized data and codified access policies, which isn’t true in most large enterprises.” 

Chandrasekaran adds that AI agents are often embedded into developer IDEs without any grounding in enterprise system semantics, which he sees as the key reason the issue persists. “This can result in trust erosion and security exposure,” he says, “which can hinder production.”

According to a CodeRabbit study, AI-generated code introduces 1.7 times more issues than human-written code, including 75% more logic errors and up to twice as many security vulnerabilities, often in conflict with enterprise standards. Likewise, another study found that 45% of AI-generated code samples fail security tests, posing critical risks to web application security.

Ramaswamy says the most immediate consequence is slowed development. In some cases, teams quietly abandon agents altogether after early pilots fail governance checks. “Even when the consequences are minor in nature, the perception of risk alone can cause organizations to roll back or freeze AI initiatives until stronger guardrails are in place,” he says.

According to Anahita Tafvizi, Snowflake’s chief data analytics officer, this pattern points to a deeper design problem: Many coding agents can generate technically correct code, but they do not understand how business rules are applied, how access controls limit what is allowed, or how audit requirements determine whether a system can actually be trusted once it goes live.

“Meaningful enterprise innovation depends on context,” she says. “When an agent understands not just how to write code, but why certain controls exist and how decisions are governed, teams can build with confidence.”

Snowflake’s Thesis: Context Beats Cleverness

Snowflake’s latest product, Cortex Code, is a data-native AI coding agent built directly into its governed data platform, rather than layered on top of it. That distinction matters. Instead of trying to guess enterprise rules from prompts, Cortex Code is designed with built-in awareness of schemas and operational constraints. The company says the goal is to make AI follow the same rules people already do.
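To make the “same rules people already follow” idea concrete, here is a minimal, purely illustrative Python sketch. None of these names (Grant, GovernedExecutor, the audit-log format) come from Snowflake or Cortex Code; they only contrast an agent that hits data directly with one whose queries must pass the same role grants and audit trail a human analyst would face.

```python
# Conceptual sketch only: not Snowflake's or Cortex Code's actual interface.
# It shows agent-generated queries running through explicit grants plus an
# audit record of why each query ran, instead of touching the data directly.
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Grant:
    role: str
    table: str
    allowed_columns: frozenset  # columns this role may read


class GovernedExecutor:
    """Runs agent-generated queries under explicit grants and an audit trail."""

    def __init__(self, conn, grants):
        self.conn = conn
        self.grants = {(g.role, g.table): g for g in grants}
        self.audit_log = []  # a real system would persist this durably

    def run(self, role, table, columns, reason):
        grant = self.grants.get((role, table))
        if grant is None:
            raise PermissionError(f"role {role!r} has no access to {table!r}")
        denied = set(columns) - grant.allowed_columns
        if denied:
            raise PermissionError(f"role {role!r} may not read {sorted(denied)}")
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        # Record who queried what, and why, before returning results.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "role": role,
            "sql": sql,
            "reason": reason,
        })
        return self.conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, ssn TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'Ada', '123-45-6789')")

    executor = GovernedExecutor(conn, [
        Grant(role="analyst", table="customers",
              allowed_columns=frozenset({"id", "name"})),
    ])

    # Allowed: the query stays inside the analyst's grant and is logged.
    print(executor.run("analyst", "customers", ["id", "name"],
                       reason="build churn report"))

    # Blocked: asking for a regulated column fails loudly instead of
    # silently widening the agent's access.
    try:
        executor.run("analyst", "customers", ["ssn"], reason="debugging")
    except PermissionError as exc:
        print("blocked:", exc)
```

The point of the sketch is the failure mode: the governed path refuses out-of-policy requests and leaves a record of why each query ran, which is the kind of accountability the failures described above lack.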

Ramaswamy says Cortex Code is not just about producing code faster than tools like Claude Code, but about understanding the realities of enterprise environments. Its value, he argues, comes from what he calls its “deep awareness of the context and constraints” that shape how large organizations operate, which allows a much wider range of employees to build solutions that are safe and reliable, even without advanced technical skills.

Snowflake’s $200 million partnership with OpenAI further reinforces its architectural bet. “It’s a direct, first-party relationship that allows OpenAI’s models to operate natively inside Snowflake, on top of enterprise data,” Ramaswamy says. “By bringing OpenAI’s frontier model capabilities into Snowflake, we remove the operational friction of stitching together disparate tools and significantly lower the barrier to deploying advanced AI responsibly.”

An Inflection Point or a Higher Bar?

Industry experts say that while Snowflake is making a big bet on a data-first approach with Cortex Code, it is far from alone. Rivals such as Databricks, Google BigQuery, and Amazon Redshift are moving in the same direction, putting governance and auditability ahead of raw speed.

Experts say Snowflake’s main point of differentiation is how closely Cortex Code is tied to production data. As Doug Gourlay, CEO of data storage company Qumulo, puts it, most companies have “grafted increasingly capable agents onto developer tools” and then tried to manage risk after the fact. Snowflake, he says, is flipping that model by treating governance and data semantics as the foundation on which AI operates. (While rivals excel in niche strengths like machine learning flexibility or platform scale, Cortex Code is built for teams that need governed, low-maintenance AI coding directly on live enterprise data.)

“Over time, this approach is likely to become table stakes. Enterprises will increasingly view AI that operates outside their governed data fabric as an unacceptable risk, regardless of how impressive its capabilities appear in isolation,” says Gourlay.

Coding tools such as Anthropic’s Claude Code, for instance, are largely optimized for developer-centric workflows, emphasizing controls like explicit change approvals and tight IDE integrations. In practice, Claude Code typically needs to be paired with additional governance layers or secure platforms to meet enterprise compliance requirements. Snowflake and Anthropic recently partnered to integrate Claude models directly into Snowflake Intelligence and Cortex AI, allowing Anthropic’s models to run inside Snowflake’s governed data environment.

Snowflake says its edge comes from working directly with enterprise metadata and semantic context. The company is betting that as organizations grow more cautious, they will turn away from agents that appear powerful but act unpredictably. If that proves true, those who ignore data context may define today’s hype, while those who embrace it will shape what comes next.
