As AI fragments, enterprise control is the next battleground


You wouldn’t pay a surgeon to file your tax return, and you wouldn’t ask your accountant to perform your appendectomy. The same is true for AI: Organizations are starting to realize that different AI providers excel at different tasks, from coding to specialized research to creative design.

Over the coming year, enterprises will absorb a variety of these AI providers’ technologies in earnest and at scale—department by department, role by role. Legal teams will standardize on tools like Harvey. Customer service teams will rely on Glean or purpose-built agents. Development teams may choose resources from Anthropic. Marketing, engineering, finance, and HR will similarly gravitate toward AI resources from Microsoft, xAI, or OpenAI, optimized for their specific needs.

In other words, enterprises will move beyond the assumption that a single AI provider can meet all their needs and into an era of targeted, role-based, or need-based AI.

Making matters even more complicated, many AI providers are now beginning to roll out their own browsers.

Enterprise leaders thus face a new challenge: how to manage the onslaught of AI tools now arriving.

HISTORY IS REPEATING ITSELF

Enterprises have been here before.

When cloud computing emerged, many dipped their toes in the water by standardizing on a single provider. The logic was simple: fewer vendors, lower cost, less risk. But as cloud usage expanded, different workloads demanded different strengths, and organizations diversified their cloud infrastructure.

The same dynamic emerged with data platforms. Early efforts focused on centralized repositories like data lakes, but as use cases multiplied, organizations often found that no single system served every real-world use case equally well. Most enterprises responded by adopting multiple tools around a shared data foundation.

In both cases, organizations that had prepared themselves for flexibility were better positioned.

AI is following this same trajectory, only faster. And unlike cloud or data infrastructure, AI adoption isn’t happening quietly behind the scenes. It’s happening in daily workflows across departments, often without central coordination.

Leaders can therefore best help their organizations succeed by embracing many tools, each chosen for what it does best, while managing them through shared controls.

THE RISK OF AI TOOL SPRAWL

As AI systems and use cases proliferate, failing to prepare poses real risks to the enterprise.

This proliferation extends beyond standalone AI tools. Increasingly, SaaS applications, from CRM systems and productivity suites to finance and HR platforms, embed their own AI. In many cases, AI adoption will happen by default, not by deliberate choice.

With these tools, teams will also inherit fragmented security policies, inconsistent controls, and limited visibility. Tools that seem harmless in isolation can create meaningful risk in aggregate.

This is the rise of shadow AI: systems introduced to solve real problems, but without the oversight to manage them responsibly. With agentic AI, where systems act on users’ behalf, those risks compound: permissions expand and accountability becomes harder to trace.

If these tools are left unchecked, leaders will lose sight of where AI is used, what data it touches, and which systems act autonomously on the organization’s behalf. Experimentation and innovation should not be allowed to scale faster than oversight.

GOVERNANCE IS THE MISSING LAYER

Multi-model flexibility does not have to come at the expense of visibility and security. Again, we have been here before. With SaaS, enterprises don’t manage a wide variety of capabilities by forcing everyone onto one system; they manage it by establishing shared controls across many tools.

Enterprises need a governance layer that sits above all AI vendors. That layer should provide:

  • Visibility across AI usage
  • Policy enforcement independent of model provider
  • Guardrails for data access
  • Safe experimentation
  • Support for bring-your-own-device users, contractors, and distributed teams

Governance doesn’t restrict freedom. It enables it, allowing organizations to choose whichever models they want and assign them across their teams without introducing new risk.

And true governance can’t rely on technology alone. Leaders must cultivate a culture of AI literacy, in which every employee can confidently evaluate, validate, combine, and challenge AI systems. Only then can organizations embrace a multitude of AI tools safely and effectively.

PREPARE FOR MULTI-MODEL SUCCESS

Much like SaaS, the cloud, and data platforms before it, AI will soon spread across roles, workflows, and applications. Leaders that build in the capacity to manage all these models—through visibility, governance, and an AI-fluent workforce—will be best positioned to capture all of AI’s advantages without compromising safety, trust, or control.

Steve Tchejeyan is president of Island.
