
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

The model is not the product

Back in 2023, software engineer Matt Rickard wrote a short post titled, “The model is not the product.” It’s looking like he was absolutely right. He published it not long after the first wave of AI chatbot products hit the market—tools that let users query large language models trained on a compressed snapshot of the internet. I remember getting a demo of Microsoft’s Bing Chat at an event in Redmond that year and telling an enthusiastic Microsoft employee, “Yeah, this is cool, but it doesn’t seem to know anything useful—like flight information or baseball scores.” The model only knew what had been on the internet up to a certain cutoff date. It was an impressive AI model, sure—but not much of a product.

Today, the race to build the smartest AI model is still on—but it’s becoming increasingly clear that this won’t be the exclusive domain of a few wealthy tech giants. DeepSeek has already demonstrated what’s possible with its somewhat-open models. The real value, though, lies in what happens around the model. For example, LLMs became significantly more useful when they gained the ability to fact-check themselves using real-time web data—and cite their sources. Now, models are beginning to operate systems beyond themselves. Both Anthropic and OpenAI, for instance, have models that can control aspects of a personal computer.

Most recently, a small Chinese company called Butterfly Effect released Manus, which it describes as the first general autonomous agent. Manus is a system of agents and subagents built using Anthropic’s Claude 3.5 Sonnet model, along with specialized versions of Alibaba’s Qwen model. At the center of it is an “executor” agent that breaks down tasks and assigns them to subagents—some focused on specific objectives, others serving as knowledge or planning agents. Together, they collaborate under the executor’s direction to handle research, data analysis, report writing, workflow automation, and even code generation and deployment. And all of this happens autonomously in the cloud, without human supervision—so the user can simply walk away while the work gets done.

The real magic of Manus isn’t in the models themselves—the team is just using Anthropic and Qwen via APIs available to anyone. What’s powerful is the system’s architecture: a network of coordinated agents capable of sourcing information and collaborating dynamically. Manus may well be an early glimpse of where things are headed.
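To make the pattern concrete, here's a minimal sketch of an executor/subagent loop in Python. This is not Manus's actual code, which hasn't been published; the call_llm stub stands in for the Claude and Qwen API calls, and the task decomposition is hardcoded where a real system would generate it with a model:

```python
def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real model API call (e.g., Claude or Qwen)."""
    return f"[{role} output for: {prompt}]"


class Subagent:
    def __init__(self, role: str):
        self.role = role  # e.g., "research", "planning", "writing"

    def run(self, task: str) -> str:
        return call_llm(self.role, task)


class Executor:
    """Breaks a goal into subtasks and routes each to a specialist subagent."""

    def __init__(self, subagents: dict):
        self.subagents = subagents

    def decompose(self, goal: str) -> list:
        # In a real system this step is itself an LLM call that returns
        # (role, subtask) pairs; hardcoded here for illustration.
        return [
            ("research", f"Gather sources on: {goal}"),
            ("planning", f"Outline a report on: {goal}"),
            ("writing", f"Draft the report on: {goal}"),
        ]

    def run(self, goal: str) -> list:
        return [self.subagents[role].run(subtask)
                for role, subtask in self.decompose(goal)]


agents = {r: Subagent(r) for r in ("research", "planning", "writing")}
results = Executor(agents).run("the 2025 enterprise AI market")
```

The value, as with Manus itself, is in the coordination layer rather than any single model call: the executor owns the plan, and each subagent only sees its slice of the work.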

A frank talk about the future with Mistral CEO Arthur Mensch

I’m at the HumanX AI conference this week in Las Vegas, and I’ve had a number of conversations with people trying to sell AI models to enterprises. One of the most candid was with Arthur Mensch, cofounder and CEO of the French AI company Mistral—often referred to as “Europe’s OpenAI.” Mistral has seen strong adoption among European enterprises, some of which are drawn to the idea of working with a European lab rather than a U.S. one, Mensch told me. The company has now established a beachhead in the U.S., with a team of engineers based in Palo Alto. Mensch is bullish on Mistral’s U.S. prospects—he expects to grow the company’s American customer base tenfold by the end of 2025.

Enterprise leaders are thinking differently about AI in 2025. Several founders here told me that unlike in 2023 and 2024, buyers are now focused squarely on ROI. They want systems that move beyond pilot projects and start delivering real efficiencies. Mensch says enterprises have developed “high expectations” for AI, and many now understand that the hard part of deploying it isn’t always the model itself—it’s everything around it: governance, observability, security. Mistral, he says, has gotten good at connecting these layers, along with systems that orchestrate data flows between different models and subsystems.

Once enterprises grapple with the complexity of building full AI systems—not just using AI models—they start to see those promised efficiencies, Mensch says. But more importantly, C-suite leaders are beginning to recognize the transformative potential. Done right, AI systems can radically change how information moves through a company. “You’re making information sharing easier,” he says. Mistral encourages its customers to break down silos so data can flow across departments. One connected AI system might interface with HR, R&D, CRM, and financial tools. “The AI can quickly query other departments for information,” Mensch explains. “You no longer need to query the team.”
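The cross-department pattern Mensch describes can be sketched as a simple fan-out: one agent with read access to several department systems. The connectors below are hypothetical stand-ins, not Mistral's actual integration layer; in practice each would wrap an HR system, a CRM, or a finance database:

```python
from typing import Callable

# Hypothetical department connectors standing in for real integrations.
DEPARTMENTS: dict[str, Callable[[str], str]] = {
    "hr":      lambda q: f"HR records matching '{q}'",
    "crm":     lambda q: f"CRM entries matching '{q}'",
    "finance": lambda q: f"Finance figures for '{q}'",
}


def query_departments(question: str, targets: list) -> dict:
    """Fan one question out to several departments and collect the answers,
    so no human has to relay the request team by team."""
    return {dept: DEPARTMENTS[dept](question) for dept in targets}


answers = query_departments("Q3 headcount and sales pipeline", ["hr", "crm"])
```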

Eventually, Mensch says, every department will be represented by an agent. These agents will take on much of the day-to-day work: compiling research, writing proposals, building marketing campaigns. They’ll share data, coordinate, and collaborate—while humans shift into oversight roles. The big change? “Humans will no longer query the AI for information as they do now,” Mensch says. “Increasingly, the AI agents will query the humans.” Agents will tap the right people for domain expertise—asking someone to review a proposal, weigh in on strategy, or greenlight a document for the CEO. The result, Mensch predicts, will be a flattening of organizations, with traditional middle-management roles gradually disappearing.
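That role inversion, with the agent querying the human rather than waiting to be queried, could look something like the sketch below. The review queue and expert registry are invented for illustration, assuming each department agent knows which human owns which topic:

```python
from dataclasses import dataclass, field


@dataclass
class HumanRequest:
    expert: str   # the person the agent taps for domain expertise
    task: str     # e.g., "review proposal", "greenlight document"
    payload: str


@dataclass
class DepartmentAgent:
    department: str
    experts: dict                         # topic -> responsible human
    outbox: list = field(default_factory=list)

    def draft_proposal(self, topic: str) -> str:
        draft = f"[{self.department} draft on {topic}]"  # an LLM call in practice
        # Role inversion: the agent queries the human, not the other way around.
        self.outbox.append(
            HumanRequest(self.experts[topic], "review proposal", draft))
        return draft


agent = DepartmentAgent("marketing", {"pricing": "cfo@example.com"})
agent.draft_proposal("pricing")   # outbox now holds a review request for the CFO
```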

Google DeepMind creates a brain for robots

In keeping with the theme, Google’s Gemini model is reaching into new realms—finding physical embodiment. This week, the company announced two new robotics models designed to serve as the “brain” for a wide range of robots, from simple robotic arms to more advanced humanoids.

The first, called Gemini Robotics, brings Gemini’s general world knowledge into robotic systems. It’s multimodal—meaning it can reason across visual, auditory, and textual inputs. In a demo, a robotic arm equipped with a camera “eye” sat in front of a toy basketball hoop. When asked to “do a slam dunk,” it picked up the ball and scored—even though it had never been specifically trained on that task. Thanks to Gemini’s broad, generalist understanding, it knew what a slam dunk was and how to perform it.

The second model, Gemini Robotics-ER (for Embodied Reasoning), builds on that foundation by integrating physical reasoning—an understanding of how objects move through space and time. This enables a robot to detect objects, predict their motion, and anticipate the consequences of its own actions. It might understand, for example, that an egg shouldn’t be gripped too tightly.

Of course, giving robots this level of intelligence and autonomy raises serious safety concerns. AI models are increasingly capable of acting independently—and when that autonomy is projected into the physical world, the risks grow. Google DeepMind acknowledges this, pledging to apply a multilayered safety framework to the Gemini robotics models. These systems will inherit Gemini’s existing safeguards against harmful or dangerous content, and will include an added layer of “constitutional AI”—a kind of built-in ethical guidance, reminiscent of Isaac Asimov’s Three Laws of Robotics.
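One common way to implement such a guardrail layer is to screen every proposed action against a fixed rule list before execution. The sketch below illustrates the idea only; the rules and action format are invented, and DeepMind has not published its actual interface:

```python
# Invented safety rules and action schema, for illustration only.
RULES = [
    ("no_contact_with_humans", lambda a: "human" not in a.get("contact_targets", [])),
    ("grip_force_limit",       lambda a: a.get("grip_force_n", 0.0) <= 20.0),
    ("workspace_bounds",       lambda a: a.get("reach_m", 0.0) <= 1.5),
]


def screen_action(action: dict) -> tuple:
    """Return (allowed, names_of_violated_rules) for a proposed action."""
    violated = [name for name, ok in RULES if not ok(action)]
    return (not violated, violated)


proposal = {"grip_force_n": 35.0, "reach_m": 0.8, "contact_targets": []}
allowed, violated = screen_action(proposal)   # blocked: grip_force_limit
```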


Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
