
We have a growing problem making our institutions work for humans. Across society, and especially in business, humans are increasingly treated as resources to be squeezed rather than as individuals to be served. Employees become “human capital” to be optimized; customers become “users” to be converted or upsold. This tendency predates AI, but AI threatens to accelerate it dramatically—automating the depersonalization, scaling the indifference, and introducing another layer of abstraction that separates real human beings from one another.

Yet there is an alternative path. Human-centered design is often dismissed as a soft or unserious discipline, a distraction from the serious business of maximizing the commercial income to be extracted from every interaction. But it is actually the most practical route to value creation available to organizations today. When you design around real human needs—those of both customers and staff—you build the bridge between internal transformation and external results.

The Foundational Principle

In The Design of Everyday Things, design expert Donald Norman articulates a deceptively simple idea: pay close attention to the needs of human users when defining design goals. This principle applies far beyond product design. It is foundational to how organizations create value.

Human-centered design acts as a critical bridge that taps into and connects two groups of humans. On one side, customer experience drives revenue—people buy from, stay loyal to, and recommend organizations that understand and serve their actual needs. On the other side, the employee experience drives execution—staff who feel understood and supported deliver better work and stay in their roles for longer. Neglect either side and value leaks away, no matter how sophisticated your technology or how ambitious your strategy.

Crucially, human-centered design is not a one-time exercise conducted before systems are built. It is an ongoing discipline that begins with observation, continues through implementation, and persists as long as the system operates. Humans change. Their needs evolve. Their contexts shift. A design process that treats initial research as sufficient will produce systems that drift steadily away from the people they are meant to serve. The organizations that sustain value are those that build continuous feedback loops, returning again and again to observe, test, and refine.

Why AI Makes This Urgent

AI amplifies the consequences of getting human factors wrong. There are three reasons why human-centered design becomes especially critical in the age of AI.

First, speed and scale. When an AI system interacts with customers or processes employee workflows, its behavior can propagate across millions of touchpoints. A poorly designed interaction that might have affected dozens of people in a manual process now affects thousands or millions. The cost of inattention multiplies accordingly.

Second, the fallacy of confusing humans with machines. Management systems and technical architectures tend to assume that they are dealing with rational actors who process information logically and respond predictably. This is the same fallacy embedded in the economist’s concept of homo economicus—the fictional human who optimizes utility with perfect information and no emotion. Real humans bring biases and emotions to their decisions and interactions; they bring varied cultural contexts and needs that shift depending on circumstances. Different people come to AI from radically different angles, and a system designed for an idealized user will fail actual ones.

Third, the diversity of stakeholder interactions. Not everyone affected by an AI system interacts with it directly. Some draw on its outputs at second or third hand—a manager reviewing AI-generated reports, a supplier responding to AI-optimized orders. Other stakeholders—such as government agencies, labor groups, or consumer rights advocates—have regulatory or social interests in how you implement AI. Omit any of these groups from your design process and you create friction that erodes the value you are trying to build.

Building Human-Centric AI Systems

Translating these principles into practice requires deliberate choices at every stage of AI development and deployment.

Start with personas designed for context. A single AI system may need to present itself differently depending on who it is interacting with. A customer-facing interaction might require conversational warmth, natural pacing, and even deliberate pauses that make the exchange feel human. An internal communication feeding data to supply chain managers might prioritize speed, precision, and structured formatting. An AI agent participating in a multi-agent orchestration layer might need yet another mode—one optimized for machine-readable clarity. These are not cosmetic differences. The persona an AI adopts shapes whether the humans on the other end can work with it effectively. Design these deliberately, not as afterthoughts.
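One way to make persona design deliberate rather than incidental is to treat it as explicit configuration. The sketch below is a minimal illustration, not a prescribed implementation; the stakeholder categories and persona attributes are hypothetical examples standing in for whatever distinctions your own stakeholder mapping surfaces:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """The interaction style an AI system adopts for one audience."""
    tone: str           # e.g. "conversational" for customers, "terse" for internal feeds
    output_format: str  # e.g. "prose", "table", or machine-readable "json"
    pacing: str         # e.g. "natural pauses" vs. "immediate"

# Hypothetical registry: one deliberately designed persona per stakeholder type.
PERSONAS = {
    "customer": Persona(tone="conversational", output_format="prose",
                        pacing="natural pauses"),
    "supply_chain_manager": Persona(tone="terse", output_format="table",
                                    pacing="immediate"),
    "agent_peer": Persona(tone="neutral", output_format="json",
                          pacing="immediate"),
}

def persona_for(stakeholder_type: str) -> Persona:
    """Select the designed persona for an audience.

    Fails loudly for an unmapped stakeholder type, so a missing design
    decision surfaces as an error rather than a one-size-fits-all default.
    """
    if stakeholder_type not in PERSONAS:
        raise KeyError(f"No persona designed for {stakeholder_type!r}")
    return PERSONAS[stakeholder_type]
```

The useful property is the explicit failure: a new audience cannot silently inherit a persona designed for someone else, which forces the design conversation the text argues for.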

Embrace the iterative spiral. Norman’s concept of human-centered design follows a cycle: observation, idea generation, prototyping, testing, and then back to observation. This is not a linear checklist to be completed once. Each round of testing reveals new information about user needs that the previous round of observation missed. For example, initial research might suggest that speed is the primary requirement for a customer service AI. But watching real users interact with a prototype might reveal that some customers prefer a “chattier” experience with more interaction, even if it takes longer. The spiral deepens understanding as experiments scale.

Recognize the limits of self-reporting. Users do not always know what they need, and they are often not well-placed to articulate their desired outcomes even when they do know. Customers might tell you they want human agents, but longer-term behavioral analysis may reveal a preference for AI solutions that eliminate waiting times. Subject matter experts and scholarly research are invaluable supplements to direct observation. The goal is to understand what actually serves people, not merely what they say they want. (This point is made particularly well with reference to the medical context in Joseph and Pagani’s Designing for Health: The Human Centered Approach.)

Build in human audit layers. The temptation with AI is to automate completely—to remove humans from the loop in pursuit of efficiency. Resist it. Introduce human checkpoints that look for systemic biases, catch edge cases, and intervene where required. This is not a failure of automation but a recognition that partnership between humans and AI produces better outcomes than either alone.
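An audit layer of this kind can be sketched as a simple routing rule: automated decisions below a confidence threshold, or touching known edge cases, go to a human review queue instead of executing directly. The threshold value, flag names, and class structure below are illustrative assumptions, not a reference design:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str        # what the decision concerns, e.g. "routine_refund"
    outcome: str        # the AI's proposed action
    confidence: float   # model-reported confidence, 0.0 to 1.0

@dataclass
class AuditLayer:
    """Hypothetical human checkpoint between AI output and execution."""
    threshold: float = 0.85                      # assumed cutoff for auto-execution
    edge_case_flags: set = field(default_factory=set)
    review_queue: list = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        # Low confidence or a flagged subject means a human looks first.
        needs_human = (decision.confidence < self.threshold
                       or decision.subject in self.edge_case_flags)
        if needs_human:
            self.review_queue.append(decision)
            return "queued_for_human_review"
        return "auto_executed"
```

In practice the interesting design work is in choosing the flags and threshold, and in feeding reviewers' verdicts back into the system—the partnership the text describes, rather than a rubber stamp.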

The Orchestration Challenge

As organizations deploy multiple AI agents—handling sales, compliance, operations, customer service—a new challenge emerges. These agents can conflict. Gartner predicts that 40% of enterprise applications will use multi-agent systems by year-end, and a common failure mode is already apparent: agent deadlock, where agents with different objectives provide contradictory instructions and freeze the workflow.

The solution is not purely technical. Orchestration layers can help resolve conflicts algorithmically, but they cannot substitute for human judgment in ambiguous cases. Human-centered design here means designing the human role in the system, not just the AI components. Someone must be empowered to adjudicate when the sales optimization agent and the regulatory compliance agent cannot agree. That role requires clarity about authority, access to relevant context, and the judgment to weigh competing priorities. Organizations that neglect this human layer will find their sophisticated multi-agent systems grinding to a halt.
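The escalation logic can be made concrete in a few lines. This is a deliberately simplified sketch—real orchestration layers involve far more state—but it shows the shape of the design choice: consensus proceeds automatically, while conflict routes to a human adjudicator rather than freezing the workflow. The agent names and the adjudication callback are hypothetical:

```python
def resolve(proposals: dict, adjudicate) -> str:
    """Resolve one orchestration step among multiple agents.

    proposals:  maps agent name -> proposed action
    adjudicate: human decision function, invoked only on conflict,
                receiving the full set of competing proposals so the
                adjudicator has the context to weigh priorities
    """
    actions = set(proposals.values())
    if len(actions) == 1:
        return actions.pop()   # all agents agree: no human needed
    # Conflict (the deadlock case): escalate with attribution intact.
    return adjudicate(proposals)

# Usage: the sales agent and the compliance agent disagree, so the
# empowered human rules. The lambda stands in for a real review step.
decision = resolve(
    {"sales_agent": "send_discount_offer",
     "compliance_agent": "hold_pending_review"},
    adjudicate=lambda proposals: "hold_pending_review",
)
```

Note what the code makes explicit: someone must own `adjudicate`. Defining who that is, and what context they receive, is the human-layer design work the text argues organizations neglect.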

Practical Steps

Five actions can move human-centered design from abstraction to operation:

1. Map your human touchpoints. Before any AI initiative, document every human who will interact with or be affected by the system. This includes direct users, indirect data consumers, and those with regulatory or reputational stakes. If you cannot name the humans involved, you are not ready to build.

2. Observe before you build. Spend time with actual users before defining requirements. Watch what they do, not just what they say. The gap between stated preferences and revealed behavior is where design insight lives.

3. Design your personas deliberately. For each AI system, specify how it should interact differently with different stakeholder types. Document these choices and revisit them as you learn more.

4. Build in human audit points. Identify where human judgment must remain in the loop and design those roles explicitly. Specify what authority they have, what information they need, and how their interventions feed back into system improvement.

5. Don’t stop—cycle. Treat testing as the beginning of observation, not the end of development. Build feedback mechanisms that allow continuous refinement as human needs evolve.

Conclusion

Human-centered design is not a constraint on AI ambition. It is what allows that ambition to create real value. Technology alone creates nothing—financial value emerges only when technical capabilities meet needs that are meaningful for humans. Human-centered design is the discipline that makes that meeting possible, the bridge between what your systems can do and what actually matters to the people you serve.
