
Design as Governance
I was on a call last year with a VP of customer experience at a large financial services company. They'd just rolled out an AI agent to handle first-tier support — account questions, balance inquiries, basic troubleshooting. The agent was technically solid. Fast. Accurate. Handled eighty percent of volume without escalation.
But the NPS scores tanked.
When they dug in, they found the problem wasn't accuracy. It was tone. The agent was resolving issues efficiently but coldly. It would approve a fee reversal and then immediately pivot to the next issue without acknowledging that the customer had been frustrated. It would explain a policy correctly but in language that felt like a legal brief. It was right about everything and wrong about the experience.
The VP asked me: "Who owns this? It's not a bug. It's not a policy violation. It's not a design problem in the traditional sense — there's no interface to redesign. It's just... the AI doesn't behave like us."
That question has been stuck in my head ever since. Because the answer, honestly, is that nobody owns it. Not yet.
For twenty years, the enterprise design conversation was about making things easier to use. Better interfaces. Smoother workflows. Fewer clicks. That conversation mattered — and it's basically over. Not because we solved usability, but because the thing being governed changed underneath us.
Enterprises don't just have digital products anymore. They have AI agents acting on behalf of customers. They have automated decision systems approving loans, routing shipments, escalating support tickets. They have models making judgment calls, thousands of times a second, that used to require a human being with context and discretion.
The question isn't "is the interface easy to use?" anymore. The question is "does this system behave in a way that's coherent, trustworthy, and aligned with what the organization actually intends?" That's a governance question. And it's a design question. They're the same question.
When an AI agent handles a customer complaint, it's not just executing a workflow. It's making a series of micro-decisions about tone, priority, escalation, compensation — decisions that collectively define the customer's experience of the entire organization. Who designs those decisions? Who decides what "good" looks like when the system is making ten thousand of them an hour? Who notices when the pattern drifts?
Right now, mostly nobody. Engineers build the systems. Product managers define the requirements. Legal sets the guardrails. But the holistic question — "what experience is this organization actually delivering, across every touchpoint, human and automated?" — that question falls between every chair at the table.
That's the governance gap. And designers are the people best equipped to fill it.
Not because designers know more about compliance or risk management than lawyers do. But because designers are trained to hold the human experience as the organizing principle. They're trained to think across systems, to notice when the seams between two perfectly functional components create a terrible experience, to ask "what does this feel like from the other side?"
That skill — the ability to evaluate a complex system from the perspective of the person it affects — is exactly what AI governance needs and mostly doesn't have.
Think about what governance looks like today in most enterprises. It's policy documents. It's review boards. It's checklists that get applied after the system is built. It's a compliance exercise, not a design exercise. And the result is predictable: systems that are technically compliant but experientially incoherent. The agent follows the rules but the customer feels like they're talking to a bureaucracy. The automated decision is within policy but the person on the receiving end has no idea why it happened or what to do about it.
That VP I talked to — they didn't have a technical problem. They had a design problem that nobody in their organization had the language for. The AI agent needed what any customer-facing employee needs: not just rules, but judgment. Not just accuracy, but empathy. And the only people in most organizations who think professionally about judgment and empathy at scale are designers.
I've started calling this "design as governance" because I think the framing matters. If you call it governance, you get compliance teams and policy documents. If you call it design, you get people who are obsessed with coherence, who notice when the system's behavior contradicts its stated values, who can hold the whole picture in their head and spot the places where it breaks.
The enterprises that figure this out are going to have a massive advantage. Not because their AI is better — the models are commoditizing fast — but because their AI behaves in a way that feels intentional. Trustworthy. Human. That's not an engineering problem. It's not a legal problem. It's a design problem at organizational scale.
And it's the most important design problem of the next decade.