One of the things I'm most proud of from my time inside a Fortune 100 retailer is also one of the hardest to talk about. Because it wasn't clean. It wasn't a framework or a governance model or something you'd put in a leadership book. What I learned — what actually worked — was that the formal budget process, the annual planning, the official channels for getting resources? That's the story the org tells itself. The real story is that the money is always there. Someone just has to be willing to make the argument for redirecting it. And making that argument requires something no executive program teaches you: years of trust built through over-delivery, so that when you sit across from a partner and say "the real problem isn't the screens, it's the infrastructure underneath them," they believe you enough to move half a million dollars.
That argument doesn't work if you're new. It doesn't work if you've been collecting wins in a silo. It doesn't work if your instinct is to wait for official approval before trying something. You have to already be the person they trust before the moment arrives when you need them to.
I've been watching companies hire their way around that problem for fifteen years.
Here's how the pattern usually goes. A team is underperforming. Products are shipping late, or shipping wrong, or shipping fine but not actually moving the numbers anyone cares about. The diagnosis is almost always the same: we don't have the right people. So you hire a VP of Product. Or a design leader. Or a whole new team of senior engineers. And six months later, the same things are happening — only now there are more meetings and more friction and a new vocabulary for describing the same dysfunction.
The people weren't the problem. The org was.
Specifically: the way the org makes decisions, the way it measures whether those decisions worked, who has actual authority versus nominal authority, and what behaviors get rewarded in the moment versus what gets celebrated in the post-mortem. Those things don't change when you add headcount. They change when someone changes them. And that takes a completely different kind of work.
Let me tell you about a specific initiative. When I was embedded in the design portfolio across store operations at Home Depot, one of the first things I ran into was app sprawl. Teams were building new applications to solve every new problem. An app for this workflow. An app for that one. Nominally separate things — but running on the same devices, maintained by the same people, consuming the same engineering capacity. And every app got its own spec, its own components, its own patterns. Nothing talked to anything else. The associate experience was fragmented in ways that were costing real time and real money, and most people in the org couldn't see it because they were only accountable for their piece.
So I pushed for what I started calling "No New Apps." Not a formal initiative at first — just a constraint I started arguing for in every planning conversation. If a team came wanting to build something new, the question became: does this have to be new? Or can we extend what already exists? And if we build it new anyway, what are we committing to reuse?
The argument I was making wasn't about good design. It was an economic argument. The accumulation of these small, isolated decisions was costing more in maintenance burden and associate cognitive load than anyone had bothered to add up. Once I had the numbers, the conversation changed. We weren't talking about design principles. We were talking about portfolio liability.
That's not a people problem. That's a structural problem with a structural fix. No amount of hiring would have resolved it — because the behavior creating the problem was being rewarded by the systems already in place. Ship something new, you're a builder. Argue for reuse, you're just slowing things down. Until someone changes what gets rewarded.
The version of this I see most often in organizations I've worked with is the measurement problem. And it's a trap, because it looks like a people problem… but it's not.
A team can't tell you whether what they shipped actually worked. The metrics they have are engagement metrics, completion metrics, traffic metrics. All of which tell you something about behavior but nothing about outcome. So decisions keep getting made on the wrong signal — and because nobody is measuring the right thing, nobody knows the decisions are wrong.
The team just looks unproductive. They keep making things that don't move the business, and nobody can trace why.
I built a metrics system at Home Depot specifically to fix this. Standardized scoring. Canonical data views aligned to the fiscal calendar. Semantic versioning on dashboard releases — treated it like a product, because that's what it was. And the thing that unlocked all of it wasn't the tooling. It was the decision that the organization would hold itself accountable to a shared measurement model instead of each team reporting its own numbers in its own format.
That's a governance decision. You can have the most talented analysts in the world working for you — they cannot make that decision for you. Only a leader with the credibility and the organizational access can walk into the room and say: we're all going to agree on what good looks like, and we're going to hold this standard even when it makes some teams' numbers look worse.
Now — I'm not arguing against hiring well. This isn't a contrarian take on talent. Good people matter enormously — I've spent significant energy over the years identifying, developing, and advocating for practitioners who had more range than their formal scope suggested. Getting someone to the next level when they've earned it is some of the most meaningful work I've done.
What I am saying is that hiring well solves maybe thirty percent of the problem in most organizations. The other seventy percent is structural. It's the decision rights model. The measurement systems. The way resources flow, who has to approve what, and what the incentive structure actually rewards day-to-day versus what the strategy deck says it rewards. You cannot talent-acquire your way out of those conditions. You have to fix the conditions.
The reason most organizations don't fix them isn't that they don't know they're broken. It's that fixing them is slower, less visible, and harder to point to in a quarterly review than adding headcount. Headcount shows up in org charts. Structural change shows up in outcomes — eventually, after a lag long enough that whoever made the call usually doesn't get the credit.
Here's the thing nobody wants to say out loud: if you've hired well and the org is still underperforming, the problem is almost certainly you. Not the people you hired. The system you've built around them — the one that makes it hard for good people to make good decisions, and hard for anyone to know whether the decisions they made were actually good.
That's a fixable problem. But it requires a different frame. It requires looking at the org the way a designer looks at a product — not as a collection of individuals with fixed outputs, but as a system with behaviors, incentives, feedback loops, and failure modes. A system that can be diagnosed. Changed. And measured against something real.
The instinct to hire when things are broken is human. I understand it. There's something about adding people that feels like action. It signals investment. It feels like you're doing something.
But the org is the product. And you wouldn't ship more features to fix a product whose architecture was failing… would you?
Fix the conditions. Then hire.
There's a phrase that shows up in every product review I've ever sat through: "we need to reduce friction." It doesn't matter what kind of product. It doesn't matter what stage the company is in. Signup flow too long? Reduce friction. Users dropping off at step three? Friction. Conversion rate flat? Must be friction.
And most of the time, the people saying it are right. Bad form design is friction. Unclear labels are friction. A modal that fires at the wrong moment — that's friction. Fix it.
But here's the thing nobody in the room ever says: some friction is doing work.
I don't mean that in some abstract philosophical way. I mean there are moments in a product where the effort required to complete an action is actually the point. Where the resistance isn't a barrier but a signal, to the user and to the system, that a real commitment is being made. You take that away… and you haven't improved the experience. You've hollowed it out.
Let me give you a specific example because I think it makes the point better than the theory does.
I've been building a learning platform. Audio-first, designed for working professionals who need to develop real competency — not just check a box and move on. One of the early decisions we made was about consent. Before a learner enters a personalized development track, we ask them to acknowledge what they're signing up for. Not a terms-of-service checkbox. A real moment. Here's what this will require of you. Here's what the platform will expect. Here's what happens if you're not ready.
Now, the instinct from every product playbook — and I've heard it from smart people — is to streamline that. Get people into the content faster. Every screen between signup and first lesson is a potential drop-off point. The data will tell you that a consent screen is "friction."
But that consent screen is doing something. It's establishing a contract between the learner and the platform. It's saying: this is serious, and we're going to treat you like it's serious. When someone completes that flow and actually starts their first session, they arrive different than someone who got waved through. The effort changes the quality of what comes next.
I know that sounds like I'm justifying a longer funnel. I'm not. I'm saying the funnel isn't the only thing that matters.
Think about the best onboarding experiences you've had. Not the smoothest ones — the best ones. The ones where you came out actually understanding what you'd signed up for. There was probably a moment in there that required something from you. A decision. A configuration. A self-assessment… something you couldn't skip without losing the point.
Duolingo does this. Before you start a course, it gives you a placement test. You can skip it — go straight to Lesson 1 — but if you take it, the app actually meets you where you are. That test is friction. It takes time. It asks you to think. And the experience on the other side is fundamentally better because of it.
Now think about the worst ones. Some app you downloaded… everything was so frictionless that you arrived at the main screen with no context, no orientation, no investment. You'd been optimized through the funnel so efficiently that you landed with nothing. You opened it once, maybe twice, and then it just sat on your phone collecting dust.
The frictionless onboarding has better completion rates. It looks better in the product review. And it produces users who don't know why they're there.
The problem isn't that product teams don't understand this. Most experienced designers and PMs have a feel for when friction is structural versus accidental. The problem is the measurement systems don't care about the difference. Completion rate doesn't ask why someone dropped off. Time-to-value doesn't know whether the value was felt or just reached. The metrics treat all friction as cost, so the optimization always runs in one direction: less.
What's actually needed is knowing when to leave the resistance in. I think about it in a few buckets…
There's the friction that proves you mean it. Delete-account flows, consent screens, cancellation paths. If these are too easy, you lose the signal that the action was intentional.
There's the friction that makes sure you arrive prepared. Onboarding sequences, setup flows, preference configurations. If these are too smooth, you produce users who haven't actually oriented themselves.
And there's the friction that asks you to pause and think. Self-assessments, review screens, confirmation steps. Remove those and you lose the moment where someone processes what they've just done.
None of those are bugs. They're doing work the metrics don't measure.
I want to be clear — I'm not arguing against usability. I've spent twenty years reducing friction where friction is the problem. Bad form design is bad form design. Unclear navigation is unclear navigation. There is no version of this argument where confusion is a feature.
What I am saying is that the reflex to remove all resistance is itself a design failure. It treats the user as someone to be moved through a funnel rather than someone to be engaged in an experience. And that distinction matters a lot more in some products than others.
If you're building a checkout flow, optimize for speed. But if you're building something where the user's investment in the process changes the quality of the outcome — learning, health, financial planning, anything where engagement is a variable — then friction isn't just your enemy. It's one of your most important design materials.
The question isn't "how do we reduce friction?" It's "what is this friction doing, and do we need it to keep doing it?"
Most of the time, you don't. Sometimes you do. The skill is knowing which is which — and having the conviction to leave it in when the data says take it out.
I want to leave you with something that has nothing to do with product design and everything to do with this argument.
Have you ever played Dark Souls? I did. Everyone kept talking about how incredible it was. How difficult. How rewarding. So I bought it. How hard could it be…
It is brutal. It is genuinely, unreasonably punishing. You wake up with no weapon, no context, no guidance. The game just says go — and then everything in the world tries to kill you. And I kept playing. And then I bought the sequel. And the one after that. Forty million people did the same thing — kept paying FromSoftware to punish them, three games deep.
Meanwhile, look at what's happening elsewhere. The Final Fantasy VII remake literally paints the cliffs yellow so you know where to go next. There's an entire new visual language in modern games designed to eliminate any moment where a player might have to figure something out on their own. Every surface is friction-free. And it's fine. It works. But it's not why people talk about Dark Souls ten years later.
FromSoftware never added an easy mode. Not because they couldn't. Because the difficulty is the product. The friction is what makes the victory mean something. Take it away and you don't have a better game — you have a different game. One nobody would remember.
Now — I'm not saying turn your enterprise SaaS into Dark Souls. Nobody's asking for that. But the next time you're in a product review and someone says "we need to reduce friction," just ask: is this friction a bug, or is it doing something we haven't thought to measure?
Now… go play Dark Souls. You'll know the difference.
I have a test I run on every product I evaluate, and it has nothing to do with the homepage, the onboarding, or the core feature set. I go straight to the delete account flow.
Not because I want to delete anything. Because what a product does when someone is leaving… tells you more about its character than anything it does when someone is arriving.
Here's what usually happens. You click "Delete Account" — assuming you can even find it, which is already telling — and the product panics. Suddenly you're in a completely different experience. The brand voice disappears. The copy goes institutional. The design goes sterile. You get a wall of text that was clearly written by legal, a guilt-trip retention modal, seventeen "are you sure?" confirmations, and a process that feels like it was designed to make you give up rather than actually follow through.
The product just told you something about itself. It has no idea who it is when things get hard.
And then there's the other version — the one that might be worse. You hit cancel on a subscription and suddenly the product that spent months charging you full price is offering you 80% off for the next three months. Just like that. Where was that deal when I was a loyal customer? What it tells you is one of two things: either they're betting you'll forget to cancel again once the discount expires — which is just dark pattern math — or they've been overcharging you the whole time and only revealed it at the exit door. Either way, it's disingenuous. The brand didn't just lose its voice. It lost its credibility.
And this is the pattern in almost everything. SaaS products. Social platforms. Subscription services. The marketing site is warm and human. The onboarding is polished and encouraging. The core experience is thoughtful. And the moment someone tries to leave, all of that evaporates. The brand goes into witness protection.
I think about this a lot because I build products, and the moments that don't make the marketing deck are the ones I care about most. The empty states. The error messages. The edge cases… the destructive flows. These are where you find out whether a brand actually has a point of view or whether it's just a coat of paint.
A delete account flow is the most destructive action a user can take in your product. It deserves at least the same design attention as any other critical moment. Honestly, it deserves more — because the emotional stakes are highest exactly when the design attention is usually lowest.
So what does it look like when someone actually thinks about it?
The brand stays in the room. The copy acknowledges what's happening without trying to manipulate you. The process is clear, honest, and completable. You feel like you're being treated as a person making a decision — not a metric trying to escape a funnel.
I'll tell you what we did. I've been building a learning platform where the brand voice has real personality: warm, direct, with a point of view. When we got to the delete account screen, every product playbook says go serious. Go formal. Go legal. Drop the personality because the moment is too grave for jokes.
We went the other way. The delete screen has the same voice as the rest of the product. Not joking about it — but acknowledging the seriousness in the way the brand would naturally acknowledge anything serious. The tone shifts from playful to sincere… but it doesn't shift to someone else.
That was a harder decision than it sounds. There's a real risk of reading as flippant when someone is about to permanently delete their data. But going sterile carries a worse risk. It tells the user that the personality they've been engaging with was performative. That underneath the friendly copy, there's the same institutional indifference as everywhere else. The mask slips at the worst possible moment.
Here's the test. Go to any product you use regularly and find the delete account flow. Ask yourself three things:
Does the brand still sound like itself? If the warm, human product suddenly sounds like a legal department — you've found a seam. The brand identity isn't structural. It's decorative.
Is the process respectful? Respectful means clear, honest, and completable. Not buried behind seven settings pages. Not interrupted by guilt trips designed to exhaust you into staying. A product that makes it hard to leave is a product that doesn't trust its own value proposition.
Would you feel good about the company afterward? This is the one that actually matters. The user who deletes their account and walks away thinking "that was handled well" is more valuable to your brand than the user you kept through manipulation. They'll speak well of you. They might come back. They'll remember how it felt.
So… if this is so obvious — if you can run this test in five minutes and immediately see the gap — why doesn't it get fixed?
Because nobody gets promoted for improving the delete account flow. There's no OKR for "users who left felt respected." The optimization pressure all runs toward acquisition and retention, so the moments of departure get zero design investment.
But the things you don't measure still shape the experience. The delete account flow is a brand expression. The error state is a brand expression. The empty state when someone first logs in and nothing is there yet… that's a brand expression. If you only apply your voice to the moments that marketing screenshots, your brand isn't deep. It's a veneer.
The products I respect most are the ones where the voice doesn't change when the context gets difficult. Where the design team clearly thought about the sad paths as carefully as the happy ones. Where the brand shows up not just in the moments designed to attract you, but in the moments designed to let you go.
That's the delete account test. And most products fail it.
Every product team I've ever worked with tracks completion rates. Finish the onboarding flow. Complete the setup sequence. Get through the tutorial. The metric is the same regardless of what you're building: did the user make it to the end?
What almost nobody tracks is what happened at the moment they didn't.
Before I go further — a word on "user." In consumer products, the word is mostly unambiguous: a user is someone who uses the thing, usually the same person who bought it. But in the organizations I've spent most of my career in, "user" flattens a distinction that matters enormously. A user might be an internal associate — a warehouse operator, a store manager, a fulfillment team lead — who had no say in whether the product was purchased and doesn't have the option to churn. Or it might be a customer who does pay, but isn't the buyer who signed the contract. Or it might be the buyer who signed the contract and never touches the actual tool. Same word. Completely different behaviors, stakes, and skip patterns.
When I talk about skip behavior, I'm not talking about a B2C onboarding sequence. I'm talking about any flow where a real person is being asked to do a thing — and deciding, for reasons of their own, not to.
Skip behavior is one of the most honest signals in a product. Not because users are consciously sending you a message — they're not. They're just doing what feels right in the moment. But that instinct reveals something. When someone hits a screen and decides not to engage with it, they're telling you one of three things: they already know this, they don't believe it matters right now, or you haven't made a compelling enough case for why they should slow down.
Those are three completely different problems. And if you're only tracking that the skip happened, you have no idea which one you're dealing with.
I've watched teams respond to skip behavior the same way every time: simplify the screen. Reduce the copy. Make it faster to move through. Which occasionally works… and often makes it worse. Because if the skip was happening because users didn't believe the step mattered, making it shorter doesn't fix the belief. It just makes the thing they don't believe in go by faster.
Let me give you a specific example. We were building an onboarding sequence for a learning platform — the kind where the quality of the personalized experience on the other side depends entirely on what the user tells you upfront. Preferences, context, goals. The more someone engages with that intake, the better their experience. The less they engage, the more generic it feels — and the faster they churn.
Early data showed users were skipping the preference screens at a significant rate. The instinct from the team — entirely reasonable instinct — was that the screens were too long. Too many questions. Trim them down.
We didn't do that. Instead, we looked at where the skips were happening, not just that they were happening. The pattern was interesting: users were engaging fine with the first two screens, dropping off hard on the third. The third screen was asking about learning cadence — how much time they could commit per week.
That's not a complicated question. But it's a vulnerable one. You're asking someone to make a commitment before they've seen enough of the product to know whether it's worth making. The skip wasn't laziness. It was hesitation. They didn't skip because the question was too hard. They skipped because we hadn't earned the right to ask it yet.
We moved that screen. Put two value-demonstration moments in front of it first — let the user experience something before we asked them to commit to a schedule. Skip rate dropped significantly. Not because we asked fewer questions. Because we asked the same question at a different point in the relationship.
The measurement problem here is real. Most analytics setups tell you a user skipped step three. They don't tell you what the user was thinking at step three. And so teams make structural changes — fewer screens, shorter copy, bigger buttons — when the actual issue was sequencing, or framing, or trust.
Now — I'm not arguing against simplification. Plenty of onboarding sequences are genuinely too long. If you're asking eight questions when three would do, cut five. That's real.
What I'm arguing is that simplification is not the same as understanding. You can build a very clean, very fast onboarding experience that still skips the step that would have made the user successful. You've optimized the funnel. You've destroyed the outcome.
The discipline that actually helps here is treating skip behavior as a research question before it becomes a design question. What is this screen asking the user to do? What does completing it require of them — cognitively, emotionally, in terms of commitment? And is the product prepared to receive that level of engagement at this point in the relationship?
That last one is the question teams never ask.
Is the product prepared to receive it.
Because skip behavior is sometimes a signal about the screen. And sometimes it's a signal about everything that came before the screen. A user who arrives at your most important intake question with no context, no demonstrated value, no established trust… is going to skip it. Not because the question is wrong. Because the relationship isn't ready.
There's a version of this that applies well beyond any single flow. Any time someone doesn't engage with something you expected them to engage with, the reflex is to fix the thing. Make it more prominent. Add a tooltip. Reduce friction.
Sometimes that's right. But the more interesting question is always: what did they experience before they got here? What story has the product been telling them — and does this screen feel like a natural next chapter, or does it feel like a non sequitur?
A skip is a broken sentence. The word that got skipped might be fine. The sentence before it might be the problem.
This is why I push for journey-level instrumentation over screen-level instrumentation. Knowing that screen three has a high skip rate is useful. Knowing that users who skipped screen three had also skipped the value moment on screen one is diagnostic. Now you have a pattern. Now you know you have a trust problem, not a copy problem — and those require very different fixes.
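To make the difference concrete, here's a minimal sketch, assuming a flat event log with hypothetical columns (user_id, step, action) and made-up step names; nothing here reflects any particular analytics stack. The screen-level view reports one number. The journey-level view conditions that same number on what the user did earlier in the flow.

```python
# Minimal sketch: screen-level vs. journey-level skip diagnosis.
# Columns and step names are illustrative assumptions, not a real schema.
import pandas as pd

events = pd.DataFrame([
    # user_id, step,               action
    ("u1", "value_moment_1",     "skipped"),
    ("u1", "preference_cadence", "skipped"),
    ("u2", "value_moment_1",     "completed"),
    ("u2", "preference_cadence", "completed"),
    ("u3", "value_moment_1",     "skipped"),
    ("u3", "preference_cadence", "skipped"),
], columns=["user_id", "step", "action"])

# Screen-level view: the skip rate on the cadence question, in isolation.
cadence = events[events["step"] == "preference_cadence"]
print((cadence["action"] == "skipped").mean())

# Journey-level view: the same skip rate, conditioned on whether the user
# engaged with the earlier value moment.
earlier = events[events["step"] == "value_moment_1"][["user_id", "action"]]
earlier = earlier.rename(columns={"action": "value_moment_action"})
joined = cadence.merge(earlier, on="user_id")
print(joined.groupby("value_moment_action")["action"]
            .apply(lambda a: (a == "skipped").mean()))
```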
The data is usually there. Someone just has to decide to look at it differently.
Completion rate is the metric that sounds rigorous and isn't.
Not because it's wrong to measure it. Because teams treat it as an outcome when it's actually a behavior. And that difference — behavior versus outcome — is the whole ballgame.
Here's what completion rate tells you: a user reached the end of a flow. That's it. It tells you nothing about what they understood, what they intended, whether they'll come back, or whether what they completed actually did anything for them. They moved through a series of screens and clicked the final button.
In some contexts, that's enough. Checkout flows, form submissions, transactional sequences where the goal is literally to reach the end — track completion rate, optimize for it, you're done. The behavior is the outcome.
But most products aren't transactional. Most products are trying to change something — a behavior, a skill, a habit, a decision. And in those products, reaching the end of the onboarding flow is not success. It's a precondition for success. Optimizing for the precondition while ignoring the actual outcome is one of the most common and most expensive mistakes in product.
I've seen this play out the same way in every org that's fallen into it. Onboarding completion goes up — real work, real improvement, teams feeling good about it. And then 30-day retention doesn't move. Or it moves slightly. And nobody can explain why, because the funnel looks great.
The problem is that the users who completed the flow… didn't actually get oriented. They clicked through it. They made it to the main screen with no context, no investment, no understanding of what the product was going to do for them.
They completed. They didn't arrive.
Those are different things. Completion rate cannot tell them apart.
Now — I want to be precise here, because this is not an argument against measuring completion. It's an argument against measuring only completion.
The question that needs to sit next to completion rate is: what did completing it produce? Not immediately — immediately, nothing has happened. But at day seven. Day thirty. Did users who completed onboarding actually engage with the core value of the product at a meaningfully higher rate than users who didn't? And within users who completed — is there variance? Are some completion paths producing better outcomes than others?
That last question is the one most teams never ask. They treat completion as binary. You either finished or you didn't. But how you finished matters enormously. A user who moved slowly through onboarding, paused on the preference screens, re-read the value proposition — that user is different from a user who clicked through every screen in forty-five seconds. Both completed. They're not the same.
The measurement fix here isn't complicated. It's just a different frame. Instead of asking "what percentage of users reached the end," ask "what did users do in the first week who completed onboarding, versus users who didn't?" Then ask it again at day thirty.
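A minimal sketch of that reframing, assuming two hypothetical tables: onboarding with a completed flag, and activity with a day offset and a core-action flag. The names are illustrative; the comparison is the point, not the schema.

```python
# Minimal sketch: compare core engagement by day 7 and day 30 for users who
# completed onboarding vs. users who didn't. Table and column names are
# hypothetical assumptions for illustration.
import pandas as pd

onboarding = pd.DataFrame({
    "user_id":   ["u1", "u2", "u3", "u4"],
    "completed": [True, True, False, False],
})
activity = pd.DataFrame({
    "user_id":     ["u1", "u1", "u2", "u3", "u4"],
    "day":         [2, 9, 3, 6, 25],
    "core_action": [True, True, False, True, True],
})

def engaged_by(day_cutoff: int) -> pd.Series:
    """Share of each cohort that performed a core action by the cutoff day."""
    window = activity[(activity["day"] <= day_cutoff) & activity["core_action"]]
    reached = onboarding["user_id"].isin(window["user_id"])
    return reached.groupby(onboarding["completed"]).mean()

print(engaged_by(7))   # day-7 core engagement, completers vs. non-completers
print(engaged_by(30))  # day-30: does the gap persist, shrink, or invert?
```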
What you'll usually find is one of three things: completion is a strong predictor of retention and the work was worth it; completion is weakly correlated and the real variable is something else entirely; or completion is negatively correlated with a specific outcome — meaning something in the flow is creating false confidence in users who shouldn't have completed it.
I've seen all three. The third one is the most instructive and the most uncomfortable: it's the case where optimizing the completion rate actually makes your product worse, because you've made it easier to finish something that was never designed to be easy.
Metrics that measure behavior are useful. Metrics that measure outcome are essential. Most teams track the first kind and call it the second kind — then spend quarters confused about why improving the numbers doesn't improve the product.
Completion rate tells you something is broken if it's very low. It tells you nothing is guaranteed if it's very high. The work — the actual design and measurement work — is in the space between those two facts.
What happened after they completed? That's the question.
There's a version of design work that starts with a problem statement and goes straight to solutions. You've probably sat in that room. Someone says "people are struggling with X" and within twenty minutes there are wireframes on the whiteboard. The conversation has moved from problem to design without passing through current state — without anyone asking what actually exists today and why it is the way it is.
This is how you end up designing around problems that aren't the real problem.
I call it the audit-first pattern — and it's less of a methodology than a discipline. Before you design anything, you understand what's already there. Not from memory, not from assumptions, not from what the last team told you they built. You look at it. You document it. You ask why each decision was made. And only then do you have an opinion about what should change.
The reason this matters is that most design problems aren't what they appear to be on the surface. A team comes to you saying the flow is broken. You look at the flow… and it's fine. Not perfect, but not the problem. The problem is that the product is routing the wrong people to that flow in the first place — associates who haven't completed a prerequisite step, or customers who haven't experienced enough value yet to do what you're asking them to do. Redesigning the flow doesn't fix that.
An audit would have found it in a day.
The version of this I've done most often is an experience audit — mapping every touchpoint a person has with a product or system, documenting what currently exists, and identifying where the seams are. Not "what should this be." What is this, and is it doing what it was supposed to do?
There's a question I like to run through every step in the audit. I call it QtbA — Questions to Be Answered. It's an unashamed bastardization of Jobs to Be Done, but focused on the question layer: at each moment in the journey, what is the person actually trying to figure out? Not what task they're performing — what question is active in their head? "Is this the right step?" "Do I need to do this now?" "What happens if I skip this?" Surfacing those questions — the ones people are running silently whether or not the interface acknowledges them — usually tells you more about where a flow is failing than any click data will.
What you almost always find in the audit itself is one of three things.
Something that was built for a reason that no longer exists — a constraint the org had two years ago that was solved at the infrastructure level, but the user-facing consequence never got cleaned up. It's still there. Nobody knows why anymore. But it's creating friction for every person who hits it.
Something that was never finished — a flow that got shipped at 80% because the deadline moved, with the intention of coming back, and nobody ever did. The org moved on. The gap stayed.
And something that's actually working — a pattern or decision that's holding up better than expected, that deserves to be extended rather than replaced. This one surprises people. Teams often come into a redesign assuming everything needs to change. The audit shows them what to protect.
Now — the audit-first pattern is not an argument for analysis paralysis. I've seen teams hide in research indefinitely, using "we need to understand the current state better" as a way to avoid the discomfort of making decisions. That's not what this is.
The audit has a scope and a deadline. It answers specific questions. It produces a document — not a living document, not an ongoing process, a document with a version number and a date — that says: here is what exists, here is what's working, here is what isn't, and here are the three to five things most worth addressing. Then the work starts.
The difference between audit-first and analysis paralysis is that the audit is in service of a decision. You know what question you're trying to answer before you start. When you've answered it, you stop.
The deeper reason this pattern matters has less to do with design quality and more to do with organizational credibility. When you walk into a room and propose a redesign without having done an audit, you're asking people to trust your instincts. Some will. Many won't — especially in orgs where design has historically overpromised and under-delivered, where the last redesign took eighteen months and shipped something that looked different but worked the same.
When you walk in with an audit, you're showing your work. You're saying: I understand what we have, I understand why it is the way it is, and I have a specific diagnosis — not a general direction, a diagnosis — of what needs to change. That's a different conversation. Decisions get made faster. The skeptics come along.
The audit is not just a design artifact. It's a trust-building instrument.
Most teams skip it because it feels like overhead. Like the real work is the redesign, and the audit is just postponing it.
It's not postponing it. It's making sure you're doing the right redesign.
"Being a UX Team of One" was a talk I gave often enough that I stopped counting. First time was 2012 — I was co-founding a company, running design and product with no budget and no team, and I had figured out enough to survive it. The talk was about research tactics, prioritization, how to protect your time, how to make the case for design when nobody in the room thinks they need it. It traveled. Conferences kept asking for it. I built workshops around it.
I've been thinking about what I would say if I gave it today.
The version with the grey stubble on my chin, talking to the version with all that hair on his head.
The practical advice I gave back then was mostly fine. Ruthless prioritization is still real. The research tactics still work. Learning to read a room and pitch design to skeptics — that skill compounds. I don't take any of it back.
But there's something I didn't say, because I didn't know it yet: being a team of one is a temporary condition, and what you do during that period determines whether you ever stop being one.
Not whether you hire people — though that's part of it. Whether you build the kind of credibility that lets you operate at a different altitude when the time comes. Whether the org starts to see design as infrastructure or keeps seeing it as decoration. That outcome is almost entirely determined by what you do when you're the only one in the room.
Here's what was incomplete.
I framed the talk around survival. How to stay relevant. How to keep your seat at the table. That's a reasonable frame when you're in it — when you're fielding twelve requests from eight stakeholders with no budget and a launch date that moved up two weeks. Survival thinking made sense.
What I didn't frame around was leverage. The question isn't just how do you get through the sprint — it's what are you building that outlasts the sprint. The team-of-one who spends two years surviving is in the same position at year two as they were at year one. The team-of-one who spends two years building something — a shared vocabulary, a measurement baseline, a documented process that the org starts to depend on — that person has changed their situation. They're not surviving anymore. They're operating.
The difference isn't talent. It's orientation.
The other thing I would say now — and this one's harder — is that the loneliness is real, and pretending it isn't makes it worse.
There's a particular kind of isolation that comes with being the only person in an organization who cares about a certain kind of quality. Not because your colleagues are bad at their jobs. Because their jobs are not your job. They're optimizing for different things. And the gap between what you believe the experience should be and what everyone else is willing to ship… that gap is yours to carry alone.
I didn't name that in the original talk. I was 28 and I thought naming it would sound like complaining. What I know now is that naming it is the only way to manage it. Because if you don't name it, you either internalize it as your failure or you externalize it as everyone else's. Neither is accurate. The gap is structural. The loneliness is the job. Knowing that doesn't make it disappear — but it stops it from being surprising.
Now — I want to be careful here, because this can slide into "suffering is noble" territory, which isn't the point. The point is calibration. If you go in expecting that design advocacy is a solved problem you'll crack in your first quarter, you'll be demoralized by month three. If you go in knowing that you're changing something structural — that the work is slow and often invisible and requires more patience than any job description will tell you — you'll pace yourself correctly. You'll still be there at year two when the things you built in year one start to matter.
The team-of-one role is a long game. Most of the people I've seen flame out of it were playing a short one.
The last thing I'd add — and this wasn't in the original talk at all: figure out early who your allies are, and invest in those relationships before you need them. Not networking. Not stakeholder management in the corporate-training-video sense. Actual investment — understanding what they're trying to do, making their work easier when you can, being the person who shows up prepared and follows through.
Because the moment always comes. The moment when you need someone to believe you enough to move resources, or change a timeline, or tell their team to build the thing you're saying they should build. And in that moment, you're not asking them to trust your design skills. You're asking them to trust you.
Those are different asks. Only one of them can be built in advance.
I don't know if I'd give this version of the talk. It's less actionable than the original. There are no frameworks, no tactics, no list of research methods you can use when you have no budget. It's mostly just things I wish I'd known going in.
But the people who would have benefited from the original talk have probably figured out the tactics by now. What they're still working out — what I was still working out fifteen years in — is everything else.
In 2013 I gave a talk called "Execute a Non-Reactionary UX Strategy." The premise was simple: most design teams spend their time responding to whatever lands in their queue, and that reactive posture makes it impossible to do anything that matters. The talk was about how to get ahead of the work instead of behind it — how to operate with intention when the org around you is operating with urgency.
I still believe all of it. But I'd add a chapter now that I didn't have then.
The original talk was about the individual designer or small team trying to carve out strategic space inside an organization that didn't naturally make room for it. The tactics were real: align with roadmaps early, establish design principles before the debates start, make the criteria for good decisions visible so you're not relitigating them every sprint. All of that holds.
What I didn't have a framework for yet was what happens when you've won that fight — when design has a seat at the table, when nobody questions whether UX should be involved, when the problem isn't reactivity but scale.
It turns out that scale has its own version of the reactivity problem. And it's harder to see.
Here's what it looks like. The team is big enough now that there are designers embedded across multiple product areas. Each of them is doing good work within their lane. The portfolio is producing. Nobody is running purely reactive. And yet… the sum of all this non-reactive work doesn't add up to a coherent experience. Each team is executing their strategy. Nobody is executing the strategy.
I saw this at Home Depot. Talented practitioners operating with real intentionality inside their domains — and a product experience that felt fragmented at the seams. Because the seams were nobody's domain. The handoffs between teams, the shared patterns that weren't quite shared, the decisions made independently that made sense locally and created confusion globally. That's not a reactivity problem. That's a coordination problem that looks like one.
The fix isn't adding more governance. More governance usually just slows down the individual teams without solving the coordination failure. The fix is establishing shared language and shared accountability at the portfolio level — someone whose job it is to see the whole thing and name what doesn't cohere. Not to control. To diagnose.
That's the chapter I'd add: non-reactionary strategy at scale requires a different posture than non-reactionary strategy at the individual level. When you're one person, the discipline is about protecting your time and your focus. When you're running a portfolio, the discipline is about maintaining coherence across people who are all doing their jobs well.
Those require different skills. The first is about conviction. The second is about systems thinking — the ability to hold a mental model of how dozens of decisions interact, and to notice when something that looks like a local win is actually a systemic problem.
Most design leaders get promoted for the first skill and then discover they need the second. There's not a clean training path for it. You either develop it by being in the room when portfolio-level decisions get made — or you develop it the hard way, by being responsible for the outcomes of decisions you didn't realize were connected until something broke.
Now — the original insight still holds. A reactive team is a team that has handed control of its priorities to whoever is loudest or most urgent. The antidote is still the same: establish criteria, align early, make your logic visible. That work never stops.
What I know now that I didn't know in 2013 is that you can do all of that and still find yourself reactive at the portfolio level. Still running behind the compounding consequences of independent good decisions. Still explaining to someone why the experience doesn't feel like one product when every team will show you their perfectly reasonable rationale for the choices they made.
Non-reactionary strategy isn't a phase you graduate into. It's a practice you maintain at every level — and the work changes shape as the scope grows.
The question I'd leave practitioners with now isn't the one I was asking in 2013. Back then the question was: how do you protect your strategic space?
Now it's: what are you doing to make sure the strategy you're executing at your level connects to the one above it and below it?
If you can't answer that, you're not reactive. But you might be isolated. And at scale, isolated strategy is almost as costly.
Most teams treat their process like an internal operating procedure — something that lives in a Confluence page nobody reads, gets explained to new hires in a thirty-minute onboarding call, and then drifts quietly away from reality as the actual work accumulates its own habits and workarounds.
That's a mistake. And it's a fixable one.
The teams I've seen do this well treat process the same way they treat product. They define it explicitly. They version it. They instrument it — meaning they actually check whether it's producing the outcomes it was designed to produce. And when it isn't, they update it the same way they'd update a feature that wasn't working.
This sounds obvious when you say it out loud. Most teams will nod and agree. And then they'll go back to a process that hasn't been intentionally revisited in eighteen months and has accumulated three conflicting variations, each of which a different team member is convinced is the "real" one.
Here's where the analogy breaks down — and why it matters. When a product is broken, you get signals. Users churn. Tickets pile up. Metrics drop. The feedback loop — even if it's slow — exists. When a process is broken, the signal is diffuse. Things feel slow. Friction is high. Handoffs get messy. Meetings multiply. None of those symptoms point clearly back to the process as the cause, so teams diagnose them as people problems or priority problems or communication problems… and they add more process on top of the broken process.
That's the failure mode. Not that the process was wrong to begin with. That nobody treated it as something worth diagnosing.
The version of this I've practiced — and pushed teams around me to practice — starts with a simple question: what is this process supposed to produce, and is it producing it?
Not "is the process being followed." Whether it's being followed is a compliance question, and compliance is the wrong frame. The right frame is outcomes. A process that is being perfectly followed and producing bad outcomes is a broken process. A process that is being loosely interpreted but producing good outcomes is a process with a gap between the documentation and the real behavior — which is useful information, because it means the actual process that's working is undocumented and therefore invisible to anyone new.
Both are worth fixing. Neither gets fixed if the only question being asked is "are we doing the thing we said we'd do."
Now — I want to be specific about what "versioning a process" actually means in practice, because it can sound like unnecessary ceremony.
It means this: when you change how you work in a meaningful way — when a step gets dropped, or a new artifact gets added, or the sequence changes — you write it down as a change, not just an update. You note what changed, when, and why. You give the previous version a name so you can refer to it.
That's it. It doesn't require a tool. It requires the discipline of treating the change as a decision that was made — not something that drifted.
The payoff is disproportionate. When someone new joins, they inherit a documented process with visible rationale — not just the current state but the reasoning behind it. When something breaks, you can trace the change that preceded the breakage. When a debate surfaces about "why do we do it this way," you have an answer that doesn't depend on who has the longest institutional memory.
The deeper argument is about organizational learning. Teams that treat their process as a living artifact are teams that actually learn from what they build. Teams that treat it as background infrastructure are teams that repeat the same mistakes across iterations — because the lessons from iteration one were never captured anywhere that iteration two could find them.
Process is how a team thinks out loud about how it works. When it's invisible and undocumented, that thinking disappears the moment the people who were in the room leave. When it's versioned and maintained, it compounds.
A good process isn't permanent. It's the current best understanding of how this team does this work — with the evidence showing what led to that understanding.
That's what a product is too.
I've been building with AI as a genuine collaborator for well over a year now — closer to two, though the last eighteen months are where it got serious. Not as a search engine, not as an autocomplete, but as something closer to a working partner with context and a set of operating agreements that govern how we work together. It's changed how I work in ways I expected and ways I didn't.
The unexpected one is this: AI collaboration has forced me to document things I didn't know were undocumented.
Here's what I mean. When you work with another person over time, enormous amounts of context transfer implicitly. They were in the room when the decision got made. They remember the conversation where the direction shifted. They have a feel for what you care about that's been calibrated through dozens of interactions, most of which neither of you could reconstruct if asked. That implicit transfer is one of the things that makes long-term working relationships so valuable — and one of the things that makes them so hard to replace.
AI doesn't have that. Or rather — it has the version of it that you've explicitly given it. The context that lives in your head, undocumented, doesn't exist for the AI unless you've written it down somewhere it can read. So if you want consistent, useful collaboration, you have to surface the things that were previously just known.
The discipline this requires is not a limitation. It's a forcing function.
In practice, what this looks like for me is a set of files — working agreements, decision logs, operating protocols — that capture the things I've always known about how I work but never bothered to write down because nobody had ever needed me to. The principles behind how I make decisions on a project. The standing rules that would otherwise live only in my head. The hierarchy of files that tells a collaborator where to look for what.
Writing those things down did something I didn't anticipate: it made them legible to me. When you have to be specific enough for a system to act on, "I run projects a certain way" stops being useful. The act of getting that specific has sharpened my own understanding of what I'm actually doing — and surfaced some inconsistencies I'd been carrying for years without noticing.
The same thing happened with process. I've run projects a certain way for a long time. Teams adapted through osmosis. New people learned by watching. When I started working with AI on project management — sequencing tasks, tracking decisions, managing handoffs between sessions — I had to write down the operating model explicitly. Not because the AI couldn't function without it, but because a working agreement that lives only in my head produces a collaborator that keeps asking me to re-explain things I've already explained… and that's everyone's least favorite version of collaboration.
So I wrote it down. All of it. How sessions open, how decisions get logged, what the hierarchy of files means, what happens at close. And in writing it down, I found three things I thought were one thing, two redundancies I'd been carrying for years, and one rule I'd been applying inconsistently without realizing it.
Now — I'm not arguing that everyone should build elaborate working agreements with their AI tools. Most people don't need that level of structure, and building it prematurely is its own kind of overhead.
What I am arguing is that the friction you feel when AI collaboration isn't working — when the output keeps missing something, when you keep re-explaining the same context, when it feels like the AI doesn't "get it" — that friction is usually pointing at something undocumented. Something that lives in your head as a felt sense that you've never had to articulate, because the humans around you absorbed it without being asked.
That's not an AI problem. That's a documentation problem that AI makes visible.
The most useful thing AI collaboration has done for me isn't the output. It's the audit. Every time I've had to explain how I work in enough detail for an AI to replicate it, I've come away with a sharper model of how I work. The things I thought were instinct have turned out to be patterns. The patterns have turned out to be principles. And the principles, written down, are more useful than the instinct ever was — because they're transferable.
That's the part nobody talks about. Not what AI produces. What it forces you to figure out about yourself in order to produce it.