You’re staring at a navigation bar with seven top-level items. The marketing team wants to add an eighth: "Resources." Simple, right? But a cold knot forms in your stomach.
Where does it go? Does it sit next to "Pricing"? Or does it belong under "Support"? And come to think of it, is "Resources" even the word a real human would look for?
This isn’t about adding a link. It's about redrawing the mental map your users rely on to navigate their world inside your product. You're not just organizing content; you're either clarifying the path or laying a trap.
The Gap Between Your Org Chart and Their Brain
Too often, a product’s navigation is a perfect mirror of the company’s internal structure. Engineering, Marketing, Sales: each department gets a section. This creates a quiet, constant friction for users whose minds don’t work like an org chart. What your team calls "Integrations," a customer might call "Connections" or "Apps."
Information architecture isn't a filing system. It's a translation layer.
Card sorting UX is the practice of decoding your user's mental language so you can build that translation layer correctly. It's a beautifully simple method: you hand users a messy pile of your product's concepts and ask a fundamental question.
How would you organize this?
Building on a Foundation of Evidence
This isn't a new idea, but its role in a world of complex software is more critical than ever. The practice has deep roots in psychological research, formalized for UX in the early 2000s. It’s grounded in the idea that to build intuitive systems, you must first understand the user's innate sense of order. Understanding core user experience design principles is the first step, but card sorting makes those principles actionable.
A friend at a B2B SaaS company had a lightbulb moment last week. They ran a card sort and discovered that their most important feature, "Compliance Reporting," was consistently sorted by users under a "Security" category.
The product team had always thought of it as a financial tool. Their users, however, saw it as a feature for managing risk.
That single insight changed their entire onboarding flow.
This is what I mean: card sorting pulls you away from assumption-based design and anchors you in evidence. It gives you a map of the user’s mind before you draw the map of your product. This is a foundational step in understanding the entire customer experience, something we dig into in our guide on what is a user journey map.
The Three Lenses of Card Sorting: Which Question Are You Asking?
Imagine you’re asked to organize a library. The books are in towering, chaotic piles and the shelves are completely bare. How do you start?
Do you provide pre-labeled shelves like ‘Fiction’ and ‘History’ and ask people to place the books? Or do you hand them a pile of books and empty shelves, asking them to invent the categories from scratch? Maybe you offer a few established shelves but leave others blank for their ideas.
Each approach is a different way of understanding how people organize their world. In card sorting UX, these are distinct strategies. Choosing the right one isn’t about which is best. It’s about which lens will give you the clearest picture of the problem you’re trying to solve.
Open Sorting: The Lens of Discovery
The open card sort is for pure discovery. You give users a stack of cards, each a concept in your product, and ask them to group them in a way that makes sense. The critical step? You then ask them to name those groups.
You're not testing your own assumptions. You’re uncovering your users’ actual mental models.
It’s the right call when you’re:
Building something new: You have no structure, so you must understand your user’s logic from the ground up.
Doing a major redesign: The old information architecture is clearly broken, and you need a fresh perspective.
Entering a new market: A different audience might have a completely different mental map.
The output can be messy, but that’s where you find the gold. It reveals the language your users actually use, not the jargon your team has grown comfortable with. A clear user flow, like these user flows for a Shopify Checkout Setup, often starts with the raw vocabulary discovered in an open sort.

Closed Sorting: The Lens of Validation
In a closed card sort, you provide the categories. The shelves are already labeled. The user's job is to put the conceptual "books" on the right shelf.
This method isn’t for discovery. It’s for validation.
You use a closed card sort when you’re confident in your categories but need to confirm that users understand them the way you do. It answers the question: “Given our structure, can people find where things belong?”
A friend at a Series C company did this recently. They had established navigation like 'Integrations', 'Security', and 'Reporting'. Before launching ten new features, they ran a closed card sort to see if users would put the new concepts under the expected headings. The results gave them the confidence to move forward.
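If you want to put numbers on that kind of validation, the scoring is simple to sketch. The cards, categories, data, and 80% threshold below are all hypothetical, but the shape of the analysis holds: for each card, tally how often participants placed it under the category your team expected.

```python
from collections import Counter

# Hypothetical closed-sort data: for each card, the category each of
# five participants chose. Names and numbers are illustrative only.
placements = {
    "SSO Setup":        ["Security", "Security", "Integrations", "Security", "Security"],
    "Audit Log Export": ["Reporting", "Security", "Security", "Security", "Reporting"],
    "Webhook Config":   ["Integrations"] * 5,
}

# The category your team expected each card to land in.
expected = {
    "SSO Setup": "Security",
    "Audit Log Export": "Reporting",
    "Webhook Config": "Integrations",
}

for card, choices in placements.items():
    agreement = choices.count(expected[card]) / len(choices)
    modal = Counter(choices).most_common(1)[0][0]  # most popular placement
    flag = "OK" if agreement >= 0.8 else "REVIEW"  # arbitrary cutoff
    print(f"{card}: {agreement:.0%} matched '{expected[card]}' "
          f"(most popular: {modal}) -> {flag}")
```

A card like the hypothetical "Audit Log Export" above, where most participants disagree with your expected heading, is exactly the kind of finding that justifies a rename or a restructure before launch.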
Hybrid Sorting: The Lens of Evolution
The hybrid card sort is the pragmatic middle ground. You provide some predefined categories but also give users the freedom to create new ones. It's a tool for evolving a system without a complete teardown.
This approach accepts a simple truth: your current structure might be mostly right, but it probably has critical gaps.
Use a hybrid sort to refine an existing information architecture. It lets you test the known parts of your system while giving users a chance to show you what’s missing. Maybe they create a new category that elegantly consolidates a few of your ideas, like combining items from ‘Profile’ and ‘Billing’ into an ‘Account Management’ group. This is how architecture evolves.
Designing a Study That Delivers Signal, Not Noise
A poorly designed card sort is worse than no study at all. It doesn’t just waste time; it gives you a false sense of confidence, sending you down a product path paved with bad assumptions.
This is your guide to designing a session that delivers a clear signal.
The gist is this: your cards must reflect real concepts, and your participants must reflect real users. I once saw a team run a card sort for a complex developer tool using marketing managers as participants. The results were a masterclass in confusion because the actual users' mental model was completely absent.
Selecting the Right Cards
Your first decision is what goes on the cards. Each card represents a piece of your product's soul. Your job is to make these concepts unambiguous.
Avoid Jargon: What your team calls a "back-end data ingestion module," your user might think of as "Import." Use their language, not your internal slang.
Maintain Consistent Granularity: Don't mix high-level concepts like "Account Settings" with hyper-specific features like "Change Avatar." All cards should exist at a similar conceptual level.
Aim for the Sweet Spot: A common recommendation is to stick to 30–50 cards. Any fewer and you won't see meaningful patterns. Any more, and you risk overwhelming participants.
Defining the Scope and Participants
Who you ask is just as important as what you ask them.
Your participants must be representative of your actual user base. If you have multiple distinct user personas, you might even consider running separate studies. As you plan your recruitment, diligent primary customer research isn't a nice-to-have; it's essential.
Getting the right people in the room is the difference between a study that validates a direction and one that sends you on a wild goose chase.
And people are surprisingly willing to participate. One study found an 89.8% completion rate, with an average completion time of just over 21 minutes, a reasonable ask for busy users. You can dig into the numbers in this detailed research from UXPA Journal.
Choosing Your Arena: Remote vs. In-Person
Finally, where will the study happen? Will you moderate it in-person, or run it unmoderated using a digital tool?
In-person sessions are gold for qualitative insights. You get to hear the "why" behind the groupings. The pauses, the debates, the "aha" moments are data points you will never see in a spreadsheet.
Remote sessions, on the other hand, give you scale and efficiency. They're often cheaper and let you reach a broader audience. Tools like OptimalSort or UXtweak can quickly gather quantitative data from dozens of participants, making it much easier to spot strong patterns. The choice isn't about which is "better," but what you need to learn.
From Clustered Cards to Coherent Architecture
The study is done. You're now staring at a visualization that looks like a tangled spiderweb.
What now?
Think of yourself as a detective. A single clue means little. But when you start seeing the same pattern over and over, a story emerges. Your job isn't to invent that story; it's to uncover the one the evidence is telling you.
Decoding the User’s Blueprint
Most card sorting tools give you visualizations like dendrograms and similarity matrices. Their job is simple: to show you which cards your participants consistently grouped together.
A similarity matrix is a grid showing the percentage of people who paired any two cards. The darker the intersection, the stronger the connection in your users' minds.
A dendrogram is a tree-like diagram that visualizes how those pairs clump together into larger clusters.
When you see cards like ‘Billing History,’ ‘Invoices,’ and ‘Subscription Plan’ in a tight cluster, you've found a major clue. Users see these things as a single concept. Your job is to give that concept a name. This is how you discover user-validated labels. Instead of three separate links, you now have one clear navigation item: ‘Account & Billing.’
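To make the mechanics concrete, here is a minimal sketch of how a similarity matrix is computed. The cards and participant groupings are invented for illustration; tools like OptimalSort do this for you at scale, but the underlying arithmetic is just pairwise co-occurrence.

```python
from itertools import combinations

# Hypothetical open-sort results: each participant's sort is a list of
# groups, each group a set of card names. Data is illustrative only.
sorts = [
    [{"Billing History", "Invoices", "Subscription Plan"}, {"API Keys", "Webhooks"}],
    [{"Billing History", "Invoices"}, {"Subscription Plan", "API Keys", "Webhooks"}],
    [{"Billing History", "Invoices", "Subscription Plan"}, {"API Keys"}, {"Webhooks"}],
]

cards = sorted({c for s in sorts for g in s for c in g})

def similarity(a, b):
    """Fraction of participants who placed cards a and b in the same group."""
    together = sum(any(a in g and b in g for g in s) for s in sorts)
    return together / len(sorts)

# The strongest pairs are the "dark cells" of the similarity matrix:
# candidates for a single navigation item.
pairs = sorted(combinations(cards, 2), key=lambda p: -similarity(*p))
for a, b in pairs[:3]:
    print(f"{a} + {b}: {similarity(a, b):.0%}")
```

In this made-up data, ‘Billing History’ and ‘Invoices’ co-occur for every participant, which is the quantitative signal behind collapsing them into one label.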
This process makes it clear: good analysis is only possible if the study itself was built on a solid foundation.
The Zoom-Out Moment
This is more than a design tweak; it's a lever for the business. A logical, user-validated navigation system cuts down on support tickets, boosts conversion rates, and helps people discover features that were previously buried. Why does this matter at scale? Because every moment a user spends searching is a moment they aren't getting value, which is a direct threat to retention. It's a death by a thousand tiny papercuts of cognitive load.
Some research even suggests that 75% of confusion around information organization can be caught and fixed through card sorting before a single line of code gets written.
From Insights to Information Architecture
Once you’ve identified these core, user-defined clusters, you can sketch out a new information architecture. This structure becomes the backbone for more detailed design work. Understanding how these categories connect helps you build much clearer navigation, a topic we dive into in our guide to user flow examples.
The output should be a living blueprint.
This blueprint gives you the confidence to propose changes backed by direct evidence. It turns subjective arguments about navigation into objective, data-driven conversations. The question is no longer, "What do we think it should be called?"
It becomes, "What did our users show us it should be called?"
That shift is fundamental.
Turning a Treasure Map into a Real Road
A card sort result isn’t a blueprint. It's a treasure map. It points to where the gold is buried, but you still have to draw the roads. The real work begins when you translate those clustered cards into tangible product changes.
How do you take a spreadsheet of groupings and turn it into a navigation bar that just makes sense? This is where research graduates to reality.
The insights from your card sort provide the architectural DNA for your product. Instead of guessing, you can use these validated categories to make design decisions with confidence. Those user-defined groups become direct inputs for structuring new user flows.
For instance, mapping all failure states for a feature like a Dropbox file upload becomes much clearer. With a validated information architecture, you know where users expect to find error messages and recovery options.
From Architecture to Artifacts
Once you have a proposed structure, make it real. Build a quick, clickable prototype of the new navigation to see if it holds up. This is a critical validation loop.
Does that new ‘Account & Billing’ category actually help people find invoices faster? A simple prototype answers that question before anyone writes code. The path to a better Shopify Checkout Setup can be tested this way, ensuring each step aligns with the user's mental map.
Your card sort arms you with evidence for change, backed by the collective voice of your users. These findings can then be passed along to engineers with much greater clarity, a process we detail in our developer handoff playbook.
A card sort doesn’t just give you permission to change your navigation. It gives you a mandate.
A Continuous Conversation
A single card sort is a snapshot in time. The insights you gather today are the starting point for an ongoing conversation with your users.
As you implement changes, layer in other forms of feedback. For example, learning how to effectively manage course feedback surveys can keep insights flowing long after your initial study. The core principles of listening, clustering, and acting never stop.
Here’s the takeaway: use your card sort results as the first draft of your product's new logic. Build a simple prototype based on the strongest clusters. Then, get it in front of users. Watch them click. Turn the static insights into dynamic, testable artifacts that move your product forward.
Your Next Step: From Guessing to Knowing
You don't need a massive budget or a dedicated research team to get started. Your next step isn’t launching a fifty-person study. It’s something much smaller, something you can do this week.
Your goal is to create a single, achievable spark of insight.
Here’s the task. Over the next seven days, identify the 15–20 most-used features or pages in your product. Write each one on a virtual sticky note. No jargon.
From Friction to Focus
Next, find five colleagues from different departments and ask them to group those stickies. Grab someone from sales, one from engineering, one from support. Don't guide them. Just watch.
You will see patterns. You will also see profound disagreements.
That friction is your starting point.
It’s the visible gap between your company’s internal logic and a user’s mind. An engineer might group features by technology stack; a sales rep might group them by customer pain point. Which one mirrors your user?
This small exercise illuminates the hidden fault lines in your product’s logic. The disagreements aren’t a problem; they are the most valuable data you will collect.
This simple internal test gives you both the preliminary data and the conviction you need to make the case for a formal, user-facing card sorting study. It helps you articulate exactly why this work matters.
In short, it’s the first step from guessing to knowing. It moves the conversation from, "I think users are confused," to "Here is clear evidence that even we don't agree on how our own product is organized." And that is a powerful place to start.
A Few Lingering Questions
We've walked through the strategy and the synthesis of a card sort. But a few practical questions almost always pop up when teams are ready to jump from theory to practice.
How Many Participants Do I Really Need?
You can uncover foundational issues with a surprisingly small group. Jakob Nielsen's famous research showed you can find about 85% of usability problems with just 5 users. While that's for general usability testing, the principle holds: you don't need a massive sample to get powerful insights.
For card sorting, the number creeps up a bit. The sweet spot for a quantitative study is around 15–20 participants. That gives you enough data to see reliable clusters without taking weeks to recruit. If you're running more qualitative sorts where you're talking to people, even 5–8 users can give you a profound look into how they think.
What’s the Difference Between Card Sorting and Tree Testing?
They're two sides of the same coin. One is for building the house; the other is for seeing if people can find the bathroom.
Card sorting is the architectural blueprint. It helps you decide where to put the walls and doors in the first place. You ask users to group topics to figure out what your navigation structure should be.
Tree testing is the wayfinding test. It checks if people can navigate the building you've designed. You give them a task, like ‘Find where to update your password,’ and see if they can successfully click through your proposed site map.
So, use card sorting first to draw the map. Then, use tree testing to see if people can read it.
What if Participants Create Wildly Different Groups?
First, don't panic. This isn't a sign that your study failed.
It's a finding.
When groupings are all over the place, it's a powerful signal that your users don't share a clear mental model for your content. It could mean your concepts are ambiguous, your scope is too wide, or you're serving different personas with different needs. This is your cue to zoom in. Look for the moments where users hesitated. Inconsistent results often mean you need to simplify your offerings, clarify your terminology, or consider separate navigational paths for different users, like the ones we explored in this UX persona simulation.
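One way to turn that scatter into something actionable is to score each card by its strongest pairing: a card whose best partner is weak has no consistent home in anyone's mental model. This is a rough sketch with invented data; the card names and the 60% cutoff are arbitrary illustrations, not a standard.

```python
# Hypothetical open-sort data: each participant's sort is a list of
# groups. All names and numbers are illustrative only.
sorts = [
    [{"Invoices", "Billing History"}, {"Compliance Reporting", "Audit Log"}],
    [{"Invoices", "Compliance Reporting"}, {"Billing History", "Audit Log"}],
    [{"Invoices", "Billing History", "Audit Log"}, {"Compliance Reporting"}],
]

cards = sorted({c for s in sorts for g in s for c in g})

def together(a, b):
    """Fraction of participants who grouped cards a and b together."""
    return sum(any(a in g and b in g for g in s) for s in sorts) / len(sorts)

# Flag cards whose *strongest* pairing is still weak: candidates for
# renaming, splitting, or persona-specific navigation.
for card in cards:
    best = max(together(card, other) for other in cards if other != card)
    status = "stable" if best >= 0.6 else "ambiguous"  # arbitrary cutoff
    print(f"{card}: strongest pairing {best:.0%} -> {status}")
```

In this toy example, ‘Compliance Reporting’ never pairs consistently with anything, which mirrors the kind of card that warrants a follow-up conversation rather than a guess.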
Card sorting gives you the map, but turning that map into a living, breathing product is the real work. With Figr, you can take the validated IA from your research and instantly generate user flows, prototypes, and edge cases that are grounded in how your users actually think. Stop guessing and start designing with validated logic. Explore how Figr accelerates your workflow.
