
What Is Interaction Design? A Practical Guide


The product review looked solid until the user tried to use it.

I watched a PM walk a customer through a prototype on a shared screen. A button pulsed for attention, but nobody could explain what made it urgent. The customer hovered, clicked away, came back, and asked the question that exposes weak product thinking fast: “What happens if I press this?” The confirmation message answered with generic copy. The room went quiet.

That quiet moment matters. It marks the gap between what the team intended and what the product communicated. Teams pay for that gap through abandoned flows, support volume, slower onboarding, and lost trust.

Interaction design defines how a product responds when a person acts. It covers the feedback after a click, the timing of a loading state, the behavior of a form field, the language inside an error, and the confidence a user feels from one step to the next. For product managers, that makes interaction design a product competency, not a design side quest.

Teams that miss this usually make the same mistake. They treat behavior as polish to discuss after roadmap decisions are done. In practice, behavior is part of the decision. If save works one way on one screen and another way elsewhere, users do not experience that as a small UX issue. They experience it as uncertainty about whether the product can be trusted.

That is also why AI belongs in the conversation. AI can help teams draft edge cases, generate state-based copy, stress-test flows, and document interaction logic faster. It cannot decide what kind of feedback earns user trust, where friction should stay for safety, or how much ambiguity a workflow can tolerate. Product judgment still carries the hard part.

PMs who understand interaction design write better requirements, ask sharper questions in reviews, and catch failure points before engineering builds around them. They also collaborate better with design on the fundamentals behind strong digital products, including essential UX design principles.

The rest of this guide focuses on resources and working methods that help product teams turn interaction design from theory into repeatable product practice.

1. Interaction Design Foundation

When teams ask what interaction design is, they usually want a clean definition first. That’s why the Interaction Design Foundation is a useful starting point. It gives cross-functional teams a shared vocabulary before they start arguing about wireframes, prototypes, and edge cases.

The part many PMs miss is that definitions matter operationally. If design says “feedback,” engineering hears “toast,” QA hears “system response,” and product hears “it should feel clearer,” you don’t have alignment. You have parallel interpretations. IxDF helps teams anchor the work in user behavior instead of subjective taste.

Why teams should start here

Interaction design sits inside the broader UX discipline, but it has its own center of gravity. It focuses on how systems respond to user actions. That includes micro-interactions, state changes, timing, copy, and the logic of flows. If visual design is the surface and UX strategy is the larger frame, interaction design is the choreography.

For PMs, this becomes practical fast. The best product requirements documents don’t just describe the feature. They describe how the feature responds. A field validates on blur or submit. A modal traps focus or doesn’t. A destructive action allows undo or asks for confirmation. Those are interaction design principles, not implementation trivia.

Practical rule: If a user story names an action but not the system response, the interaction design is still unfinished.

A good team exercise is simple:

  • Define the trigger: What exactly does the user do?

  • Define the response: What does the system show, change, save, block, or confirm?

  • Define the recovery path: If the user makes a mistake, how do they recover?

That framing keeps conversations grounded.
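If the team wants to make that exercise enforceable, the three parts can be encoded as a tiny structured spec. This is an illustrative sketch, not any real tool’s schema; the names `InteractionSpec` and `isComplete` are invented for the example.

```typescript
// Hypothetical schema for the trigger/response/recovery exercise.
// A user story is ready for handoff only when all three parts exist.

interface InteractionSpec {
  trigger: string;  // what exactly the user does
  response: string; // what the system shows, changes, saves, blocks, or confirms
  recovery: string; // how the user recovers from a mistake
}

function isComplete(spec: Partial<InteractionSpec>): boolean {
  return Boolean(spec.trigger && spec.response && spec.recovery);
}

// A destructive action with the recovery path still unspecified:
const deleteAccount: Partial<InteractionSpec> = {
  trigger: "User clicks 'Delete account' and confirms in the modal",
  response: "Account is soft-deleted; banner confirms a 30-day restore window",
  // recovery is missing, so this story is not finished
};

console.log(isComplete(deleteAccount)); // false
```

Checking completeness this way turns the practical rule above (an action named without a system response means unfinished design) into something a review checklist or even a linter can flag.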

Where this helps in product work

IxDF is especially useful when product, design, and QA need a common lens for review. I’ve seen teams reduce vague design feedback just by switching from “this feels off” to “the feedback is delayed” or “the signifier is weak.” That’s a better meeting.

It also pairs well with a lightweight principles library inside your org. If your team needs a practical baseline before templates and prototypes, start with essential UX design principles. Then use those principles when reviewing product decisions, not just mockups.

2. Don Norman’s The Design of Everyday Things

The pattern is familiar. A team reviews a polished prototype, everyone agrees it looks clean, and the first user still hesitates on the primary action. They are not reacting to color or copy. They are trying to answer a simpler question: what happens if I click this?

That is why The Design of Everyday Things keeps earning its place on product teams. Norman gave the field a practical way to explain confusion before it turns into support tickets, drop-off, or expensive rework. His terms (affordances, signifiers, feedback, constraints, and mental models) are not academic decoration. They are review criteria.

For PMs, that shift matters. Interaction design is often treated as a design specialty, but the underlying judgment belongs in product work too. Every feature brief makes assumptions about what users will notice, predict, and trust. Norman helps teams test those assumptions early, including when AI tools generate flows that look convincing faster than the team can critique them.

The ideas that keep paying off

One of Norman’s strongest contributions is the idea that users have to cross two gaps. First, they decide what they want to do. Then they interpret the system’s response. If either step is unclear, the interface asks users to guess.

I use five checks in product reviews:

  • Affordance: Does the object or control support the intended action?

  • Signifier: Is there a clear cue that shows where and how to act?

  • Feedback: Does the system respond quickly and unambiguously?

  • Constraint: Does the flow reduce the chance of costly mistakes?

  • Mental model: Does the product behave the way a reasonable user expects?

These checks sound simple. Under deadline pressure, they catch the problems that polished mockups hide.

How PMs can use the book in real workflows

A bulk-edit tool I reviewed had all the right functionality. The failure was interaction logic. Selected rows barely changed state, the save pattern shifted between screens, and the risk of changing the wrong records felt high even in staging. Engineering had built what the spec asked for. The spec had still missed the user’s moment of doubt.

Norman is useful because he keeps the team focused on what users infer, not what the team intended. That is a strong habit for any PM. It is even more useful when AI speeds up prototyping, because generated concepts often produce plausible screens without a coherent interaction model underneath.

A practical way to use the book is to add a short Norman review to your product process. Before a design review or handoff, ask three questions: What tells the user where to act? What tells them what just happened? What prevents the worst mistake here? If the answers are vague, the flow is not ready.

For teams that want a lightweight review aid, pair Norman’s concepts with Figr's guide to design heuristics. It gives PMs and designers a shared checklist for judging whether a flow is understandable, recoverable, and predictable before usability issues show up in the wild.

Good interaction design feels obvious to the user because someone did the hard thinking earlier.

3. Nielsen Norman Group

A design review for a checkout flow can go off the rails fast. One person wants fewer steps. Another wants more reassurance. Engineering asks what is broken. Without a shared standard, the loudest opinion usually wins.

Nielsen Norman Group is useful because it gives product teams a way to judge interaction decisions on observable behavior. For PMs, that changes the job. The conversation shifts from taste to evidence, from "I like this pattern" to "users are likely to miss this state change and repeat the action."

That distinction matters in roadmap work, not just in design critiques. UX design covers the broader experience across the product. Interaction design deals with the cause-and-effect layer inside a flow: what the user does, how the system responds, and whether that response is clear enough to keep the task moving.

Why PMs should use NN/g as an operating standard

A lot of teams now have access to research, prototypes, and AI-generated concepts. The bottleneck is usually judgment. Screens look polished long before the interaction logic is sound.

NN/g-style thinking helps teams define quality in terms that survive handoff. Can users predict the result of an action? Do they get feedback at the right moment? Can they recover without support if they make the wrong choice? Those are product questions as much as design questions.

I have seen this reduce churn in reviews almost immediately. A PM comment like "the save state is ambiguous after submit" is useful. A stronger comment goes one step further and ties the issue to user risk and implementation intent.

  • Name the friction: "The status change is easy to miss after submit."

  • Name the likely outcome: "Users may click again because the system does not confirm completion clearly."

  • Name the fix direction: "Show immediate state feedback and make the post-action state persistent."

That format works well with AI-assisted prototyping too. Generated flows often look credible on first pass, but they regularly miss edge states, recovery paths, and feedback timing. Heuristics give PMs a fast filter before weak interaction logic reaches engineering.

A review habit that scales

One practice worth keeping is simple. Require every critical review comment to point to a principle, pattern, or known usability risk.

That lowers politics and raises signal.

For a lightweight reference, keep design heuristics nearby during reviews. It gives PMs and designers a shared way to assess interaction choices without turning every critique into a debate about personal preference. If the team is also standardizing behaviors across products, document those decisions alongside design system tokens and components so the interaction rules live next to the UI building blocks.

One more practical point. Good interaction design is not finished when the prototype looks right. It is finished when the shipped behavior stays right across releases. That is why mature teams pair heuristic reviews with QA practices such as visual regression testing software, especially for flows where small state changes carry real user risk.

Emotional response belongs in this discussion too. Confusing confirmation states, vague warnings, and dead-end forms create hesitation long before a user files a bug. That is why emotional design in product UI belongs in the same working set as heuristics and usability reviews.

4. Figma and design systems in practice

A PM approves a polished flow in Figma on Tuesday. Engineering ships it two sprints later. Suddenly the loading state is different on mobile, the error message appears in a new position, and one destructive action closes with Escape while another ignores it. Users read that as carelessness. Teams feel it as drag.

Figma exposes that gap fast. A component library can make screens look aligned while behavior still varies from flow to flow. Interaction design becomes operational when the team defines what each component does, how it fails, and which state changes are acceptable across products and platforms.

That matters to product managers because these decisions shape delivery speed as much as user experience. If validation timing, confirmation patterns, and keyboard behavior are unclear, every squad reopens the same debate. The cost is not only inconsistency. It is slower planning, longer QA cycles, and more edge-case bugs entering release.

The system behind the screens

Strong design systems carry behavioral rules, not just visual tokens. Teams need an opinion on questions such as when inline validation appears, whether a modal traps focus, what happens after retry, and how a disabled action explains itself. Once those rules are documented in components, reviews get faster and implementation gets less interpretive.

I usually look for three layers:

  • State coverage: Default, hover, focus, active, loading, success, error, disabled

  • Behavior notes: Click, tap, blur, Escape, back, retry, cancel, timeout

  • Platform constraints: Keyboard use, touch targets, reduced motion, slower connections

That structure often reveals product debt before analytics does. Three versions of a modal across settings, billing, and onboarding rarely come from deliberate strategy. They usually come from local decisions made under deadline pressure.
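One way to make the state-coverage layer concrete is to model component states as a discriminated union, so an unhandled state becomes a compiler error rather than a handoff surprise. A minimal sketch, assuming a TypeScript codebase; the `ButtonState` and `label` names are illustrative, not part of any real design system.

```typescript
// Each state a save button can be in, with required context where the
// design system demands it: errors must carry recoverable copy, and a
// disabled action must explain itself.
type ButtonState =
  | { kind: "default" }
  | { kind: "hover" }
  | { kind: "focus" }
  | { kind: "loading" }
  | { kind: "success" }
  | { kind: "error"; message: string }
  | { kind: "disabled"; reason: string };

// Exhaustive switch: adding a new state without handling it fails to compile.
function label(state: ButtonState): string {
  switch (state.kind) {
    case "default":
    case "hover":
    case "focus":
      return "Save";
    case "loading":
      return "Saving…";
    case "success":
      return "Saved";
    case "error":
      return `Retry: ${state.message}`;
    case "disabled":
      return state.reason;
  }
}
```

The payoff is operational: when a team later adds a timeout state, every component that ignores it surfaces immediately instead of shipping with silent gaps.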

For PMs, Figma works best as a decision log with a visual interface. Tie a component to the rule, the rationale, and the known exception. If the team is building card-heavy interfaces, a focused guide to card UI design can help sharpen decisions around hierarchy, action density, and repeatable states inside a system.

Here’s a useful companion when your team needs to formalize that structure: design system tokens and components.

A quick walkthrough can help if your team is tightening execution inside Figma.

Where AI can help, and where it can’t

AI speeds up exploration. It does not remove the need for product judgment.

Figr applies interaction design principles automatically. When generating prototypes, it maps micro-interactions, state transitions, and feedback patterns from 200k+ real-world screens. That gives PMs and designers something reviewable early, which is useful when a team needs to compare options before spending engineering time.

You can see that in examples like the Gmail AI draft interaction and the X.com soft mute interaction.

The hard part starts after generation. Does the pattern match your product logic, trust model, and error tolerance? A plausible prototype can still create support burden if it confirms the wrong action, hides system status, or breaks expected keyboard behavior.

Shipped behavior needs the same discipline as designed behavior. If your team wants consistency after handoff, visual regression testing software belongs in the same operating model as Figma libraries and design system reviews.

5. Interaction design patterns libraries

A PM reviews a checkout redesign on Thursday afternoon. The new flow looks polished in the prototype. By Monday, support is logging tickets because coupon entry moved, the back button drops cart state, and the loading state after payment leaves people wondering whether they were charged.

That failure usually starts earlier than teams think. It starts when common interactions get treated as fresh creative problems instead of product decisions with known failure modes.

Pattern libraries help teams make those decisions faster and with less rework. For product managers, they are not a designer’s side asset. They are operating tools for reducing ambiguity in high-frequency flows, especially when AI makes it cheap to produce five plausible options before lunch. Speed creates volume. Pattern libraries create standards for choosing.

The value shows up most clearly in repeat interactions such as:

  • Forms: Validation timing, inline errors, password visibility, multi-step progression

  • Navigation: Tabs, breadcrumbs, back behavior, overflow menus

  • Feedback: Success states, pending states, retries, destructive confirmations

  • Discovery: Search suggestions, filters, sorting, result empty states

Good libraries do more than collect screenshots. They capture the decision rule behind the pattern. When should an error appear? What deserves a blocking confirmation? Which actions need optimistic updates, and which need explicit system acknowledgment? That level of documentation saves teams from re-litigating the same interaction in every sprint.
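Those decision rules can live as data next to the patterns themselves. The sketch below is a hypothetical pattern-library entry for validation timing; none of these names come from a real library, and the rationale field is the point: it keeps the "why" attached to the rule so it survives team turnover.

```typescript
// Hypothetical pattern-library entries: when inline validation fires,
// and why, documented as reviewable data rather than tribal knowledge.

type ValidationTiming = "onBlur" | "onSubmit" | "onChangeAfterFirstError";

interface FieldRule {
  field: string;
  timing: ValidationTiming;
  rationale: string;
}

const validationRules: FieldRule[] = [
  {
    field: "email",
    timing: "onBlur",
    rationale: "Format errors are cheap to show once the user leaves the field",
  },
  {
    field: "password",
    timing: "onChangeAfterFirstError",
    rationale: "Live feedback only after a first failure avoids premature nagging",
  },
  {
    field: "couponCode",
    timing: "onSubmit",
    rationale: "Requires a server check; per-keystroke validation adds latency noise",
  },
];

// Lookup used in reviews: does the proposed flow match the documented rule?
function timingFor(field: string): ValidationTiming | undefined {
  return validationRules.find((r) => r.field === field)?.timing;
}
```

A table like this is what makes AI-generated concepts cheap to evaluate: the question stops being "does this look right?" and becomes "does `couponCode` validate on submit, as documented?"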

I have seen the strongest product teams keep their pattern set small on purpose. They document the few interactions that drive revenue, trust, and support volume. Each pattern includes the expected behavior, edge cases, and the reasons the team chose it. That gives design, engineering, QA, and PMs a shared reference point. It also makes AI-generated concepts easier to evaluate because the question becomes concrete: does this proposal match our product rules, or does it only look convincing?

Patterns still need context. A card layout that works in a content feed can fail inside a compliance-heavy workflow. If your team is refining browse-heavy screens or summary components, this guide to card UI design is a useful starting point. For end-to-end journey work, these user flow examples and broader user experience flows help connect local interaction choices to the full path a user takes.

Reuse standard patterns aggressively in routine tasks. Spend invention on the few moments where product behavior creates a real advantage.

6. Case studies from strong product teams

The meeting usually sounds confident right up until someone asks a plain question: what should happen when the card charge fails after the user has already completed setup? That is the moment strong teams stop arguing from taste and start looking for precedent.

Case studies help because interaction design lives inside constraints. Billing rules, support capacity, technical debt, regulatory risk, and user stress all shape the right answer. A good team studies how other products handled similar pressure, then adapts the reasoning to its own product.

I once watched a growth-stage team redesign a dunning flow after reviewing how established SaaS products handled failed payments. They did not copy screens. They examined timing, message hierarchy, recovery options, and fallback states. The useful insight was not visual. It was operational. Which failures deserved interruption, which could wait, and how to protect revenue without making users feel trapped.

That is the standard to use.

What to look for in a useful case study

Useful case studies show decision quality under pressure. They explain the user situation, the product constraint, the options considered, and the consequence of the final choice. The best ones also show what the team changed in its workflow after shipping.

Look for cases that expose:

  • Decision boundaries: Why one interaction pattern beat another in a specific context

  • Research influence: How user evidence changed the team’s initial instinct

  • Failure handling: How the product responded when users hesitated, retried, or got blocked

  • System evolution: How a one-off fix became a repeatable rule across the product

This matters for PMs because interaction design is not a side discipline. It is product economics. Confusing states create support tickets. Ambiguous confirmations create rework. Fragile flows slow activation and reduce trust. Strong case studies make those costs visible, which helps PMs prioritize design work with the same seriousness they bring to reliability or conversion.

They also give AI a better role. Instead of asking an AI tool for generic UI ideas, teams can feed it a real case and ask sharper questions: where are the likely edge cases, which moments need explicit feedback, what assumptions break under slow networks or partial completion? That turns AI from a mockup generator into a decision-support tool.

Turning outside examples into internal assets

The strongest product teams do not just collect famous redesign write-ups. They create their own case library after every meaningful release.

Keep it simple. Capture the original problem, the interaction options considered, the decision made, what users did, and what the team changed afterward. A short archive like that becomes useful fast. New PMs ramp quicker. Designers avoid repeating old mistakes. Engineers get clearer context on why behavior matters, not just what to build.

Over time, those internal case studies become more valuable than polished public ones because they reflect your users, your stack, and your trade-offs. They also make cross-functional reviews sharper. Instead of debating abstractions, the team can point to prior evidence from its own product.

As those examples accumulate, place them against the full user path, not just the local screen. That is where digital customer journeys become useful. A local interaction decision earns its keep when it supports the larger journey users are trying to complete.

7. Product and UX hybrid communities

A lot of PMs learn interaction design in public. Not on purpose, usually. They ship a flow, users struggle, and then they start asking sharper questions in Slack groups, cohort programs, office hours, and peer communities.

That’s not a weakness. It’s how cross-functional judgment often gets built.

Communities that sit between product and UX are valuable because they normalize the genuine tension in the work. How much friction is acceptable for safety? When should speed beat flexibility? When is consistency more important than experimentation? Those questions rarely have textbook answers.

Why peer learning works here

Product teams often operate inside local norms. The company’s way becomes the only way anyone knows. Communities break that isolation. They let PMs compare how other teams validate prototypes, manage design debt, govern systems, and review interaction trade-offs with engineering.

This gets more relevant as AI enters design workflows. According to Wikipedia’s overview of interaction design, one emerging view is that AI-enabled IxD tools can surface more edge cases by grounding design work in live app data, though the article also notes that practical guidance often lags behind the tools themselves. That gap is exactly where communities help. They turn private experiments into shared operating knowledge.

A good PM question for these groups isn’t “what tool should we use?” It’s “how do you validate interaction decisions before engineering commits to them?”

What to ask instead of lurking

If you’re joining Product School, Reforge circles, local PM groups, or design-adjacent communities, ask questions with behavioral specificity.

  • Ask about failure states: How do teams review errors, retries, and cancellations?

  • Ask about governance: Who owns reusable interaction patterns?

  • Ask about proof: What evidence changes minds in design reviews?

  • Ask about AI workflows: Where does automation help, and where does it create false confidence?

Those questions usually produce better answers than broad prompts about “best practices.”

The same principle applies internally. If PMs, designers, researchers, and QA can talk about behavior with precision, the whole product team gets faster.

For decision reviews, I often pair that vocabulary with the golden rules of interface design. It gives teams a shared backbone without turning every meeting into a theory seminar.

8. Accessibility and interaction standards

A checkout flow ships on time. The team celebrates. Two days later, support tickets start coming in. Keyboard users cannot reach the promo code field. Screen reader users hear “button” three times with no distinction between actions. After an error, focus jumps to the header instead of the problem. The UI looked polished in review. The interaction was still wrong.

Accessibility makes the definition of interaction design concrete.

If a modal cannot be used by keyboard, if focus disappears after an action, if a gesture has no equivalent input, or if an error is visible but not perceivable, the product has a behavioral defect. PMs should treat those failures the same way they treat broken payments or failed saves.

Accessibility is behavior design

Accessibility cuts across the full interaction, not just the visual layer. Labels need to make sense. Controls need clear states. Device context matters. Timing cannot punish slower input. Recovery needs to be obvious when something goes wrong.

I have seen accessibility reviews expose weak product thinking faster than almost any design critique. A flow that only works for a fast, precise, fully sighted user usually has deeper interaction problems for everyone else too. Teams often discover that the issue is not a missing annotation in Figma. It is an unmade product decision about focus order, timeout rules, confirmation states, or error recovery.

Standards that sharpen product decisions

Standards such as WCAG, ARIA, and WebAIM guidance help because they turn vague intent into testable behavior. Can someone tab through the flow in a logical sequence? Does the error explain both the problem and the next step? When a dialog opens, does focus move to the right place? When it closes, does focus return predictably? Can users understand feedback without depending on color?

For PMs, the practical move is to write interaction acceptance criteria that include accessibility from the start. If the flow depends on timing, state changes, drag-and-drop, motion, or focus behavior, specify what must happen in each case. That gives design, engineering, and QA a shared definition of done.

AI tools can help teams spot missing labels, contrast issues, and pattern mismatches at scale. They are less reliable at judging whether the interaction works in context, especially across assistive tech, edge cases, and interrupted flows. Use automation for coverage. Keep human review for judgment. If your team needs a practical reference, web content accessibility guidelines is a good place to start.
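The focus-trap criterion above can even be stated as testable logic. The sketch below isolates the Tab-cycling rule from the DOM so QA can assert it directly; it is a simplified illustration, not a complete focus-trap implementation (a real one must also track dynamically added elements and restore focus when the dialog closes).

```typescript
// Tab order inside an open dialog: Tab moves forward, Shift+Tab moves
// backward, and both wrap within the dialog instead of escaping it.
function nextFocusIndex(current: number, count: number, shiftKey: boolean): number {
  if (count === 0) return -1; // nothing focusable: caller focuses the dialog itself
  const step = shiftKey ? -1 : 1;
  return (current + step + count) % count; // wrap at both ends
}
```

Writing the rule this way gives engineering and QA the same definition of done: Tab on the last element lands on the first, Shift+Tab on the first lands on the last, and the empty case is handled explicitly instead of by accident.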

From understanding to action

Understanding what interaction design is only gets you to the starting line. The true advantage arises when your team begins to treat product behavior as a first-class decision, not a finishing touch.

That shift changes how you write requirements, how you review prototypes, how you run QA, and how you interpret user confusion. Instead of asking whether a screen looks done, you ask whether the conversation is clear. Can the user tell what action is available, what happened after they took it, and what they can do next? If not, the work isn’t done.

The gist is this: pick one interaction concept and operationalize it this week.

Choose feedback loops, affordances, error recovery, focus management, or consistency across repeated actions. Bring your PM, designer, engineer, and QA partner into the same discussion. Spend thirty minutes reviewing one live flow. Not the whole product. Just one path. A signup step. A search result filter. A billing retry. A destructive action in settings. Then ask a tight set of questions.

  • What is the user trying to do here?

  • What tells them where to act?

  • What feedback do they get immediately after action?

  • Where could they misinterpret the system state?

  • How do they recover if they make a mistake?

Those five questions can upgrade the quality of a product conversation fast.

This is also where PMs become more effective without pretending to become designers. You don’t need to choose type scales or animate transitions yourself. You need to notice when product intent and user understanding drift apart, then bring the team back to observable behavior. That’s a product skill.

At scale, this matters because product complexity compounds naturally. More features mean more states. More integrations mean more edge cases. More platforms mean more interaction contexts. If your team doesn’t define behavior deliberately, inconsistency creeps in screen by screen until the product feels harder to trust. That’s why the teams that get this right usually don’t rely on talent alone. They build reusable principles, patterns, review habits, and system rules.

For the complete framework on this topic, see our guide to UX design process steps.

If your team is already trying to scale this work, tools can help, but only if they reinforce judgment rather than replace it. Figr is one option that fits naturally into this workflow because it grounds prototypes, flows, and edge cases in real product context, imports design systems, and applies interaction patterns drawn from analysis of 200,000+ screens. That can speed up exploration. The decision quality still comes from the team.

Start small, but start concretely.

Review one user story this week and rewrite it to include the action, the system response, and the recovery path. Then test that flow with someone who didn’t design it. You’ll learn more from a few minutes of hesitation than from a long internal debate.

That’s usually where the actual work of interaction design begins.


If your team wants help turning product intent into reviewable flows, prototypes, and edge cases, Figr is built for that. It learns from your product context, applies proven interaction patterns, and helps PMs, designers, and engineers work from the same behavioral model before handoff.

Product-aware AI that thinks through UX, then builds it
Edge cases, flows, and decisions first. Prototypes that reflect it. Ship without the rework.
Published
April 28, 2026