Guide

Using Prototyping and Usability Testing Tools Together to Improve UX in SaaS and E-Commerce Products

Published
November 28, 2025

You build a prototype. Looks great. Ships to production. Then users struggle. Conversion drops. Support tickets flood in.

So what actually broke here? You shipped a good-looking flow that never met real users. The prototype looked fine to you. But you're not the user, and nobody tested it with people who are before engineering built it.

This guide shows how to combine prototyping and usability testing tools to improve UX in SaaS and e-commerce products before you waste engineering time building the wrong thing.

Why Prototyping Without Testing Is Risky

Prototypes show what the product could be. Usability testing shows what the product should be.

So which one tells you whether users can actually finish the job? Usability testing does.

The gap:

What designers think: "This flow is intuitive. Users will understand."
What users actually do: Click the wrong button, miss the key feature, abandon confused.

Example failures:

SaaS onboarding: Designed 5-step wizard. Users drop off at step 2. Why? Step 2 asks for information they don't have yet. Didn't discover this until post-launch.

E-commerce checkout: Redesigned to be "cleaner." Removed guest checkout button. Conversions dropped 15%. Users wanted guest checkout. Didn't test before shipping.

Dashboard redesign: Moved key metric to sidebar for "better layout." Users complained it's harder to find. Didn't validate with users first.

Each failure costs weeks of engineering time + lost revenue + damaged trust.

So do you really want to find these problems in production? Probably not.

Solution: Test prototype before building. Find issues when they're easy to fix.

```mermaid
flowchart LR
   A[Create Prototype] --> B[Usability Test with 5-8 Users]
   B --> C{Issues Found?}
   C -->|Yes| D[Iterate Prototype]
   D --> B
   C -->|No| E[High Confidence to Build]
   E --> F[Engineering Implementation]
```

The Complete Prototyping + Testing Workflow

Here's the workflow that works.

How do you know if you are doing "enough" before engineering starts? You follow a simple loop like this.

Step 1: Create prototype (1-2 days)

Build an interactive prototype of the key flows. Not the full product, just the parts you're unsure about.

Tools: Figma, Framer, Figr

Step 2: Define test plan (2 hours)

What are you testing? Write specific questions:

  • "Can users complete signup without help?"
  • "Do users understand what this feature does?"
  • "Which checkout flow converts better: A or B?"

Step 3: Recruit participants (1-2 days)

Find 5-8 people who match your target user profile. Not colleagues. Not friends. Real users.

Step 4: Run usability tests (2-4 hours)

Watch users interact with the prototype. Don't help. Take notes on where they struggle.

Wondering if you need a long research study for this? You don't: a tight round with a handful of right-fit users is enough.

Tools: Maze, UserTesting, Lookback

Step 5: Analyze results (2 hours)

Identify patterns: 4 out of 5 users struggled with X. That's a real issue, not coincidence.
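Pattern-spotting can be as simple as tallying per-task outcomes across sessions. A minimal sketch of that tally, where the session notes and the 80% threshold are illustrative assumptions, not output from any particular tool:

```python
from collections import defaultdict

# Hypothetical session notes: per participant, which tasks they completed.
# Real data would come from your testing tool's export or your own notes.
sessions = {
    "p1": {"signup": True,  "invite": False},
    "p2": {"signup": True,  "invite": False},
    "p3": {"signup": False, "invite": False},
    "p4": {"signup": True,  "invite": False},
    "p5": {"signup": True,  "invite": True},
}

def flag_issues(sessions, threshold=0.8):
    """Flag tasks whose completion rate falls below the threshold."""
    totals = defaultdict(lambda: [0, 0])  # task -> [completed, attempted]
    for outcomes in sessions.values():
        for task, done in outcomes.items():
            totals[task][1] += 1
            if done:
                totals[task][0] += 1
    return {
        task: done / tried
        for task, (done, tried) in totals.items()
        if done / tried < threshold
    }

print(flag_issues(sessions))  # "invite" fails for 4 of 5 users -> real issue
```

With data this small you don't need statistics software; you need a consistent way to separate "one person stumbled" from "most people stumbled."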

Step 6: Iterate prototype (1-2 days)

Fix identified issues. If major, test again. If minor, proceed to build.

Step 7: Build with confidence (weeks)

Engineering builds validated design. Fewer surprises, less rework.

Total time: 1 week of prototyping + testing. Saves 2-4 weeks of post-launch fixes.

So is one week of testing a fair trade for avoiding several weeks of rework and support pain? Yes, almost every time.

Tools for Prototyping SaaS and E-Commerce Products

Figma + Prototyping Mode

Best for: Quick clickable mockups

Pros: Fast, easy, designers already know it
Cons: Limited interactivity, no real data

Use when: Testing navigation and layout

Not sure where to start if your team already designs in Figma? Use it first, then add other tools only if you hit its limits.

Framer

Best for: Realistic interactions and animations

Pros: Code-like control, beautiful interactions
Cons: Steeper learning curve, can't handle real data easily

Use when: Testing micro-interactions and animations

Webflow or Bubble

Best for: Fully functional prototypes

Pros: Real app with database, users can input data
Cons: Takes longer to build

Use when: Testing complex workflows with state

Figr

Best for: Production-ready prototypes aligned to design system

Pros: AI-generated, component-mapped, fast iteration
Cons: AI might not capture exact custom branding

Use when: Testing SaaS flows with existing design system

v0 or Bolt

Best for: Code-first prototypes

Pros: Real React/Next.js code, comfortable for developers
Cons: Requires coding knowledge

Use when: Engineer-led prototyping

How to choose:

Designer-led + simple flow: Figma
Designer-led + complex interaction: Framer
Product manager-led: Figr or Bubble
Engineer-led: v0, Bolt, or code from scratch

Still tempted to find the "perfect" tool before you start? Pick the one that fits your team today and optimize later.

Tools for Usability Testing

Maze

Best for: Unmoderated testing with quantitative data

How it works: Connect Figma prototype to Maze. Define tasks ("Complete checkout"). Maze measures success rate, time, misclicks.

Pros: Quantitative data, fast (test 100 users in hours)
Cons: No facial expressions or real-time questions

Cost: $75-300/month

UserTesting

Best for: Moderated and unmoderated video testing

How it works: Recruit from the UserTesting panel or invite your own users. Watch videos of them using the prototype and talking through their process.

Pros: See real reactions, hear thought process
Cons: Expensive, slower (hours per user)

Cost: $49-199 per video response

Lookback

Best for: Live moderated sessions

How it works: Schedule video calls with users. Watch them use the prototype in real time. Ask follow-up questions.

Pros: Deep insights, can probe issues
Cons: Time-intensive (1 hour per user)

Cost: $100-450/month

Hotjar or FullStory (for live products)

Best for: Testing live products, not prototypes

How it works: Watch session recordings and heatmaps of real users on your site.

Pros: Real behavior on real product
Cons: Only works after launch, can't test prototypes

Cost: $0-200/month (Hotjar), $500-2k/month (FullStory)

How to choose:

Need fast quantitative data: Maze
Need qualitative insights: UserTesting or Lookback
Testing live product: Hotjar or FullStory
Small budget: Lyssna (cheaper Maze alternative)

Worried you will overcomplicate things by mixing tools? Start with one quantitative and one qualitative option, then expand only if you hit a clear gap.

Figr's Built-In Usability Patterns for SaaS and E-Commerce

Most prototypes fail usability because they violate known UX patterns. Figr reduces this by generating designs with proven usability patterns built-in.

Figr's built-in patterns:

For SaaS:

  • Progressive disclosure (show complexity gradually)
  • Empty states with clear CTAs
  • Inline validation for forms
  • Success confirmations
  • Error recovery paths

For E-commerce:

  • Clear product hierarchy
  • Visible cart status
  • Guest checkout option
  • Trust signals (security badges, reviews)
  • Mobile-optimized checkout

Result: Prototypes start with better usability baseline. Testing validates specifics, not fundamentals.

So instead of asking "Is this pattern even sane?" you get to ask "Does this pattern fit our users and domain?"

Example:

You need checkout flow. Instead of designing from scratch (risk missing known patterns), Figr generates checkout with:

  • Progress indicator
  • Form validation
  • Guest checkout option
  • Multiple payment methods
  • Order summary sidebar
  • Security badges

You test the prototype. Find issues specific to your product (e.g., the discount code input is confusing). Iterate on just that. But the foundation is solid.

Real-World Workflow: SaaS Onboarding Testing

Scenario: Redesigning onboarding for project management SaaS.

Week 1: Prototype

Create three onboarding variations in Figr:

  • Variant A: 5-step wizard (collect all info upfront)
  • Variant B: 2-step wizard + in-app prompts
  • Variant C: Skip wizard, learn-by-doing

Export to Figma for polish.

Week 2: Set up test

Create Maze test:

  • Task 1: "Create your first project"
  • Task 2: "Invite a team member"
  • Task 3: "How would you describe what this app does?"

Recruit 15 users per variant (45 total).

Week 3: Run test + analyze

Launch Maze test. Results:

  • Variant A: 60% completion, avg 8 min, users say "too long"
  • Variant B: 85% completion, avg 3 min, users say "helpful"
  • Variant C: 40% completion, avg 12 min, users confused

Decision: Build Variant B.
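With only 15 users per variant, it's worth asking whether a gap like 60% vs 85% could still be noise. A rough two-proportion z-test sketch, stdlib only; the counts (9/15 and 13/15) are assumptions chosen to approximate the percentages above:

```python
import math

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two completion rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Variant A: 9/15 completed (~60%); Variant B: 13/15 (~87%)
z, p = two_proportion_test(9, 15, 13, 15)
print(f"z={z:.2f}, p={p:.2f}")  # suggestive, not conclusive at n=15
```

At this sample size the p-value lands near 0.10: suggestive rather than decisive on its own, which is exactly why the qualitative comments ("too long" vs "helpful") belong next to the completion numbers before you commit.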

Week 4-6: Engineering builds Variant B

Engineers implement validated flow. Launch. Onboarding completion: 80%+ (close to test prediction).

Does this mean tests will perfectly match production metrics every time? No, but they will usually get you directionally right.

ROI: 3 weeks of testing saved 6+ weeks of building the wrong thing and post-launch fixes.

Real-World Workflow: E-Commerce Checkout Testing

Scenario: E-commerce company wants to optimize checkout.

Week 1: Hypothesis

Current checkout has 40% abandonment. Hypothesis: Multi-page checkout creates too much friction. Test single-page vs multi-page.

Week 2: Prototype

Create two prototypes in Webflow:

  • Version A: 3-page checkout (cart → shipping → payment)
  • Version B: 1-page checkout (all fields on one page)

Week 3: Test

UserTesting with 10 users per version:

  • Version A: 70% complete checkout. Comments: "Familiar," "Feels secure"
  • Version B: 60% complete checkout. Comments: "Overwhelming," "Too much at once"

Surprise: Users prefer multi-page, even though it's "more friction." Why? Feels less overwhelming, builds trust step-by-step.

Decision: Keep multi-page, optimize within that constraint.

Week 4: Iterate

Focus on optimizing 3-page flow:

  • Add progress indicator
  • Pre-fill shipping address
  • Simplify form fields

Week 5: Retest

10 more users. Completion: 85%.

Week 6-8: Build and launch

Abandonment rate drops from 40% to 25%. Worth $200k annually.

So should you always push for "fewer steps" in checkout? Not blindly, test how your users react instead.

ROI: $5k testing investment → $200k annual revenue increase.

Common Testing Mistakes

Mistake 1: Testing with wrong users

Testing with colleagues or friends who don't match target users. They give feedback that doesn't represent real users.

Fix: Recruit real users, even if it costs $50 per user. Worth it.

Mistake 2: Leading users

"Click the green button to proceed." You told them what to do. Doesn't test if design is intuitive.

Fix: Give tasks, not instructions. "Complete signup" not "Click signup button."

Mistake 3: Testing too late

Waiting until design is "perfect" to test. By then, team emotionally invested. Harder to change.

Fix: Test rough prototypes early. Easier to change, less ego attached.

Mistake 4: Sample size too small

Testing with 1-2 users. Can't distinguish pattern from coincidence.

Fix: 5-8 users minimum. After that, diminishing returns.
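The 5-8 user guideline tracks the classic problem-discovery model: roughly, each additional user independently surfaces a given issue with some probability, often estimated around 31% in Nielsen and Landauer's work. A quick sketch of the diminishing-returns curve this implies:

```python
def issues_found(n_users, p_per_user=0.31):
    """Expected share of usability issues found by n users: 1 - (1-p)^n."""
    return 1 - (1 - p_per_user) ** n_users

for n in (1, 3, 5, 8, 15):
    print(f"{n:2d} users -> {issues_found(n):.0%} of issues")
```

Under these assumptions, 5 users surface roughly 84% of issues and 8 users about 95%, while going from 8 to 15 users buys almost nothing. That's the math behind "after that, diminishing returns."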

Mistake 5: Not acting on results

Running test, seeing issues, shipping original design anyway because "we're out of time."

Fix: Don't test if you won't act on results. Commit to iterating based on findings.

If you only want confirmation of your original idea, should you even bother testing? Probably not, unless you are ready to change your mind.

How to Combine Quantitative and Qualitative Testing

Best approach uses both:

Quantitative (Maze): Tells you what is happening

  • 60% of users fail Task 1
  • Average time: 5 minutes
  • Misclick rate: 40%

Qualitative (UserTesting): Tells you why it's happening

  • Users expect button in top-right, it's in bottom-left
  • Label "Submit" is confusing, expected "Next"
  • Users don't notice confirmation message

Combined workflow:

  1. Quantitative first: Maze test with 20-30 users. Identify issues.
  2. Qualitative to diagnose: UserTesting with 5-8 users. Understand why issues happen.
  3. Iterate: Fix based on insights.
  4. Quantitative again: Maze test to validate fix worked.

Example:

Quantitative: 70% of users abandon at payment step.
Question: Why?

Qualitative: Watch 5 users. Discover: They don't trust site with credit card info because no security badges visible.

Fix: Add trust badges (SSL, payment provider logos).

Quantitative retest: Abandonment drops to 35%.

So if you can only pick one method to start with, which should it be? Start where your biggest blind spot is: usually quantitative if you lack metrics, qualitative if you have numbers but no context.

How to Test When You Have No Budget

Can't afford UserTesting or Maze? Test anyway.

Budget alternatives:

Recruit users yourself:

  • Post in relevant communities (Reddit, Facebook groups)
  • Email existing users
  • Offer $25 Amazon gift card incentive

Use free tools:

  • Share Figma prototype link
  • Schedule Zoom calls
  • Take notes manually

Guerrilla testing:

  • Find people in coffee shops or libraries
  • Offer free coffee for 10 min of feedback
  • Test on phone or laptop

Internal testing:

  • Test with customer support team (they hear user complaints daily)
  • Test with sales team (they talk to prospects)
  • Better than nothing, but not substitute for real users

What you lose: Scale and convenience
What you keep: Core insights about usability

Worried that scrappy testing will look "unprofessional"? The only thing users see is whether the product works for them, not how polished your testing setup was.

Measuring Testing ROI

How do you know testing is worth it?

Formula:

Cost of testing: Tools + time + participant incentives
Example: $200 Maze + 20 hours of time × $75/hour + $250 incentives = $1,950

Cost of building wrong thing: Engineering time × hourly rate × weeks
Example: 3 engineers × 40 hours/week × 2 weeks × $100/hour = $24,000

Rework avoided: If testing prevents one major rework cycle, ROI is 12x ($24k saved / $2k spent).

Additional value:

  • Opportunity cost of shipping later
  • Customer satisfaction (fewer complaints)
  • Reduced churn (better UX)
  • Competitive advantage (ship right thing first time)

Real ROI is much higher than direct cost comparison.

If leadership still sees testing as a "nice to have," ask them which number they prefer on the next burn report: $2k in testing or $24k in rework.

The Bigger Picture: Testing as Product Culture

Companies that test regularly ship better products. Not because they're smarter. Because they validate assumptions before committing resources.

Testing isn't a phase. It's a discipline. Best teams test continuously:

  • Test prototypes before building
  • Test features after building (beta)
  • Test live product (analytics, recordings)
  • Test variations (A/B tests)

Culture of testing beats individual designer brilliance. Even great designers guess wrong. Testing corrects guesses.

AI tools like Figr are making prototyping faster, which makes testing faster. In the past, creating 3 prototype variations took 2 weeks. Now it takes 2 days. More prototypes → more testing → better products.

So what is the real unlock from AI here? It compresses the cost of being wrong, letting you explore more options and still validate rigorously.

Takeaway

Using prototyping and usability testing tools together improves UX before you waste engineering time. Create interactive prototypes with Figma, Framer, or Figr. Test with 5-8 real users using Maze (quantitative) or UserTesting (qualitative). Identify issues, iterate the prototype, test again. Ship validated designs with confidence.

For SaaS products, test onboarding, core workflows, and empty states. For e-commerce, test product pages, add-to-cart flows, and checkout. Use quantitative testing to find issues, qualitative testing to understand why. Combine both for complete insights.

Testing seems like extra time upfront but saves weeks of post-launch fixes. The best products aren't designed by the smartest teams. They're designed by the teams that test most rigorously.