Guide

How the best PMs use AI to think through problems, not just generate solutions

Published December 2, 2025

Most PMs use AI like a vending machine: insert prompt, receive output. The faster the output, the better. It feels efficient because the thinking happens before you type. The AI is just a production tool.

The best PMs use AI differently. They use it to explore before they execute. To surface what they have not considered. To think through problems more thoroughly than they could alone.

The difference in output quality is dramatic. But the difference in learning is even more important.

The Vending Machine Trap

"Build me a checkout flow."

The AI generates a checkout flow. Five screens: cart, shipping, payment, confirmation, success. It looks reasonable. You move forward. But is it right for your users?

Does it handle your edge cases? Does it fit your product's existing patterns? Does it consider scenarios you have not thought about?

The vending machine approach skips these questions. It optimizes for speed of generation, not quality of thinking. You get output quickly. You learn nothing in the process.

I used this approach for months before realizing what I was missing. Sound familiar? The outputs looked fine. The features shipped. But I kept encountering the same edge cases, the same user confusions, the same "why didn't we think of that?" moments after launch.

The Exploration Approach

"I need to design a checkout flow for our e-commerce product. Before generating anything, help me think through: What edge cases should I consider? How do our competitors handle this? What patterns work for high-value purchases versus quick buys?"

This prompt invites exploration. The AI surfaces considerations you might have missed: empty cart handling, out-of-stock during checkout, payment failures with retry logic, address validation errors, partial shipments for multi-item orders.

You learn something before you generate anything. The final output is better because your understanding is better.
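
If you wire this into a script, the structure is simple: one exploration call whose output feeds the generation call. Here is a minimal sketch, assuming a generic `ask(prompt) -> str` model call; the function names and template wording are illustrative, not Figr's actual implementation:

```python
# Explore-then-generate: run the exploration prompt first, then feed its
# findings into the generation prompt. `ask` is a placeholder for whatever
# model call you use (OpenAI, Claude, an internal gateway).

EXPLORE_TEMPLATE = (
    "I need to design {feature} for {product}. Before generating anything, "
    "help me think through: What edge cases should I consider? "
    "How do competitors handle this? What patterns fit our users?"
)

GENERATE_TEMPLATE = (
    "Now design {feature} for {product}. Account for every edge case and "
    "trade-off we surfaced:\n{exploration_notes}"
)

def explore_then_generate(ask, feature: str, product: str) -> str:
    """Run the exploration phase first, then generate with its output in context."""
    exploration_notes = ask(EXPLORE_TEMPLATE.format(feature=feature, product=product))
    return ask(GENERATE_TEMPLATE.format(
        feature=feature, product=product, exploration_notes=exploration_notes
    ))
```

The point of the structure is that the generation call never runs on a bare request: it always sees the surfaced edge cases first.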

The Shopify checkout project used this approach. Before generating screens, Figr explored: What are the common drop-off points in checkout flows? What happens when items go out of stock mid-checkout? How should we handle partial shipments?

See the Shopify checkout flow, designed after thorough problem exploration

Three Exploration Patterns

Pattern 1: Edge Case Surfacing. "What could go wrong with this feature? What states have I not considered?"

Figr has analyzed 200,000 screens. It knows what file uploads need (14 states, not 1). It knows what network degradation looks like (a spectrum, not binary). This pattern knowledge surfaces what your individual experience might miss.

See the Zoom network degradation states
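
To make the "states, not a single happy path" idea concrete, here is an illustrative sketch. The specific states are assumptions for illustration; the article cites 14 upload states without listing them, so this is a plausible subset, not Figr's taxonomy:

```python
from enum import Enum, auto

class UploadState(Enum):
    """Illustrative subset of file-upload states; the real set is larger.

    The point: "file upload" is never one state. Each of these needs
    a distinct screen or affordance.
    """
    IDLE = auto()           # nothing selected yet
    FILE_SELECTED = auto()  # chosen but not sent
    UPLOADING = auto()      # in flight, show progress
    PAUSED = auto()         # user- or network-initiated pause
    RETRYING = auto()       # transient failure, automatic retry
    FAILED = auto()         # permanent failure, needs user action
    SUCCEEDED = auto()      # done, confirm and link to the file
```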

Pattern 2: Competitive Analysis. "How does Product A solve this differently from Product B? What are the trade-offs?"

Understanding competitors helps you make informed choices. Not copying. Learning. Why did they make this decision? What trade-off did they accept? Would that trade-off work for our users?

Pattern 3: User Persona Simulation. "How would a power user experience this? A newcomer? Someone with accessibility needs?"

Different users reveal different friction points. The Gemini vs Claude vs ChatGPT comparison simulated three personas interacting with each product. Each persona exposed different strengths and weaknesses.

See the persona simulation across three AI products
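
To keep the three patterns handy, you can store them as reusable templates and fill them in per feature. A minimal sketch; the template names and wording are condensed from the prompts quoted above, not an official library:

```python
# The three exploration patterns as prompt templates. Adapt the
# placeholders ({feature}, {competitor_a}, ...) to your product.

EXPLORATION_PATTERNS = {
    "edge_cases": (
        "What could go wrong with {feature}? "
        "What states have I not considered?"
    ),
    "competitive_analysis": (
        "How does {competitor_a} solve {feature} differently from "
        "{competitor_b}? What are the trade-offs, and would they work "
        "for our users?"
    ),
    "persona_simulation": (
        "Walk through {feature} as a power user, then as a newcomer, "
        "then as someone relying on a screen reader. Where does each "
        "one hit friction?"
    ),
}

def build_prompt(pattern: str, **details: str) -> str:
    """Fill in one exploration template, e.g.
    build_prompt("edge_cases", feature="our checkout flow")."""
    return EXPLORATION_PATTERNS[pattern].format(**details)
```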

The Basic Gist

Why does output improve?

The PM who explores before generating produces better outputs. Not because AI is smarter when you ask nicely. Because the PM's own thinking improves through the exploration process.

You learn what edge cases exist. You understand competitor trade-offs. You consider perspectives you would have missed. The exploration makes you better at the job.

In Short

So what is AI here?

AI is not just a generation tool. It is a thinking tool.

The best PMs use AI to surface what they do not know, not just to produce what they already imagined. The exploration makes the generation better. And the exploration makes the PM better too.

Try the exploration approach on your next feature