Heatmaps are seductive. You generate one, look at where users click, say "interesting," and then... nothing changes. The heatmap goes in a slide deck, gets shown in a meeting, and is promptly forgotten.
This is how most teams use heatmaps. It's not how they should be used.
Heatmaps are only useful if they produce specific, actionable insights that you act on and measure. Here's how to get there.
## The three types of heatmaps and what each is actually for
### Click maps
Click maps show where users click on a page. They're most useful for answering two questions:
Are users clicking on things that aren't links? If you see heavy click concentration on a non-interactive element — a headline, an icon, a product image — users expect it to be clickable. Either make it clickable or rethink your visual design. False affordances cost conversions.
Are users finding the right CTA? If your primary CTA is getting 15% of clicks and a secondary element is getting 40%, your visual hierarchy is inverted. Users are being drawn to the wrong thing.
Click maps are also good for catching dead zones — areas of the page that users simply never interact with. If you've put effort into content in the bottom-left quadrant and nobody ever clicks there, you might be wasting real estate.
### Scroll maps
Scroll maps show how far down the page users scroll before leaving. The gradient typically runs from hot (near the top) to cold (near the bottom), and the average fold line — the point below which most users never scroll — is the most important number on the chart.
The fold isn't just about placement. It tells you how much of your page is actually being seen. If 80% of users leave before reaching your pricing section, pricing isn't the problem — discoverability is.
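A quick way to quantify discoverability is to compute, from per-session maximum scroll depths, what fraction of sessions ever reach a given section. A minimal sketch in Python; the session depths and the 70% section offset below are hypothetical illustration data, not real measurements:

```python
# Given per-session max scroll depths (as % of page height), estimate
# how many users ever see a section that starts at 70% page depth.
def fraction_reaching(depths, section_start_pct):
    """Fraction of sessions whose max scroll depth reaches the section."""
    return sum(d >= section_start_pct for d in depths) / len(depths)

max_depths = [22, 35, 41, 55, 63, 68, 72, 80, 91, 100]  # one value per session
print(fraction_reaching(max_depths, 70))  # → 0.4, i.e. 60% never see it
```

If the pricing section starts at 70% depth and only 40% of sessions ever get there, the heatmap's "cold" zone now has a concrete number attached to it.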
Common findings from scroll maps:
- Important information placed below where most users stop
- Pages that are longer than they need to be (cold bottom sections nobody scrolls to)
- Unexpected engagement with sections you thought were secondary
### Move maps (cursor tracking)
Cursor position correlates with eye movement, especially on desktop. Move maps show where users' attention goes as they read your page.
Move maps are most useful for:
- Understanding what users actually read vs. skim
- Identifying content that attracts attention but doesn't convert
- Catching visual anchors that pull users away from your intended flow
See your own heatmaps
Grain captures click maps, scroll maps, and cursor tracking on every page — no setup required. See exactly where your users click, scroll, and hesitate.
## Reading heatmaps without fooling yourself
Heatmaps are easy to misread. Some things that look like problems aren't, and vice versa.
Hot areas near the top aren't always good. If users concentrate their clicks in the navigation because they can't find what they're looking for in the page content, that's a problem. High activity near navigation elements often signals confusion, not engagement.
Low scroll depth might mean the page is working. If users are clicking your CTA before they scroll 30% of the page, that's fine. Low scroll depth + low conversion rate is a problem. Low scroll depth + high conversion rate is a feature.
Click dispersion isn't always confusion. Some pages — like dashboards or feature-rich pages — legitimately have distributed click patterns. Compare to a page that should have focused action and interpret accordingly.
Small sample sizes lie. A heatmap from 200 sessions is noisy. Patterns in 2,000+ sessions are meaningful. Check your sample size before drawing conclusions.
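The noise in a small sample can be made concrete with the standard error of a proportion: an observed click rate p over n sessions has a rough 95% margin of about two standard errors, 2·√(p(1−p)/n). A sketch with illustrative numbers:

```python
import math

def margin_95(p, n):
    """Approximate 95% margin of error for an observed proportion p over n sessions."""
    return 2 * math.sqrt(p * (1 - p) / n)

p = 0.15  # observed CTA click rate
print(round(margin_95(p, 200), 3))   # → 0.05: "15% ± 5 points"
print(round(margin_95(p, 2000), 3))  # → 0.016: "15% ± 1.6 points"
```

At 200 sessions, a "15%" click rate could plausibly be anywhere from 10% to 20% — too wide to rank elements against each other. At 2,000 sessions the margin shrinks enough to trust the pattern.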
## The heatmap analysis workflow
### Step 1: Define what you're trying to learn
Don't open a heatmap without a hypothesis. Good starting hypotheses:
- "I think users aren't seeing our pricing CTA because it's below the fold for most screen sizes"
- "I think users are confused by our nav structure because they're clicking the wrong menu item"
- "I think our hero section isn't communicating value because users scroll past it quickly"
The hypothesis determines what you're looking for and what counts as confirming or disconfirming evidence.
### Step 2: Segment your heatmaps
The same page looks completely different to different user segments. Always compare:
- New vs. returning users — Returning users know where things are. New users reveal your onboarding friction.
- Mobile vs. desktop — A layout that works beautifully on desktop can be completely broken on mobile.
- Traffic source — Users from ads have different intent than users from organic search. Their behavior patterns will differ.
- Device type — Scroll behavior differs significantly between laptop trackpads, mice, and mobile scrolling.
A heatmap that combines all segments often obscures the most important patterns.
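In data terms, segmenting just means grouping raw click events by segment keys before aggregating, rather than counting clicks over all sessions at once. A minimal sketch; the event fields (`device`, `source`, `target`) are hypothetical, not any particular tool's schema:

```python
from collections import Counter, defaultdict

# Raw click events, one dict per click (illustrative data).
events = [
    {"device": "mobile",  "source": "ads",     "target": "hero-cta"},
    {"device": "mobile",  "source": "ads",     "target": "nav-pricing"},
    {"device": "desktop", "source": "organic", "target": "hero-cta"},
    {"device": "desktop", "source": "organic", "target": "hero-cta"},
]

# Group by (device, source) first, then count click targets per segment.
by_segment = defaultdict(Counter)
for e in events:
    by_segment[(e["device"], e["source"])][e["target"]] += 1

for segment, clicks in by_segment.items():
    print(segment, clicks.most_common())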
### Step 3: Find the specific problem, not the general pattern
"Users don't scroll far enough" is not actionable. "Users from mobile stop scrolling at approximately the point where our horizontal feature comparison table appears — probably because it's hard to read on mobile and creates friction" is actionable.
The more specific your finding, the more targeted your fix can be, and the more clearly you can measure whether the fix worked.
### Step 4: Form a specific hypothesis about the fix
Once you've identified a specific problem, your hypothesis about the fix should be equally specific:
- "If we move the pricing CTA above the feature list, CTA click-through rate will increase by at least 15%"
- "If we replace the horizontal comparison table with a vertical mobile-friendly format, mobile scroll depth will increase and mobile conversion will improve"
Vague fixes produce vague results. You won't know if your change worked unless you defined "worked" in advance.
### Step 5: Ship and measure
Make the change. Re-run your heatmap. Compare click distributions and scroll depths. Check your conversion metrics.
If the heatmap improved but conversion didn't, your hypothesis about the underlying issue was wrong — the visual behavior wasn't the real blocker. Keep investigating.
If both improved, you have a reproducible loop: heatmap → hypothesis → fix → measure.
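For the "measure" step, a two-proportion z-test is a simple way to check whether a pre/post change in CTA click-through is real rather than noise. A sketch with illustrative counts (not real data); |z| above roughly 1.96 is significant at the 5% level:

```python
import math

def two_prop_z(clicks_a, n_a, clicks_b, n_b):
    """Pooled two-proportion z-test: positive z means variant B clicked more."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)           # pooled click rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return (p_b - p_a) / se

# Before: 300 CTA clicks in 2,000 sessions. After: 380 in 2,000.
z = two_prop_z(clicks_a=300, n_a=2000, clicks_b=380, n_b=2000)
print(round(z, 2))  # → 3.37, well past 1.96: the lift is unlikely to be noise
```

The same test applies to any before/after proportion from the heatmap: scroll-through past a section, clicks on a specific element, and so on.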
## Combining heatmaps with other signals
Heatmaps alone are blind to context. They show what users do but not who those users are or what happened before and after.
Heatmaps + funnel analysis. Funnel analysis tells you where conversion drops off. Heatmaps on that specific page tell you why. The combination is much more powerful than either alone.
Heatmaps + session replay. A click map shows you that 30% of users clicked something unexpected. Session replay shows you what those users were trying to do and what happened next. Together you have the full story.
Heatmaps + A/B tests. A/B tests give you statistical confidence that a change improved (or didn't improve) conversion. Heatmaps on both variants help you understand why one variant outperformed the other. This understanding generalizes — you can apply the lesson to other pages.
## Common heatmap findings and what they usually mean
| Finding | Common cause | Common fix |
|---|---|---|
| Heavy clicks on non-linked images | Users expect zoom or navigation | Make it interactive, or adjust visual style to reduce affordance |
| High nav click rate on a content page | Page content isn't answering the user's question | Improve content clarity or page targeting |
| Clicks concentrated above the fold, almost nothing below | Users find what they need early (good) or leave frustrated (bad) | Check conversion rate to disambiguate |
| Low engagement with value proposition section | Copy isn't resonating or users are skimming past it | A/B test headline and subhead |
| Mobile scroll depth much shorter than desktop | Mobile layout creates friction early | Audit mobile layout at common drop-off point |
| Clicks on feature names (not CTAs) in pricing | Users want more information before committing | Add feature explainer tooltips or expand feature descriptions |
## What a good heatmap program looks like
Running heatmaps ad hoc — generating one when someone asks for it — is low value. The teams that get the most from heatmap analysis run a systematic program:
- Always-on heatmaps on key pages. Landing pages, pricing, signup, onboarding steps. Collect data continuously so you have sufficient sample sizes quickly.
- Scheduled reviews. Monthly heatmap reviews for each key page. Look for drift from baseline patterns — changes that might indicate UX regression or user behavior shifts.
- Pre/post analysis for major changes. Before shipping a significant page redesign, export the current heatmap. After shipping, collect new data and compare.
- Tag findings to outcomes. When a heatmap finding leads to a change that improves a metric, record it. This builds a library of what works for your specific users on your specific product.
A heatmap is just colored pixels until you connect it to a question worth asking. The color tells you where users go. Your job is to figure out why and whether that's a problem. That's the work.
Put this into practice
Generate your first heatmap in under two minutes. Grain shows you click maps, scroll depth, and cursor tracking across every page — with the segmentation and session replay to act on what you find.
The teams that use heatmaps best aren't the ones that generate the most heatmaps. They're the ones that follow through — from observation to hypothesis to experiment to measurement.