A Guide to Trade Promotions Effectiveness Analysis
Trade promotions remain the single largest discretionary spend on most CPG P&Ls — often 15–25% of gross sales — and the analysis behind them is still dominated by post-event recaps that arrive too late to influence the next plan. Most teams measure a single number — lift — and stop. That misses where promotions actually win or lose.
This guide walks through the five-signal framework we use with brands at Scout to evaluate trade promotion effectiveness, with a worked example, the failure modes that most often distort the numbers, a decision rule for repeat-versus-kill, and the calculations to pin each signal to. If you're trying to size a future event, start with How to Forecast Trade Spend ROI for Promotions first; if you're brand-new to the discipline, What Is Trade Promotions Analysis? is the prerequisite primer.
From analyses we've run with mid-market CPG brands across 2023–2025, roughly 1 in 3 promotions show what we call a velocity-only success — visible in-store lift, but negative or near-zero incremental profit once you net out trade spend, cannibalization, and the post-promo dip. The framework below was built to catch that pattern before the next plan locks.
The five-signal trade promotion effectiveness framework
Healthy promotions show balanced strength across five dimensions. Each maps to a different question a sales, finance, or category lead is going to ask:
- Velocity tells you the immediate response — did units lift relative to baseline?
- Profit ensures commercial viability — did the lift cover its own trade cost?
- Mix reveals where the lift came from — incremental category demand, or share stolen from the rest of the line?
- Retailer reveals strategic alignment — was the gain concentrated at the partners that matter, or scattered across non-strategic accounts?
- Sustain-lift tells you whether the gains stuck or unwound — did the brand grow, or did you borrow from the future?
When one signal spikes while another collapses (e.g., strong velocity but negative profit), it's a sign the strategy needs refinement. Two red signals are rarely fixable with depth tweaks; it usually means changing the mechanic or killing the event.
Over time, tracking these five signals consistently turns post-event recaps into a true feedback system, one that teaches your organization what actually drives effective trade spend.
The calculations
Each signal is pinned to one or more of these calculations:
| Metric | Formula | Interpretation |
|---|---|---|
| Incremental Units | Promo Units − Baseline Units | Core lift measurement. |
| Incremental Revenue | Incremental Units × Promo Price | Sales generated by the promotion. |
| Incremental Profit | Incremental Revenue − Incremental Cost − Trade Spend | True financial gain. |
| Trade ROI | Incremental Profit / Trade Spend | Return on trade investment. |
| Elasticity | % change in volume / % change in price | Shopper sensitivity indicator. |
| Sustain-Lift Ratio | Post-promo baseline / Pre-promo baseline | Longevity of effect. |
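As a minimal sketch, the table's formulas translate directly to code. Function names and the example figures are illustrative, not Scout's internal API:

```python
def incremental_units(promo_units, baseline_units):
    """Core lift measurement: units sold above baseline during the promo."""
    return promo_units - baseline_units

def incremental_profit(incr_units, promo_price, unit_cost, trade_spend):
    """True financial gain: incremental revenue net of incremental cost and trade spend."""
    incr_revenue = incr_units * promo_price
    incr_cost = incr_units * unit_cost
    return incr_revenue - incr_cost - trade_spend

def trade_roi(incr_profit, trade_spend):
    """Return on trade investment: profit generated per dollar of trade spend."""
    return incr_profit / trade_spend

def sustain_lift_ratio(post_promo_baseline, pre_promo_baseline):
    """Longevity of effect: > 1.0 means baseline grew; < 1.0 means pull-forward."""
    return post_promo_baseline / pre_promo_baseline

# Hypothetical event: 94,000 promo units against a 56,800-unit baseline,
# sold at a $3.50 promo price with $2.00 unit cost and $40,000 trade spend.
units = incremental_units(promo_units=94_000, baseline_units=56_800)   # 37,200
profit = incremental_profit(units, promo_price=3.50, unit_cost=2.00,
                            trade_spend=40_000)                        # 15,800.0
roi = trade_roi(profit, trade_spend=40_000)                            # ~0.40
```

Note that Trade ROI here is profit-based: a result below 0 means the event destroyed money even before accounting for any post-promo dip.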
A worked example: the sustain-lift ratio
The sustain-lift ratio (SLR) is the single most diagnostic post-event signal, and the one most likely to flip a conclusion. It compares the average baseline sales in the 8 weeks immediately following a promotion to the 8 weeks immediately preceding it.
Worked example. Take a mid-tier natural snack brand running a 4-week TPR at one West Coast retailer:
- Pre-promo baseline (W-8 to W-1): 14,200 units / week
- Promo window (W1 to W4): 23,500 units / week — a 65% lift on the headline
- Post-promo baseline (W5 to W12): 13,100 units / week
The lift is real: about 9,300 incremental units per week during the promo, or roughly 37,200 across the 4-week window. But the post-promo baseline drops to 13,100 — an SLR of 0.92 — which costs about 8,800 units across the 8 weeks of dip. Net effect: 28,400 incremental units, around 76% of what the headline lift number suggested.
If trade spend was sized against the headline number, the promotion likely lost money. This is the discipline a five-signal analysis enforces. Velocity alone said "huge win." The sustain-lift ratio said "you've borrowed from the future, and the future paid back less than expected." Same data, different conclusion.
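The arithmetic in the example is worth making explicit; the few lines below reproduce it, with all figures taken directly from the text:

```python
# Reproducing the worked example: 4-week TPR, 8-week pre and post windows.
pre_baseline  = 14_200   # units/week, W-8 to W-1
promo_rate    = 23_500   # units/week, W1 to W4
post_baseline = 13_100   # units/week, W5 to W12

headline_incremental = (promo_rate - pre_baseline) * 4   # 9,300/wk x 4 wks = 37,200
post_dip = (pre_baseline - post_baseline) * 8            # 1,100/wk x 8 wks = 8,800
net_incremental = headline_incremental - post_dip        # 28,400

slr = post_baseline / pre_baseline                       # ~0.92
realized_share = net_incremental / headline_incremental  # ~0.76 of the headline lift
```

Sizing trade spend against `net_incremental` rather than `headline_incremental` is the whole point of carrying the analysis into the post window.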
Common Failure Modes
Most breakdowns in trade promotion analysis trace back to one of three causes: data, timing, or behavior.
Data
Different systems capture different truths. Syndicated data, retailer portals, and internal shipments rarely line up on timing or hierarchy. When baseline calculations are inconsistent, so are lift and ROI numbers.
Teams compensate by "adjusting" results manually, which undermines trust in the analysis.
The fix is standardization: one source of truth for baseline definitions, calendar weeks, and spend attribution. Banner-level pricing decisions live at the banner, not the corporate parent — Andronico's and Safeway both roll up to Albertsons but make different trade calls, and aggregating them obscures the answer.
Timing
By the time a recap is complete, the next promotional plan is already locked. The team learns what happened but can't apply it.
Promotional data has a short shelf life, and its value decays with each passing week. Shortening analysis cycles through automation or templated reporting ensures that insights can influence future planning instead of just explaining the past. A useful target: the recap of a promotion should be in the next planning meeting, not the one after that.
Behavior
Even a well-built analysis can be undone by how people read it. Common bias patterns include:
- Anchoring bias: assuming last year's tactics will work again.
- Volume bias: equating lift with success, regardless of profit or mix.
- Siloed interpretation: sales, finance, and category each draw their own conclusions from the same numbers.
Aligning teams on a common interpretation of key metrics, backed by standardized dashboards, shifts the emphasis from assigning blame for past outcomes to improving future decisions.
Based on calculations we've done for brands, lift does not always translate into ROI. It's possible for a promotion to create a short-term spike and erode baseline loyalty. Category-aligned, moderate-depth promotions tend to outperform when viewed over 8-week horizons. For a closer look at how to validate that lift didn't just pull volume forward, see How to Tell If a CPG Promotion Actually Worked.
When to repeat, change depth, or kill
Once a promotion has been analyzed through the five signals, three decisions follow. The table below summarizes the conditions for each:
| Decision | Velocity | Profit | Sustain-lift | Mix / Retailer |
|---|---|---|---|---|
| Repeat as-is | ≥ 1.5× baseline | Positive after dip | ≥ 0.95 | Balanced across SKUs and retailers |
| Reduce depth (10–15%) | ≥ 1.5× baseline | Marginal | 0.85–0.95 | Imbalanced — concentrated in a few SKUs |
| Change mechanic | 1.2–1.5× baseline | Negative | ≥ 0.95 | Wrong vehicle for category |
| Kill | < 1.2× or negative | Negative | < 0.85 | Concentrated, cannibalistic |
A useful rule of thumb: if two of the five signals are red, you're either changing mechanic or killing the promotion. Two greens in isolation aren't a green light — they're an invitation to look harder at the other three.
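One possible encoding of the decision table and the two-red rule of thumb. The rows aren't mutually exclusive, so ordering matters; the thresholds are the table's, and a real calendar will want category-specific tuning:

```python
def promo_decision(velocity_mult, profit, slr, concentrated):
    """Map the decision table to a rule.

    velocity_mult: promo units / baseline units (e.g. 1.65 for a 65% lift)
    profit:        incremental profit AFTER netting the post-promo dip
    slr:           sustain-lift ratio (post-promo baseline / pre-promo baseline)
    concentrated:  True if mix/retailer read is imbalanced or cannibalistic
    """
    # Kill: weak velocity, or deep pull-forward combined with negative profit.
    if velocity_mult < 1.2 or (profit < 0 and slr < 0.85):
        return "kill"
    # Change mechanic: real velocity but the vehicle loses money.
    if profit < 0:
        return "change mechanic"
    # Reduce depth: meaningful pull-forward or a concentrated win.
    if slr < 0.95 or concentrated:
        return "reduce depth 10-15%"
    # Repeat: strong velocity, positive profit, sticky lift, balanced mix.
    if velocity_mult >= 1.5:
        return "repeat as-is"
    # Moderate velocity with everything else green: rework the vehicle.
    return "change mechanic"
```

For example, `promo_decision(1.65, 15_800, 0.92, False)` lands on a depth reduction: the event made money, but the 0.92 SLR says part of the lift was borrowed.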
Next steps for promotional planning
- Forecast likely ROI and lift ranges before approval — see How to Forecast Trade Spend ROI for Promotions.
- Monitor mid-event velocity deltas so you can intervene before the promo window closes, not after.
- Extract structured lessons into a searchable playbook — one row per event, indexed by retailer, mechanic, and depth.
- Measure consistency across the 5-signal radar rather than chasing one metric to the exclusion of the others.
This converts trade analysis from a backward-looking report into an adaptive model that compounds insight. The shift in operating model — from cost ledger to capital allocation — is covered in From Cost Center to Profit Driver: Rethinking the Role of Trade Spend.
The next frontier is AI-driven trade spend optimization. The future of trade promotion management is autonomous learning loops. An AI system will run elasticity simulations, sales teams will choose 2–3 to test, finance will validate the ROI post-event, and the learnings auto-update next cycle's spend recommendations. We are seeing mid-market leaders head there within the next few quarters.
What this framework will not catch
There are two things the five-signal framework is bad at, and we want to be honest about them. First, brand-equity effects beyond the 12-week horizon. If a deep discount erodes shelf-price expectations and shows up as elasticity drift six months later, no single-event recap will catch it. The signal there lives in trend lines, not event recaps — and the only defense is baseline price discipline that the trade calendar enforces upstream of the promo itself.
Second, competitor-driven category effects. If a competitor runs a counter-promotion that compresses category demand during your post-promo window, your sustain-lift ratio will read worse than it should. Mitigate by always pulling category-level sales over the same window — if your brand dipped and the category did too, your dip is partly category, not all yours. We treat that adjustment as advisory, not mechanical: better to flag than auto-correct, because the wrong adjustment is worse than no adjustment.
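As an illustration of that advisory adjustment (all figures hypothetical, and the function deliberately returns a flag rather than a corrected SLR):

```python
def dip_attribution(brand_pre, brand_post, category_pre, category_post):
    """Advisory split of a post-promo dip into category-wide movement and the
    brand's change net of it. Illustrative only: we flag, not auto-correct.

    Returns (category_change, brand_net_change) as fractional changes.
    """
    brand_change = brand_post / brand_pre - 1
    category_change = category_post / category_pre - 1
    # Whatever the category did, your brand likely did too; the remainder
    # is the part of the dip you should attribute to pull-forward.
    brand_net = brand_change - category_change
    return category_change, brand_net

# Hypothetical: brand baseline fell ~7.7% while the category fell 3%,
# so only ~4.7 points of the dip look brand-specific.
cat_change, brand_net = dip_attribution(14_200, 13_100, 500_000, 485_000)
```

If `cat_change` is materially negative, the recap should say so in words ("post window overlapped a category contraction") rather than silently inflating the SLR.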
Frequently asked questions
- What's the difference between lift, incrementality, and ROI?
- Lift is the gross change in units or dollars during the promo window. Incrementality is the portion of that lift that wouldn't have happened without the promo (lift minus what baseline + cannibalization would have produced). ROI is incremental profit divided by trade spend — it answers whether the incremental part paid for itself.
- What's a good sustain-lift ratio?
- Above 1.00 is genuine brand growth — the promo expanded baseline demand. 0.95–1.00 is acceptable. 0.85–0.95 means meaningful pull-forward; the promo was net-incremental but smaller than the headline. Below 0.85 indicates significant borrowed demand, and the promo likely lost money once netted.
- How long after a promotion ends should I wait to evaluate it?
- An 8-week post-window is the standard. 4 weeks usually misses the back half of the dip; 12+ weeks introduces too much noise from competitor activity and category seasonality. If the category is highly seasonal (allergy, ice cream, etc.), match the window to the prior-year same period rather than calendar weeks.
- How do I separate the post-promo dip from competitor activity?
- Look at category-level sales over the same window. If category sales held steady while your brand dipped, the dip is real (loyalists pulled forward). If category sales also dipped, you're seeing a category effect — competitor promotion, seasonality, or macro factors — and your dip is overstated.
- What's the most common reason a 5-signal analysis disagrees with a sales recap?
- The sustain-lift ratio. A sales recap stops at the end of the promo window; a 5-signal analysis carries the read into the 8 weeks after. Roughly a third of promotions that look like clear wins on the recap turn out to be margin-negative once the post-promo dip is netted in.
- Should I run the analysis at SKU level or brand level?
- Both. SKU-level catches cannibalization within the line — the promoted item gaining at the expense of siblings. Brand-level confirms that the line as a whole grew, not just the promoted SKU. If SKU-level shows lift but brand-level doesn't, you have an internal-cannibalization problem, not a category-growth one.
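A minimal sketch of that two-level read, with hypothetical incremental-unit figures for a three-SKU line:

```python
# Hypothetical incremental-unit reads for a 3-SKU line during one promo window.
sku_lift = {
    "promoted_sku": 9_300,   # the item on deal
    "sibling_a":   -2_100,   # losses elsewhere in the line
    "sibling_b":   -1_400,
}

brand_lift = sum(sku_lift.values())                         # 5,800: the line still grew
cannibalized = -sum(v for v in sku_lift.values() if v < 0)  # 3,500 units came from siblings

# If brand_lift were <= 0 while the promoted SKU lifted, the "win" would be
# pure internal cannibalization, not category growth.
```

Here the promoted SKU's 9,300-unit lift overstates the brand outcome by more than a third, which is exactly the gap a brand-level check exists to catch.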
Every dollar should teach you something. Reach out to us at hello@cpgscout.ai if you want to see how leading CPG brands have been implementing this playbook.
See this on your own data
Scout gives CPG sales teams the analytics infrastructure they need — without spreadsheets.
Get a 15-min demo