D2C & E-commerce

Product and Catalog Analytics

Your catalog is where revenue meets complexity. Hundreds of SKUs, bundles, variants, and seasonal collections generate data every day, but standard reports tell you only what sold, not which products actually fund the business after discounts, returns, and holding costs, or which items quietly consume warehouse space without contributing margin.

D2C product analytics changes this. FireAI connects sales, margin, inventory, and marketing data so product managers and merchandising teams can see profitability at the SKU level, identify slow movers before they become write-offs, and measure whether bundles and cross-sells genuinely lift margin or only redistribute volume between lines.

For new launches, instead of a static sales chart you get a structured view of ramp velocity, cannibalization of older SKUs, return-adjusted performance, and channel mix, so you can amplify winners and correct positioning before the launch window closes. The result is a catalog that is managed by economics rather than by instinct.

SKU-Level True Profitability

True SKU profitability is not selling price minus COGS. For a D2C or e-commerce brand, every SKU carries a stack of variable costs that must be attributed before you know whether the product is making or losing money: marketplace referral and closing fees, payment gateway charges, shipping and last-mile costs, return processing and restocking costs, packaging, and sometimes influencer or affiliate commission tied to specific catalog lines.

Without this full cost attribution, bestsellers on revenue can be destroyers of margin. A high-volume apparel SKU with a 22% return rate and aggressive promotional pricing may look strong in a top-line dashboard but show negative contribution when all costs are netted out. This mismatch between apparent and actual performance is one of the most common sources of margin erosion in D2C businesses.

FireAI rolls up all variable costs you configure and attributes them to each SKU and order line, then aggregates to SKU-month or SKU-quarter views. Product and finance teams see gross margin, contribution margin, and post-return margin side by side for every SKU in the catalog.

What FireAI tracks for SKU profitability:

  • Revenue minus COGS, marketplace fees, payment costs, shipping, and returns for each SKU on a per-order and per-period basis
  • Return rate by SKU and its impact on effective margin, separately tracked from the pre-return margin so you can see both numbers
  • Channel-level profitability for the same SKU sold on D2C website, Amazon, Nykaa, Flipkart, and other channels to identify where each product performs best
  • Margin trend over time: did profitability change after a packaging cost increase, a platform fee revision, or a promotional campaign?
  • SKU ranking by contribution margin to surface the products that actually fund the business versus those that are consuming resources
  • Breakeven discount calculation: for each SKU, at what discount level does the contribution margin go to zero? This prevents promotional decisions that inadvertently destroy margin
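The contribution and breakeven-discount logic above can be sketched in a few lines. This is a simplified model with hypothetical field names, not FireAI's actual schema; it assumes marketplace and payment fees scale with the discounted price, and that a returned order forfeits the sale but still incurs return handling.

```python
# Sketch of per-SKU contribution margin and breakeven discount.
# All field names and the cost model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SkuEconomics:
    list_price: float           # selling price before discount
    cogs: float                 # landed cost of goods
    marketplace_fee_pct: float  # referral + closing fees, as fraction of price
    payment_fee_pct: float      # gateway charge, as fraction of price
    shipping_cost: float        # per-order outbound shipping
    return_rate: float          # fraction of orders returned
    return_cost: float          # reverse logistics + restocking per return

    def contribution(self, discount_pct: float = 0.0) -> float:
        """Post-return contribution per unit at a given discount.

        Simplification: a returned order contributes nothing except
        the cost of processing the return.
        """
        price = self.list_price * (1 - discount_pct)
        variable_fees = price * (self.marketplace_fee_pct + self.payment_fee_pct)
        gross = price - self.cogs - variable_fees - self.shipping_cost
        return (1 - self.return_rate) * gross - self.return_rate * self.return_cost

    def breakeven_discount(self, step: float = 0.001) -> float:
        """Smallest discount at which contribution drops to zero or below."""
        d = 0.0
        while d < 1.0 and self.contribution(d) > 0:
            d += step
        return round(d, 3)
```

For a SKU priced at ₹1,000 with ₹400 COGS, 17% combined fees, ₹70 shipping, and a 22% return rate, the model shows contribution of roughly ₹261 per unit and a breakeven discount of around 40% — beyond that, every promotional sale loses money.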

Real example: A skincare brand with 86 active SKUs used FireAI to compute post-return contribution margin across the catalog and found that 11 SKUs, which collectively drove 24% of total revenue, contributed only 6% of total contribution margin. Three of these were in negative contribution territory after returns and marketplace fees. Repricing two of them and pulling the third from marketplace-only listings increased overall brand contribution margin by 3.8 percentage points in one quarter without any revenue investment.

FireAI natural language queries:

  • "Rank all skincare SKUs by contribution margin after returns for last 90 days"
  • "Which top-revenue SKUs have below-median margin after marketplace fees?"
  • "What is the margin trend for SKU-0042 since we changed packaging in January?"

Ask FireAI

See how your team can ask questions in plain language and get instant analytics answers.

Which SKUs look profitable on revenue but not on margin?

SKU Profitability Dashboard

  • Catalog Avg Contribution: 21.4% (+2.8%)
  • Negative Margin SKUs: 6 (-40%)
  • Post-Return Margin: 18.2% (+1.4%)
  • Top 20% SKU Revenue Share: 68.4% (+3.1%)

Chart: Catalog Avg Contribution Margin Trend -- last 12 months (%)
Chart: Contribution Margin by Category -- current quarter (post-return, all channels): Hair Care, Body Care, Skin Care, Face Wash, Accessories

Dead SKU and Tail Inventory Analysis

Dead stock and tail SKUs are a working capital problem that most D2C brands underestimate. Dead stock is inventory that has not sold in a defined period while stock on hand remains. Tail SKUs are the long list of low-velocity lines that collectively tie up cash, consume pick-face slots, and add SKU complexity without meaningful revenue contribution.

The cost of dead and tail inventory is rarely captured in standard reporting. Finance sees inventory at book value; operations sees SKU count; neither sees the combined picture of cash trapped per SKU, days of cover at current velocity, and write-off risk if the product is perishable or fashion-sensitive.

FireAI combines sales velocity data, stock on hand from your warehouse management system or OMS, inbound purchase orders, and shelf life data where applicable to segment every SKU by health status and generate recommended actions for each segment.

SKU health segments FireAI creates:

  • Active: Selling at or above category-average velocity. No action needed.
  • Slow movers: Selling but below velocity threshold. Candidates for markdown or promotional push.
  • At risk: Velocity declining month-over-month with significant stock on hand. Markdown or bundle before becoming dead stock.
  • Dead stock: Zero sales in the configured period (default 90 days) with stock on hand above minimum quantity. Requires immediate action: liquidation, bundle as gift-with-purchase, or stock transfer.
  • Expiry risk: Products with shelf-life constraints where remaining shelf life is less than 2x the days of cover at current velocity. Flagged separately with exact expiry batch data.
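The segmentation rules above can be expressed as a simple decision function. The 90-day dead window and the 2x shelf-life rule follow the text; the input names, the 50% slow-mover velocity threshold, and the at-risk cover condition are illustrative assumptions.

```python
# Illustrative SKU health classifier for the segments described above.
# Thresholds beyond the 90-day window and 2x shelf-life rule are assumptions.
def classify_sku(days_since_last_sale, stock_on_hand, min_qty,
                 velocity_units_per_day, category_avg_velocity,
                 velocity_trend_pct, shelf_life_days_remaining=None,
                 dead_window_days=90, slow_threshold=0.5):
    days_of_cover = (stock_on_hand / velocity_units_per_day
                     if velocity_units_per_day > 0 else float("inf"))
    # Expiry risk is checked first: remaining shelf life < 2x days of cover.
    if (shelf_life_days_remaining is not None
            and shelf_life_days_remaining < 2 * days_of_cover):
        return "expiry_risk"
    # Dead: no sales in the window, with stock above the minimum quantity.
    if days_since_last_sale >= dead_window_days and stock_on_hand > min_qty:
        return "dead"
    # At risk: declining velocity with significant stock still on hand.
    if velocity_trend_pct < 0 and days_of_cover > dead_window_days:
        return "at_risk"
    # Slow mover: selling, but well below category-average velocity.
    if velocity_units_per_day < slow_threshold * category_avg_velocity:
        return "slow_mover"
    return "active"
```

In practice each rule would be configurable per category, but the priority order matters: expiry risk must be caught before a product is written off as merely slow.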

What FireAI tracks:

  • Cash trapped per SKU: stock quantity multiplied by landed cost, giving finance a rupee value for the dead inventory problem rather than just a unit count
  • Days of cover at current velocity for each SKU, segmented by warehouse location
  • Tail SKU contribution analysis: what percentage of your total SKU count represents less than 5% of revenue? This measures catalog complexity cost.
  • Inbound PO risk: SKUs with declining velocity that have open purchase orders still incoming, creating future dead stock exposure
  • Reorder point calibration: for active SKUs, are safety stock levels set correctly given actual demand variability, or are they creating systematic overstock?
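The cash-trapped and tail-share rollups above reduce to a short aggregation. The record shape and the 5% revenue cutoff here are illustrative assumptions mirroring the text.

```python
# Sketch of the tail-SKU share and dead-stock cash rollups described above.
# The `skus` record shape and 5% revenue cutoff are assumptions.
def tail_and_cash_metrics(skus, tail_revenue_share=0.05):
    """skus: list of dicts with 'revenue', 'stock', 'landed_cost',
    'velocity' (units/day). Returns the tail's share of SKU count and
    cash trapped in zero-velocity dead stock."""
    total_rev = sum(s["revenue"] for s in skus) or 1
    # Rank ascending by revenue; the tail is the largest set of lowest
    # sellers whose combined revenue stays under the cutoff.
    ranked = sorted(skus, key=lambda s: s["revenue"])
    cum, tail_count = 0.0, 0
    for s in ranked:
        cum += s["revenue"]
        if cum / total_rev > tail_revenue_share:
            break
        tail_count += 1
    # Cash trapped: stock quantity x landed cost for non-moving SKUs.
    cash_trapped = sum(s["stock"] * s["landed_cost"]
                      for s in skus if s["velocity"] == 0)
    return {"tail_pct_of_sku_count": tail_count / len(skus),
            "dead_stock_value": cash_trapped}
```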

Real example: A personal care D2C brand with 220 active SKUs ran a dead SKU and tail analysis through FireAI and identified 38 SKUs that had not sold in 120 days but collectively had 4,200 units in stock at a landed cost of ₹18.4 lakh. An additional 54 tail SKUs represented 24.6% of SKU count but only 1.8% of revenue. Delisting 22 tail SKUs, liquidating 28 dead stock SKUs through a clearance bundle campaign, and using 10 units as gift-with-purchase on high-value orders recovered ₹11.2 lakh in working capital over 60 days and reduced pick-face complexity by 18%.

FireAI natural language queries:

  • "List SKUs with zero sales in the last 90 days that have stock above 50 units"
  • "Which tail SKUs together represent 5% of revenue but more than 20% of active SKU count?"
  • "Show me expiry risk in the next 60 days across all perishable batches"

Ask FireAI


Which SKUs have no sales but significant stock on hand?

Inventory Health Dashboard

  • Dead Stock Value: ₹18.4 Lakh (-38.2%)
  • Dead SKU Count: 38 (-42.4%)
  • Tail SKU Revenue Share: 4.2% (-1.8%)
  • Expiry Risk Units: 620 (-28.6%)

Chart: Dead Stock Value Trend -- last 12 months (₹ Lakh)
Chart: SKU Health Distribution -- current inventory snapshot: Active, Slow mover, At risk, Tail, Dead

Bundle and Cross-Sell Performance

Bundles and cross-sell recommendations are powerful commercial tools when they increase basket margin, not just basket size. A shampoo plus conditioner bundle that discounts both items too aggressively may move units without improving profit compared to buying the components separately. A cross-sell recommendation with a high attach rate but low incremental margin may only be cannibalizing purchases that would have happened anyway.

Most D2C brands measure bundles by revenue contribution and cross-sell by attach rate. These are incomplete metrics because they do not capture whether the bundle or cross-sell is creating new margin or simply redistributing existing purchase intent into a different package.

FireAI tracks bundle and cross-sell economics at the combination level, computing incremental margin generated per bundle versus component-only purchases and measuring attach rate quality by separating organic cross-sells from paid-traffic-driven ones.

What FireAI tracks for bundle and cross-sell analytics:

  • Bundle blended margin: the weighted average contribution margin across all SKUs in a bundle at the bundle price, compared to the same items purchased at their individual prices
  • Incremental margin per bundle: does buying the bundle create more margin than the sum of individual purchases? Bundles that discount more than the margin buffer are destroying value even as they increase revenue
  • Cross-sell attach rate by placement: how does the attach rate differ between PDP widget, cart page upsell, checkout recommendation, and post-purchase email? The placement with the highest attach rate is not always the most profitable
  • Organic versus paid attach: cross-sells driven by organic browsing behavior reflect genuine purchase affinity; cross-sells from paid retargeting may have a cost that offsets the incremental revenue
  • Cross-sell pair analysis: which product combinations have the highest incremental margin, meaning purchases that would not have happened without the recommendation
  • Bundle cannibalization check: is the bundle taking revenue from full-price individual SKU purchases, or is it genuinely adding net category revenue? Cannibalization analysis compares channel and customer segment behavior before and after bundle introduction
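The incremental-margin idea above can be made concrete with a two-item bundle. This is a minimal counterfactual model, not FireAI's actual computation: it assumes every bundle buyer would have bought the anchor item at list price anyway, and that a fraction of them would also have bought the add-on at full price (the cannibalized share).

```python
# Counterfactual sketch of incremental margin per bundle sold.
# The two-item structure and p_would_buy_addon estimate are assumptions.
def bundle_incremental_margin(bundle_price, anchor_price, addon_price,
                              anchor_cost, addon_cost, p_would_buy_addon):
    """Incremental margin per bundle sold versus the counterfactual purchase.

    p_would_buy_addon: estimated share of bundle buyers who would have
    bought the add-on anyway at full price (cannibalized purchases).
    """
    bundle_margin = bundle_price - anchor_cost - addon_cost
    # Counterfactual: every buyer takes the anchor at list price; a
    # fraction would also have bought the add-on at list price.
    counterfactual = (anchor_price - anchor_cost) \
        + p_would_buy_addon * (addon_price - addon_cost)
    return bundle_margin - counterfactual
```

With a ₹600 anchor (₹250 cost), a ₹400 add-on (₹150 cost), and an ₹850 bundle price, the bundle is margin-positive only when the cannibalized share is low; at a high cannibalized share, the discount is subsidizing purchases that would have happened at full price anyway, which is exactly the failure mode described above.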

Real example: A hair care D2C brand introduced 4 new bundles and measured performance through FireAI after 60 days. Two bundles had strong attach rates but negative incremental margin: the discount depth exceeded the margin on the added items, meaning the bundles were effectively subsidizing what would have been full-price purchases anyway. One bundle had a modest attach rate but the highest incremental margin in the catalog because it was pairing a discovery product with a replenishment product from different purchase cycles. FireAI identified this combination as the highest ROI bundle investment, and reallocation of promotional budget toward it increased catalog-level blended margin by 2.1 points.

FireAI natural language queries:

  • "Which bundles have lower blended margin than buying components separately at the current list price?"
  • "What is the cross-sell attach rate for the Vitamin C serum widget on the moisturizer PDP?"
  • "Top 5 cross-sell pairs by incremental margin, excluding paid traffic sessions"

Ask FireAI


Which bundles are actually increasing margin vs individual purchases?

Bundle and Cross-Sell Dashboard

  • Bundle Revenue Share: 18.4% (+2.8%)
  • Avg Bundle Blended Margin: 22.6% (+1.4%)
  • Cross-Sell Attach Rate: 11.8% (+2.1%)
  • Incremental Margin MTD: ₹2.8 Lakh (+18.4%)

Chart: Cross-Sell Attach Rate Trend -- last 8 weeks (organic sessions only, %)
Chart: Incremental Margin by Bundle -- vs same items bought separately (last 60 days): Growth Serum Bundle, Hair Duo, Skin Starter Kit, Festive Gift Set

New Product Launch Performance Tracker

A new product launch is more than week-one sales. You need velocity versus forecast, early return and review signals, cannibalization of older SKUs in the same category, channel mix performance, and ad efficiency versus organic traction. Most brands piece this together from 4 to 6 different tools and reports, arriving at a coherent picture weeks after the launch window has passed and course-correction opportunities are gone.

FireAI brings all launch performance threads into one tracker that updates daily. Product managers, brand managers, and growth leads share a single view of launch health rather than conflicting dashboards from separate platforms.

What FireAI tracks in the launch performance framework:

  • Daily and weekly sell-through velocity indexed against launch forecast and against analogous past launches from the same category
  • Return rate in the first 14 days: early returns are a quality or expectation signal; a return rate above the category baseline in week 1 predicts a long-term margin problem if not corrected
  • Review velocity and average rating: how quickly are reviews accumulating and what is the early rating distribution? A product with a 3.6 average after 50 reviews has a different trajectory than one with 4.4 after 50 reviews
  • Channel mix performance: is the launch performing differently on D2C site versus Amazon versus Nykaa? Channel mix tells you whether the product has been discovered organically or is dependent on paid visibility
  • Cannibalization versus incrementality: did the new SKU take sales from existing catalog SKUs in the same category, or did it add net new revenue to the category? This is the critical test of whether the launch expanded or redistributed the business
  • Ad efficiency on launch: ROAS and CAC for paid campaigns tagged to the new product, compared to catalog-average paid efficiency
  • Launch cohort repurchase: what percentage of first-purchase customers came back within 30, 60, and 90 days? Early repurchase rate is the strongest signal of long-term product-market fit

Health flag triggers: FireAI automatically raises a flag when any launch metric crosses a configurable threshold: return rate above category baseline by more than 2 percentage points, ad ROAS below 1.8, review rating below 4.0 after 30 reviews, or velocity below 60% of forecast by day 14. These flags appear in a shared launch dashboard so all stakeholders see the same signal at the same time.
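The flag logic above maps directly to a threshold check. The default values mirror the text; the metric dictionary keys and return shape are assumptions for illustration.

```python
# Sketch of the launch health-flag triggers described above.
# Threshold defaults follow the text; metric keys are assumptions.
DEFAULT_THRESHOLDS = {
    "return_rate_excess_pp": 2.0,      # pts above category baseline
    "min_roas": 1.8,
    "min_rating": 4.0,                 # applies once >= 30 reviews exist
    "min_velocity_vs_forecast": 0.60,  # checked from day 14 onward
}

def launch_flags(m, t=DEFAULT_THRESHOLDS):
    """m: dict of current launch metrics. Returns the list of raised flags."""
    flags = []
    excess = m["return_rate_pct"] - m["category_baseline_return_pct"]
    if excess > t["return_rate_excess_pp"]:
        flags.append("return_rate")
    if m["roas"] < t["min_roas"]:
        flags.append("ad_roas")
    if m["review_count"] >= 30 and m["avg_rating"] < t["min_rating"]:
        flags.append("review_rating")
    if (m["days_since_launch"] >= 14
            and m["velocity_vs_forecast"] < t["min_velocity_vs_forecast"]):
        flags.append("velocity")
    return flags
```

Because every stakeholder evaluates the same thresholds against the same metrics, a raised flag means the same thing on every copy of the launch dashboard.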

FireAI natural language queries:

  • "Compare the launch of SPF Moisturizer to the prior year Vitamin C Serum launch -- first 30 days"
  • "Did the new shampoo variant cannibalize the existing shampoo SKU or add net category revenue?"
  • "What is the 30-day repurchase rate for customers who bought the new serum in their first order?"

Ask FireAI


How is the new product launch tracking vs forecast?

Launch Performance Dashboard

  • Velocity vs Forecast: 78% (-22%)
  • Return Rate (Day 21): 11.4% (+4.6%)
  • Avg Review Rating: 3.9 (-0.4%)
  • 30-Day Repurchase Rate: 18.4% (+2.1%)

Chart: Daily Units vs Forecast -- new launch, first 21 days
Chart: Launch Performance vs Benchmark -- current launch vs prior analogous launch (Day 21 indexed): Velocity %, Return rate %, Review rating, Repurchase %

Why did the new serum launch underperform by 34% in the first 45 days?

Frequently asked questions