Market Basket Metrics, Average Ticket & Distinct Customer Transaction Count


We offer a package that includes feasibility, proof of concept, methodology, implementation and technology to enable the seamless integration of distinct transaction counts and market basket metrics into your dimensional data mart or data warehouse. We have developed point solutions in this space for large retailers over the past 10 years. Our current offering is a new design that overcomes the functional and performance limitations of previous approaches, bringing for the first time a large variety of market basket metrics and performance indicators to the analyst's desktop through conventional BI tools.


Market basket metrics differ from the regular quantities and amounts (sales, costs, discounts etc.) in that they are calculated over the complete transactions involved, rather than just selected line items. This allows the formulation of new measures and indicators that take into account not only sales of a given product, but also the whole business generated in the same customer visits. Market basket sales, costs etc. are computed by first identifying all transactions involving specified location, timeframe and product criteria, and then aggregating all sales for these transactions, making sure that each transaction is counted exactly once. The specified criteria may span any level of product, location and time dimensions, as well as special taxonomies and classifiers. Here are some relevant example questions and capabilities that involve market basket metrics:

  • What were the distinct transaction count and average sales for a given product category in the Atlanta district last week?
  • What was the weekly average ticket sales, and margin by product category for all transactions involving Department Y in the Eastern US over last quarter?
  • Study sales and costs of a given product category versus the total sales and costs of the involved transactions.
  • Analyze transaction size, type and department mix for transactions that included a specified product within a given location and timeframe.
  • Rank products by total transaction margin or by average ticket at different levels of the product hierarchy.
  • Rank low- or negative-margin products of a given category by total ticket sales or average ticket margin.
  • Execute these actions from summarized data interactively in seconds using tools such as Cognos, Microstrategy, Business Objects, Microsoft Analytics or even simple spreadsheets.
  • Ensure that all results are based on data recast according to the most recent taxonomy of the product hierarchy, as reflected in the current dimensions.
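The two-step computation described above (first identify the qualifying transactions, then aggregate over them, counting each exactly once) can be sketched in a few lines of Python. The records and field names below are purely illustrative, not drawn from any particular retail schema.

```python
# Sketch: Distinct Transaction Count and Ticket Sales for item "A".
# Each record is one line item: (transaction_id, item, amount).
line_items = [
    ("T1", "A", 4.00), ("T1", "B", 6.00), ("T1", "C", 10.00),
    ("T2", "B", 3.00), ("T2", "D", 7.00),
]

# Step 1: identify all transactions whose line items satisfy the filter.
qualifying = {txn for txn, item, _ in line_items if item == "A"}

# Step 2: aggregate over the *whole* qualifying transactions,
# counting each transaction exactly once.
distinct_count = len(qualifying)                       # 1 (only T1 involves A)
item_sales = sum(amt for txn, item, amt in line_items
                 if item == "A")                       # 4.00
ticket_sales = sum(amt for txn, _, amt in line_items
                   if txn in qualifying)               # 20.00 (all of T1)
```

Note that T2 contributes nothing to either metric, even though it has line items, because no item in it satisfies the filter.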

To answer such questions, we need key performance indicators (KPIs) that involve distinct counts of transactions and other non-additive transaction-level metrics, which pose integration challenges in OLAP/ROLAP environments. Examples of such metrics and indicators are:

  • Distinct Transaction Count: The number of distinct transactions that involve line items satisfying given filtering criteria. This metric approximates the count of distinct customer visits and is typically used as a denominator to compute various key indicator averages.
  • Ticket Sales: The total sales across all items of transactions that satisfy certain filtering criteria. This is in contrast to regular sales, an additive metric that involves only the line items satisfying the filtering conditions. Imagine a store that has only two transactions on a given day. One customer buys 10 different items, including item A. Another customer buys 5 items, none of them A. Item A Sales is the amount paid for item A alone. Item A Ticket Sales is the amount paid for all 10 items of the first customer's transaction. The second transaction is not included because it does not involve item A.
  • Average Sales: The sum of sales satisfying given filtering criteria divided by the distinct number of transactions that generated them. Average margin, average discount and other similar indicators are similarly defined.
  • Average Ticket Sales: First, find all transactions that involve items satisfying the filtering criteria. For those transactions, calculate the sum of their ticket sales divided by the number of transactions. Average ticket margin, average ticket cost, average ticket discount etc. follow the same pattern. The key to this computation is to count each transaction only once, regardless of how many of its items satisfy the filtering criteria.
  • Transaction-to-Sales Factors: A class of ratios, such as the ratio of Ticket Sales to Sales, indicating the leveraging effect of the filtered items relative to the total purchase. Similar computations can be made for margin leverage derived from sales and cost metrics.
  • Contributing Rank: The idea is to rank products within subcategories, subcategories within categories, categories within super-categories etc., so that the children of any given parent in the product taxonomy are ranked by their contribution to the parent's ticket sales. These KPIs are customized to address specific merchandising objectives.
  • Statistical Variance Metrics: Along with the average ticket, it is possible to track the sum of squares, which helps determine how scattered or how consistent the underlying observations are.
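Given the per-transaction components, the derived KPIs above follow mechanically. The sketch below assumes the filtered-item sales and the full ticket total have already been computed for each qualifying transaction; the figures are illustrative. It also shows how a running sum of squares yields the variance of ticket sizes without revisiting the detail.

```python
import math

# Illustrative components for three transactions that matched a filter:
# (filtered-item sales within the transaction, total ticket sales of it)
components = [(4.00, 20.00), (2.00, 8.00), (6.00, 12.00)]

n = len(components)                            # Distinct Transaction Count
sales = sum(s for s, _ in components)          # regular (filtered-item) sales
ticket_sales = sum(t for _, t in components)   # Ticket Sales

avg_sales = sales / n                          # Average Sales
avg_ticket_sales = ticket_sales / n            # Average Ticket Sales
leverage = ticket_sales / sales                # Transaction-to-Sales factor

# Variance of ticket sizes from an additively maintained sum of squares:
sum_sq = sum(t * t for _, t in components)
variance = sum_sq / n - avg_ticket_sales ** 2
std_dev = math.sqrt(variance)
```

Because `n`, `sales`, `ticket_sales` and `sum_sq` are all sums, they can be rolled up across summary cells, and the ratios are computed last, at query time.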

Current Practice and Workarounds

Distinct metrics and counts are not additive and cannot be pre-aggregated into summaries to aid performance. Some of the problems that stem from this limitation are:

  • Queries are limited to long-running batch reports that must scan large volumes of data.
  • Market basket metrics and KPIs are not easy to integrate with regular OLAP/ROLAP cubes and tools, particularly when it comes to features such as "same store", comp/non-comp, TY/LY, store relocation handling etc.
  • Merchants usually decide on a convenient product and location grain at which to track average ticket. Distinct transaction counts are then calculated for these levels and transported to the DSS/BI environment, where they are used to report Average Sales at those levels only. This is inflexible, restricting such reporting to a few high-level product categorizations, such as super-category or department.
  • As product and other taxonomies mutate over time, pre-stored counts do not get properly recast and introduce bias, particularly in trending and "this year" to "last year" comparisons.

Analysts are looking for a way to calculate these useful metrics on demand, in interactive response time, and at a lower grain of time, product, location, promotion type, department or vendor classifications. However, on-demand computation across all of these dimension and level combinations from the transaction detail is impractical. It either demands an enormous volume of precalculated data stored for all possible classification combinations, or it requires long-running queries and expensive resources to roll up the distinct numbers from the transaction details. This is true even with specialized database appliances that facilitate large data volumes and massive scans.

A Better Solution

Our VDM approach decomposes all distinct metrics into components that can be stored efficiently, offering the following advantages over batch and/or outsourced methods:

  • Seamless integration with other DSS/BI facts and metrics in the EDW or marts, so that average sales, average ticket, average margin, comp and non-comp, and TY vs. LY can be presented alongside regular additive metrics such as sales amounts and quantities.
  • Interactive response time.
  • Consistency with current dimensions: cubes are recast daily to the current product hierarchy across all history.
  • Easily architected into most BI tools.
  • Ability to leverage aggregations and other optimization features in the database technology used, such as materialized query tables (MQT) in DB2 or materialized views in Oracle.
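To illustrate the general principle behind mergeable distinct-count components (this is an illustration only, not the VDM decomposition itself): if each summary cell stores the set of contributing transaction IDs rather than a plain count, the cells can be combined by set union, so a roll-up counts each transaction exactly once even when one transaction touches several cells. The cell keys and IDs below are hypothetical.

```python
# Per (category, day) summary cells store the *set* of transaction IDs,
# not a pre-counted number. Sets merge by union, so rolling up to the
# department level never double-counts a transaction that bought items
# in more than one category.
cells = {
    ("cat_A", "2024-01-01"): {"T1", "T2"},
    ("cat_B", "2024-01-01"): {"T2", "T3"},  # T2 touched both categories
}

department = set().union(*cells.values())
distinct_count = len(department)            # 3, not 4: T2 counted once
```

Naively summing the per-cell counts would give 4; the union gives the correct 3. In practice, approximate mergeable structures (e.g., HyperLogLog sketches) can bound the storage cost of such components while preserving this union property.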

Next Steps

InfoKarta offers a low-risk process consisting of the following progressive steps:

  1. A 3-week assessment of the existing DSS model and business requirements, resulting in a feasibility report, general recommendations and a plan to implement this capability.
  2. We will take a small amount of your transaction data and dimensions, and construct a prototype that demonstrates the model and its capabilities. After the demonstration we will review and refine your specific requirements for customization.
  3. We will support, assist, or undertake agreed modifications or enhancements to your environment, according to the plan developed during the assessment phase, and revised after the pilot demonstration.
  4. We will build and deliver the solution including DDL, views, tables, maintenance scripts, recasting scripts and provide the computational algorithms in the form of executable code, database procedures or ETL framework components.
  5. We will assist you in architecting the access views into the BI tool of your choice.
  6. We will provide validation, tuning and on-going support.

Please contact us at for further details.