Versions of this article have been published in Luminate and RetailWorld magazine.
In the midst of a pervasive cost-of-living crisis, the escalating prices of everyday essentials have become a concern for consumers and businesses alike. The price of groceries has been steadily increasing, further burdening the already mounting weekly supermarket bill. Like me, you may have been surprised to see a bag of potato chips reach $9, dairy-free milk increase by 60% and spreadable butter hit $8 a tub.
As product costs surge and raw material shortages persist, the increasing price of supermarket items is set to continue. Prices will inch closer to critical pricing thresholds, which will start having a direct impact on consumers’ likelihood to purchase. In some cases, consumers will trade down to lower-priced or private-label alternatives, shop at different retailers or stop purchasing products altogether. To combat these challenges, businesses are looking at strategies to protect profit margins, avoid hitting critical pricing thresholds, and maintain their market share. Pack Price Architecture (PPA) is one of the valuable tools that marketers and revenue managers can use to maintain or optimise their position, despite evolving market challenges.
This piece explores the PPA levers that marketers and revenue managers can pull to protect market share and optimise profitability. We then explore best-in-class techniques for testing the impact of future PPA changes and how to get closer to predicting in-market behaviour.
There are a number of levers marketers and revenue managers can pull to optimise PPA:
These pack/price hierarchy principles are nothing new, but amidst the cost-of-living crisis, they are key levers that marketing and revenue managers can pull to maintain (or even increase) penetration, value and profitability.
Revenue managers will have data on past pack/price changes in market. However, this data will only provide a retrospective view, not a prospective view of what could be. Hence, any impact of future PPA changes is unknown and requires estimation – estimates which may or may not be accurate depending on the nature of the data available in the category.
It’s dangerous and limiting to make pricing decisions about the future based solely on what’s happened in the past. While it may be tempting to rely solely on historical information such as past sales data or scanner panel data, it’s worth remembering that this approach is anchored to a different window in time, when consumer and shopper mindsets were different. Moreover, relying solely on historical data means having to project future scenarios within the confines of past price points. This is precisely why stated preference experiments are a powerful means of augmenting the so-called ‘revealed preference’ patterns shown in historical data. Pricing optimisation is particularly important in inflationary times, and doing it well matters to the outcome.
One way of gaining stronger estimates of the impact of PPA changes is to use experimental research, with Choice Modelling being the gold standard. Choice Modelling is a powerful technique that allows us to present future scenarios to respondents and elicit any changes to their likely purchase behaviour. Put simply, an FMCG choice model consists of a series of shopping tasks, whereby a respondent selects from a shelf the next product(s) they would buy. Across these shopping tasks the pack and/or price (depending on what is being tested) will vary. Through modelling and analytics we are then able to determine the impact of changing pack and/or price on key commercial metrics such as units sold, volume and value. If COGS (cost of goods sold) is included as an analytical input, the impact of PPA changes on profitability can also be determined.
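To make this concrete, the sketch below shows how a fitted choice model is typically used to translate a price change into units, value and profit. It uses a simple multinomial-logit share formula; the brand utilities, price coefficient, market size and COGS figures are all illustrative assumptions, not outputs from real research.

```python
import math

def shares(prices, base_utils, price_coef=-1.2):
    """Multinomial-logit shares: utility = base utility + coefficient * price."""
    utils = [b + price_coef * p for b, p in zip(base_utils, prices)]
    exps = [math.exp(u) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

def commercial_metrics(prices, base_utils, cogs, market_units=100_000):
    """Convert predicted shares into units, value and profit per SKU."""
    s = shares(prices, base_utils)
    units = [si * market_units for si in s]
    value = [u * p for u, p in zip(units, prices)]
    profit = [u * (p - c) for u, p, c in zip(units, prices, cogs)]
    return units, value, profit

# Hypothetical three-SKU shelf. Scenario A is the current world;
# Scenario B raises our SKU (index 0) by 50 cents.
base = [0.5, 0.3, 0.0]        # assumed brand utilities
cogs = [2.00, 1.80, 1.50]     # assumed cost of goods sold per unit
units_a, value_a, profit_a = commercial_metrics([4.00, 3.80, 3.50], base, cogs)
units_b, value_b, profit_b = commercial_metrics([4.50, 3.80, 3.50], base, cogs)
print(f"SKU 0 units: {units_a[0]:.0f} -> {units_b[0]:.0f}")
print(f"SKU 0 profit: ${profit_a[0]:,.0f} -> ${profit_b[0]:,.0f}")
```

The direction of the result is the point: the price rise loses units but earns more margin per unit, and the model quantifies which effect wins.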
A Choice Model fills the gap in existing competitive data, helping us extrapolate beyond what is known and consider all the scenarios that could deliver better outcomes for the brand or portfolio at hand. However, a Choice Model will only be useful if it closely mirrors the choices that shoppers make, so that any deltas in their behaviour are as real as possible.
Getting closer to reality with choice modelling hinges on the design process. The more closely the research replicates consumers’ purchase process, the greater the likelihood of obtaining realistic results. While no pricing research methodology is watertight, a carefully designed choice model means results are more likely to reflect real-world dynamics.
To close this gap, we need to get close to the category of focus and design for the key elements that influence shoppers. Where this is not feasible from a design perspective, we can account for it through modelling and analytics. However, modelling and back-end calibration will only get you so far; getting the design of an FMCG choice model right from the start is paramount.
There are four design features that are critical to get right to ensure useful choice modelling outputs:
Our first step is to determine the experimental world that we are testing into. This is the experimental version of the existing world, covering the SKUs included, how they are visualised, and their pricing strategies – both level and mechanics.
Critical inputs at this stage are retailer planograms (e.g. Woolworths and Coles), sales data, and clarity on which existing SKUs we are looking to make changes to. This forms the basis of the shelf that is brought to life (as shown below). The SKUs in the example below cover >80% of the sales in market and reflect the position and share of shelf each SKU commands in reality.
At this stage, there can be retailer complexities that need to be taken into consideration, such as the presence or absence of brands in different retailers, or differences in brand sales. In this situation, there may be merit in creating separate choice models and designs for Woolworths and Coles.
Brands don’t have prices, but rather pricing strategies, which play out over a 52-week period. For example, one brand may be on an “everyday low price” 52 weeks of the year, while another brand has 30 weeks at a bench price, 10 weeks on a shallow promotion (e.g. 20% off) and 12 weeks on a deep promotion (e.g. 40% off).
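Using the week counts and discount depths from the example above, a pricing strategy collapses to an expected shelf price, which is what the design needs to reproduce over the course of the experiment. The $4.00 bench price below is an assumption for illustration.

```python
# Expected weekly price implied by a 52-week pricing strategy.
BENCH = 4.00  # assumed bench (everyday) price

weeks = [
    (30, BENCH),         # 30 weeks at bench price
    (10, BENCH * 0.80),  # 10 weeks on a shallow promotion (20% off)
    (12, BENCH * 0.60),  # 12 weeks on a deep promotion (40% off)
]

total_weeks = sum(w for w, _ in weeks)
avg_price = sum(w * p for w, p in weeks) / total_weeks
print(f"Average weekly price over {total_weeks} weeks: ${avg_price:.2f}")
```

An “everyday low price” brand, by contrast, would be modelled as a single price level for all 52 weeks, so the two strategies can land at a similar average price while behaving very differently at shelf.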
Different brands will have different pricing strategies, and our design needs to take this into account. And not just for our brand, but competitor brands as well.
We also need to bring these pricing mechanics to life, which we do through the creation of pricing tickets, as shown below. The way information is presented on the ticket intentionally reflects reality. This includes the colour of the ticket, how the everyday vs. promotional price is called out and the hierarchy of information. Getting the ticket right is another critical design element that helps to close the gap between research and reality.
Next, we need to determine which brands and SKUs to promote together. These promotional rules need to be considered both within a brand and across brands. Continuing with the soft drinks example from earlier, a retailer may decide not to promote Coca-Cola and Pepsi on a deep promotion (e.g. 50% off) at the same time. Our design needs to reflect this dynamic; otherwise a respondent could see a shelf where Coca-Cola and Pepsi both have a 50% special concurrently. If this isn’t accounted for in the design, we would not only be showing respondents unlikely scenarios, but the quality of the outputs and outcomes drawn from the research would suffer.
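In practice, rules like this are enforced by filtering the experimental design: candidate shelf scenarios that break a promotional rule are never shown to respondents. The sketch below illustrates the idea with the Coca-Cola/Pepsi rule from the text; the price levels and third brand are illustrative assumptions.

```python
from itertools import product

PRICE_LEVELS = ["bench", "shallow", "deep"]
BRANDS = ["coca_cola", "pepsi", "private_label"]

def violates_rules(scenario):
    """Rule: the two competing colas may not both be on deep promotion at once."""
    return scenario["coca_cola"] == "deep" and scenario["pepsi"] == "deep"

# Enumerate every combination of price levels across brands,
# then keep only the scenarios that respect the promotional rules.
all_scenarios = [dict(zip(BRANDS, levels))
                 for levels in product(PRICE_LEVELS, repeat=len(BRANDS))]
valid = [s for s in all_scenarios if not violates_rules(s)]
print(f"{len(valid)} of {len(all_scenarios)} scenarios retained")
```

A real design would layer on within-brand rules as well (e.g. two pack sizes of the same brand promoting together), but the mechanism is the same: constrain the scenario space before fielding.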
In some categories, we need to consider where products are sold in-store. In some cases, products are available in multiple locations (e.g. ambient and chilled) or promoted off-location (e.g. gondola ends).
First, we need to ensure that the shopper is thinking about a buying occasion at shelf, and not a buying situation out of aisle. Rather than just asking the respondent what they would purchase, we need to anchor the choice model shopping tasks in a relevant mission. One way to frame this is to ask respondents about a recent supermarket trip when they purchased the category. One of these past trips is then used to frame a future buying decision. This helps make the task feel more relevant to the respondent, as it’s a mission or trip they typically do within the category.
Secondly, when dealing with multi-location issues, we need to determine what proportion of category sales occur off location versus at shelf. Sometimes this information is known; at other times it is unknown and estimates need to be made. Alternatively, direct questions within a survey can be asked to determine what proportion of category purchases are made on versus off location. This information can then be used to calibrate sales data, ensuring results are more accurate and any noise or influence from off-location sales is minimised.
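The calibration step itself is a simple scaling once the off-location share is known (or estimated from survey questions). The 25% share and unit count below are illustrative assumptions.

```python
def at_shelf_units(total_units, off_location_share):
    """Scale total category units down to the portion sold at shelf."""
    if not 0 <= off_location_share < 1:
        raise ValueError("off_location_share must be in [0, 1)")
    return total_units * (1 - off_location_share)

total = 80_000  # assumed total category units per period
shelf = at_shelf_units(total, off_location_share=0.25)
print(f"At-shelf units: {shelf:.0f}")  # 60000
```

The at-shelf figure is then the base the choice model is calibrated against, so that simulated deltas are not diluted by volume the shelf experiment never represented.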
In sum, amidst the cost-of-living crisis, PPA is a key lever marketers and revenue managers can pull to drive value and protect profit margins. Changing pack sizes and stretching prices are critical decisions with wide-reaching implications. How these changes are tested is of critical importance, as the results will underpin business cases and discussions with retailers. Marketers and revenue managers – before you embark on your next PPA study, ensure your design is watertight and the complexities of the category, pricing mechanics and off-location sales are taken into account.