This article was originally published in Luminate.
Combining economics, psychology and experimental analysis, behavioural economics (BE) is built on the observation that humans are not always predictable or rational in their decision making. Rather, human decision making is characterised by ‘bounded rationality’. Limitations in knowledge and cognitive capacity lead people to ‘satisfice’ – in other words, to pursue a course of action that satisfies the minimum requirements for a particular goal – rather than ‘optimise’ by collecting as much data as possible to make the best choice.
Much has been made of the application of BE in policy making and marketing; however, its application stretches beyond this and has implications for research design. BE is typically applied in ‘problem framing’ and ‘delivery’ (as illustrated below), but it’s critical not to forget the middle step: the evidence base, or research design, phase.
The way in which surveys are structured, questions are worded and information is framed affects the quality and validity of the responses captured. Without an understanding of BE, researchers risk designing biased surveys and experiments, which leads to incorrect or incomplete conclusions about consumers and their likely behaviour.
While hundreds of biases exist, we’ll look at three applied examples: setting context, presentation of information and questionnaire ordering. Integrating BE into research design helps keep every study as unbiased as possible. Potential biases must be considered directly in the design process; without careful consideration, they are easily missed.
Setting context
BE teaches us that we over-estimate the consistency of people’s behaviour, at the expense of understanding the context of a situation. When designing a survey, we need to carefully consider how we bring context to life.
Imagine this: you’re designing a survey to gauge consumer price expectations for a new brand of functional milk. Without context, you’re likely to get wildly inconsistent price expectations, as consumers may rely on vague memories of what their preferred brand of milk costs or on gut feel. The problem with this approach? It fails to reflect reality because, in the real world, this decision would be made in context – at the supermarket, when the consumer is standing in front of a fridge filled with similar items.
To get closer to reality, we need to leverage the BE principle of ‘anchoring’, which states that individuals rely heavily on an initial piece of information, known as the ‘anchor’, when making decisions. By including a suitable anchor we can more accurately re-create the context experience and understand the relative price trade-offs consumers make.
To do this, we might show other products and their prices (as illustrated). This acts as our anchor before we reveal the new functional milk brand and ask consumers their price expectations.
Presentation of information
People’s choices are influenced by the way information is framed, be that through wording, reference points or where emphasis is placed. When someone has a tendency to focus on information that is more noteworthy while ignoring other information, this is known as salience bias.
Imagine this: you’re testing a new product idea and have written a detailed description to include in the survey. The description includes an overview of the product purpose, functional traits, ‘reasons to believe’, flavours and distribution details (as illustrated below). To help bring this idea to life, you also include a visual mock-up of the product.
Salience bias is at play in this scenario, as respondents are likely to focus on the image, rather than reading the detailed description. Given respondents have limited attention, they will seek to reduce any cognitive load by looking for short cuts – in this example, it’s the visual mock-up, rather than the 200-word copy.
By including a visual mock-up, researchers and marketers risk not truly testing the underlying concept idea and its commercial viability. Instead, respondents may give disproportionate weight to a visual representation that may not even match the look and feel of the product when it is launched.
One way to reduce salience bias would be to include a timer when the concept is revealed, forcing the respondent to wait 20 to 30 seconds so they’re more likely to read the concept before they can click ‘next’.
A staged concept reveal is another solution: firstly, the respondent would read the written description (without a product image) and respond to a series of questions; secondly, the visual mock-up would be presented and similar questions would be asked.
Questionnaire ordering
The order of questions, or how questions are framed, can ‘prime’ respondents to think about certain issues or concepts, which can in turn unconsciously influence their subsequent responses.
Imagine this: you’ve designed a survey to evaluate consumers’ willingness to pay for different sustainable packaging options, such as biodegradable or carbon-neutral packaging. Asking respondents about their general attitudes to the environment and sustainable behaviours before asking them to indicate their willingness to pay would introduce a ‘priming effect’. This type of question ordering would inflate results, with primed consumers more likely to say they are willing to pay for sustainable packaging options.
One solution to overcome unwanted priming effects is to place the key questions first in the survey – before anything else can influence or prime responses to them. If this is not practical or feasible, you could ask a small subset of the sample (say 20 per cent) the broader questions first, and then compare the results of those who have been primed with those who have not.
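As an illustration, the primed-versus-unprimed comparison boils down to randomly assigning a subset of respondents to the primed condition and comparing the two groups’ average scores. The sketch below uses invented data – the 500 respondents, their simulated willingness-to-pay scores and the 0.8-point priming lift are all hypothetical, purely to show the mechanics:

```python
import random
import statistics

random.seed(42)

# Hypothetical respondent pool of 500 people (IDs only).
respondents = list(range(500))

# Randomly assign ~20% of the sample to the 'primed' condition:
# these respondents see the general sustainability questions first.
primed_ids = set(random.sample(respondents, k=int(0.2 * len(respondents))))

def simulate_response(rid):
    """Simulated willingness-to-pay score; primed respondents get a
    hypothetical +0.8 lift to stand in for the priming effect."""
    base = random.gauss(5.0, 1.5)
    return base + (0.8 if rid in primed_ids else 0.0)

scores = {rid: simulate_response(rid) for rid in respondents}

primed = [scores[r] for r in respondents if r in primed_ids]
unprimed = [scores[r] for r in respondents if r not in primed_ids]

# A sizeable gap between group means flags a priming effect
# that the question ordering is introducing.
gap = statistics.mean(primed) - statistics.mean(unprimed)
print(f"primed n={len(primed)}, unprimed n={len(unprimed)}, mean gap={gap:.2f}")
```

In a real study the scores would of course come from survey responses rather than a simulation, and you would apply a proper significance test to the gap, but the random assignment and group comparison work the same way.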
In sum, incorporating behavioural economics into research and survey design is critical to getting closer to actual decision making. The next time you’re writing a survey or designing research, think carefully about how information is presented and how questions are worded and ordered. Being aware of biases, and remembering to check against them, will lead to more robust design and better conclusions and insights. Don’t be afraid to lean on BE, not just in ‘problem framing’ or ‘delivery’, but also in design. It is critical to closing the research-reality gap and making better-informed decisions.