psm_analysis {pricesensitivitymeter}    R Documentation

Van Westendorp Price Sensitivity Meter Analysis (PSM)

Description

psm_analysis() performs an analysis of consumer price preferences and price sensitivity known as the van Westendorp Price Sensitivity Meter (PSM). It takes respondents' price preferences (from survey data) as input and estimates acceptable price ranges and price points. For a description of the method, see the Details section.

Usage

psm_analysis(
  toocheap, cheap, expensive, tooexpensive,
  data = NA,
  validate = TRUE,
  interpolate = FALSE,
  interpolation_steps = 0.01,
  intersection_method = "min",
  acceptable_range = "original",
  pi_cheap = NA, pi_expensive = NA,
  pi_scale = 5:1,
  pi_calibrated = c(0.7, 0.5, 0.3, 0.1, 0),
  pi_calibrated_toocheap = 0, pi_calibrated_tooexpensive = 0
  )

Arguments

toocheap, cheap, expensive, tooexpensive

If a data.frame/matrix/tibble is provided in the data argument: names of the variables in the data.frame/matrix/tibble that contain the survey data on the respondents' "too cheap", "cheap", "expensive" and "too expensive" price preferences.

If no data.frame/matrix/tibble is provided in the data argument: numeric vectors that directly include this information. If numeric vectors are provided, it is assumed that they are sorted by respondent ID (the preferences for respondent n are stored at the n-th position in all vectors).

If the toocheap price was not assessed, a variable/vector of NAs can be used instead. This variable/vector needs to have the same length as the other survey information. If toocheap is NA for all cases, it is possible to calculate the Point of Marginal Expensiveness and the Indifference Price Point, but it is impossible to calculate the Point of Marginal Cheapness and the Optimal Price Point.
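
A minimal sketch of this case (ch, ex and tex are the numeric vectors created in the Examples section below; the all-NA "too cheap" vector is hypothetical):

# "too cheap" not assessed: pass an all-NA vector of the same length
psm_no_toocheap <- psm_analysis(toocheap = rep(NA_real_, length(ch)),
  cheap = ch,
  expensive = ex,
  tooexpensive = tex)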

data

data.frame, matrix or tibble that contains the function's input data. A data input is not mandatory: instead of using a data.frame/matrix/tibble, it is also possible to provide the data directly as numeric vectors in the toocheap, cheap, expensive and tooexpensive arguments.

validate

logical. should only respondents with consistent price preferences (too cheap < cheap < expensive < too expensive) be considered in the analysis?

interpolate

logical. should interpolation of the price curves be applied between the actual prices given by the respondents? If interpolation is enabled, the output appears less bumpy in regions with sparse price information. If the sample size is sufficiently large, interpolation should not be necessary.

interpolation_steps

numeric. if interpolate is TRUE: the size of the interpolation steps. Set by default to 0.01, which should be appropriate for most goods in a price range of 0-50 USD/Euro.
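
For illustration, a call with interpolation enabled and an arbitrarily chosen step size (tch, ch, ex and tex as in the Examples section below):

# interpolate between the observed prices in steps of 0.05
psm_interpolated <- psm_analysis(toocheap = tch, cheap = ch,
  expensive = ex, tooexpensive = tex,
  interpolate = TRUE,
  interpolation_steps = 0.05)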

intersection_method

"min" (default), "max", "mean" or "median". defines the method how to determine the price points (range, indifference price, optimal price) if there are multiple possible intersections of the price curves. "min" uses the lowest possible prices, "max" uses the highest possible prices, "mean" calculates the mean among all intersections and "median" uses the median of all possible intersections

acceptable_range

"original" (default) or "narrower". Defines which intersection is used to calculate the point of marginal cheapness and point of marginal expensiveness, which together form the range of acceptable prices. "original" uses the definition provided in van Westendorp's paper: The lower end of the price range (point of marginal cheapness) is defined as the intersection of "too cheap" and the inverse of the "cheap" curve. The upper end of the price range (point of marginal expensiveness) is defined as the intersection of "too expensive" and the inverse of the "expensive" curve. Alternatively, it is possible to use a "narrower" definition which is applied by some market research companies. Here, the lower end of the price range is defined as the intersection of the "expensive" and the "too cheap" curves and the upper end of the price range is defined as the intersection of the "too expensive" and the "cheap" curves. This leads to a narrower range of acceptable prices. Note that it is possible that the optimal price according to the Newton/Miller/Smith extension is higher than the upper end of the acceptable price range in the "narrower" definition.

pi_cheap, pi_expensive

Only required for the Newton Miller Smith extension. If the data argument is provided: names of the variables in the data.frame/matrix/tibble that contain the survey data on the respondents' purchase intent at their individual "cheap"/"expensive" price.

pi_scale

Only required for the Newton Miller Smith extension. Scale of the purchase intent variables pi_cheap and pi_expensive. By default, a five-point scale (5:1) is assumed, with 5 indicating the highest purchase intent.

pi_calibrated

Only required for the Newton Miller Smith extension. Calibrated purchase probabilities that are assumed for each value of the purchase intent scale. Must be in the same order as the pi_scale argument, so that the first value of pi_calibrated corresponds to the first value of pi_scale. Default values are taken from the Sawtooth Software PSM implementation in Excel: 70% for the best value of the purchase intent scale, 50% for the second best value, 30% for the third best value (middle of the scale), 10% for the fourth best value and 0% for the worst value.

pi_calibrated_toocheap, pi_calibrated_tooexpensive

Only required for the Newton Miller Smith extension. Calibrated purchase probabilities for the "too cheap" and the "too expensive" price, respectively. Must be a value between 0 and 1; by default set to zero following the logic in van Westendorp's paper.
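
For instance, the default five-point scale can be combined with different calibrated probabilities; the values below are illustrative assumptions, not recommendations (data_psm_demo and the purchase intent variables are created in the Examples section below):

# default 5-point scale, but with more conservative calibrated probabilities
psm_custom_pi <- psm_analysis(toocheap = "tch", cheap = "ch",
  expensive = "ex", tooexpensive = "tex",
  pi_cheap = "pint_ch", pi_expensive = "pint_ex",
  pi_scale = 5:1,
  pi_calibrated = c(0.6, 0.4, 0.2, 0.05, 0),
  data = data_psm_demo)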

Details

The Price Sensitivity Meter method for the analysis of consumer price preferences was proposed by the Dutch economist Peter van Westendorp in 1976 at the ESOMAR conference. It is a survey-based approach that has become one of the standard price acceptance measurement techniques in the market research industry and is still widely used during early-stage product development.

Price acceptance and price sensitivity are measured in van Westendorp's approach by four open-ended survey questions that ask each respondent at which price they would consider the product to be "too cheap", "cheap", "expensive" and "too expensive".

Respondents with inconsistent price preferences (e.g. "cheap" price larger than "expensive" price) are usually removed from the data set. This function has built-in checks to detect invalid preference structures and removes those respondents from the analysis by default.

To analyze price preferences and price sensitivity, the method uses cumulative distribution functions for each of the aforementioned price steps (e.g. "how many respondents think that a price of x or more is expensive?"). By convention, the distributions for the "too cheap" and the "cheap" price are inverted. This leads to the interpretation "how many respondents think that a price of up to x is (too) cheap?".
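
Conceptually, the value of each curve at a given price is simply the share of respondents whose stated price crosses that point. The snippet below only illustrates this definition (it is not the package's internal code) and uses the ch and ex vectors from the Examples section:

x <- 10
mean(ex <= x)   # share considering a price of x or more as "expensive"
mean(ch >= x)   # inverted: share considering a price of up to x as "cheap"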

The interpretation is built on the analysis of the intersections of the four cumulative distribution functions for the different prices (usually via graphical inspection). The original paper describes four such intersections: the Point of Marginal Cheapness (intersection of the "too cheap" curve and the inverse of the "cheap" curve), the Point of Marginal Expensiveness (intersection of the "too expensive" curve and the inverse of the "expensive" curve), the Indifference Price Point (intersection of the "cheap" and the "expensive" curves), and the Optimal Price Point (intersection of the "too cheap" and the "too expensive" curves).

Besides those four intersections, van Westendorp's article advises analyzing the cumulative distribution functions for steep areas, which indicate price steps.

To analyze reach (trial rates) and estimate revenue forecasts, Newton/Miller/Smith have extended van Westendorp's original model by adding two purchase intent questions that are asked for the respondent's "cheap" and "expensive" price. The purchase probabilities at the respondent's "too cheap" and "too expensive" prices are defined as 0. The rationale is that the "too expensive" price point is prohibitively expensive for the respondent, while a price at the "too cheap" level raises doubts about the product quality.

By combining the standard van Westendorp questions with those two additional purchase intent questions, it becomes possible to summarize the purchase probabilities across respondents (using linear interpolation for the purchase probabilities between each respondent's cornerstone prices). The maximum of this curve is then defined as the price point with the highest expected reach. Moreover, by multiplying the reach by the price, it also becomes possible to estimate the price with the highest expected revenue.
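
As a conceptual sketch of this logic with a small made-up reach curve (not output of psm_analysis() itself):

# toy reach curve: one row per price with the mean calibrated purchase probability
reach_df <- data.frame(price = c(8, 10, 12, 14, 16),
                       reach = c(0.55, 0.62, 0.55, 0.40, 0.22))
revenue <- reach_df$price * reach_df$reach
reach_df$price[which.max(reach_df$reach)]  # price with the highest expected reach
reach_df$price[which.max(revenue)]         # price with the highest expected revenue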

Note that the van Westendorp Price Sensitivity Meter is useful in some cases, but it does not answer every pricing-related question. It can be a good tool for assessing, at a broad level, whether consumers' price perceptions exceed the actual production costs. For more complex analyses (e.g. defining specific prices for different products to avoid cannibalization while at the same time driving incremental growth), other methodological approaches are needed.

Value

The function output consists of the following elements:

data_input:

data.frame object. Contains the data that was used as an input for the analysis.

validated:

logical object. Indicates whether the "validate" option has been used (to exclude cases with intransitive price preferences).

invalid_cases:

numeric object. Number of cases with intransitive price preferences.

total_sample:

"numeric" object. Total sample size of the input sample before assessing the transitivity of individual price preferences.

data_vanwestendorp:

data.frame object. Output data of the Price Sensitivity Meter analysis. Contains the cumulative distribution functions for the four price assessments (too cheap, cheap, expensive, too expensive) for all prices.

pricerange_lower:

numeric object. Lower limit of the acceptable price range as defined by the Price Sensitivity Meter, also known as point of marginal cheapness: Intersection of the "too cheap" and the "not cheap" curves.

pricerange_upper:

numeric object. Upper limit of the acceptable price range as defined by the Price Sensitivity Meter, also known as point of marginal expensiveness: Intersection of the "too expensive" and the "not expensive" curves.

idp:

numeric object. Indifference Price Point as defined by the Price Sensitivity Meter: Intersection of the "cheap" and the "expensive" curves.

opp:

numeric object. Optimal Price Point as defined by the Price Sensitivity Meter: Intersection of the "too cheap" and the "too expensive" curves.

NMS:

logical object. Indicates whether the additional analyses of the Newton Miller Smith Extension were performed.

weighted:

logical object. Indicates whether weighted data was used in the analysis. Output from psm_analysis() always has the value FALSE. For weighted data, use the function psm_analysis_weighted().

data_nms:

data.frame object. Output of the Newton Miller Smith extension: calibrated mean purchase probabilities for each price point.

pi_scale:

data.frame object. Shows the values of the purchase intent variable and the corresponding calibrated purchase probabilities as defined in the function input for the Newton Miller Smith extension.

price_optimal_reach:

numeric object. Output of the Newton Miller Smith extension: Estimate for the price with the highest reach (trial rate).

price_optimal_revenue:

numeric object. Output of the Newton Miller Smith extension: Estimate for the price with the highest revenue (based on the reach).
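
The individual elements listed above can be inspected on the returned object, for example with str() (using the output object created in the Examples section):

# overview of the elements contained in the analysis output
str(output_psm_demo2, max.level = 1)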

References

Van Westendorp, P. (1976) "NSS-Price Sensitivity Meter (PSM) – A new approach to study consumer perception of price", Proceedings of the ESOMAR 29th Congress, 139–167. Available online at https://archive.researchworld.com/a-new-approach-to-study-consumer-perception-of-price/.

Newton, D., Miller, J., Smith, P. (1993) "A market acceptance extension to traditional price sensitivity measurement", Proceedings of the American Marketing Association Advanced Research Techniques Forum.

Sawtooth Software (2016) "Templates for van Westendorp PSM for Lighthouse Studio and Excel". Available online at https://sawtoothsoftware.com/resources/software-downloads/tools/van-westendorp-price-sensitivity-meter

Examples of companies that use a narrower definition than van Westendorp's original paper include Conjoint.ly (https://conjointly.com/products/van-westendorp/), Quantilope (https://www.quantilope.com/resources/glossary-how-to-use-van-westendorp-pricing-model-to-inform-pricing-strategy), and Milieu (https://www.mili.eu/learn/what-is-the-van-westendorp-pricing-study-and-when-to-use-it).

See Also

The function psm_analysis_weighted() performs the same analyses for weighted data.

Examples


set.seed(42)

# standard van Westendorp Price Sensitivity Meter Analysis
# input directly via vectors

tch <- round(rnorm(n = 250, mean = 5, sd = 0.5), digits = 2)
ch <- round(rnorm(n = 250, mean = 8.5, sd = 0.5), digits = 2)
ex <- round(rnorm(n = 250, mean = 13, sd = 0.75), digits = 2)
tex <- round(rnorm(n = 250, mean = 17, sd = 1), digits = 2)

output_psm_demo1 <- psm_analysis(toocheap = tch,
  cheap = ch,
  expensive = ex,
  tooexpensive = tex)

# additional analysis with Newton Miller Smith Extension
# input via data.frame

pint_ch <- sample(x = 1:5, size = length(tex),
  replace = TRUE, prob = c(0.1, 0.1, 0.2, 0.3, 0.3))

pint_ex <- sample(x = 1:5, size = length(tex),
  replace = TRUE, prob = c(0.3, 0.3, 0.2, 0.1, 0.1))

data_psm_demo <- data.frame(tch, ch, ex, tex, pint_ch, pint_ex)

output_psm_demo2 <- psm_analysis(toocheap = "tch",
  cheap = "ch",
  expensive = "ex",
  tooexpensive = "tex",
  pi_cheap = "pint_ch",
  pi_expensive = "pint_ex",
  data = data_psm_demo)

summary(output_psm_demo2)
