
Defining the Survey Response Scale

Sooner or later you will ask a question that will require your marketing team to launch a survey. Let’s say the issue is product satisfaction and your team does a bang-up job answering your questions. Inevitably, this exercise generates more questions: how has satisfaction changed over time; what’s the overall satisfaction by gender, geography, industry; are these levels better or worse than other products; are they trending more or less favorably than other products; and so on.

These follow-ups should not require a disproportionate amount of work. That is, until your team goes back to the responses only to find that Product B’s survey used a 1-5 scale, Product C used A-E, and Product D, while using the same 1-10 scale as your team, reversed its meaning. Your team is now forced to expend a great deal of effort reconciling the results into a consistent scale before it can even begin the real work of answering your questions.

A little advance planning and a few not-so-obvious standards can go a long way toward alleviating this problem. The secret is twofold: be consistent in the scale you use, and distinguish between how that scale is presented to respondents and how the results are stored for analysis.

Regarding the scale being used:

  • It should always be numeric and always be the same.
  • Yes/no type questions should be stored as the extreme values along that scale.
  • Additional values should be reserved for a response of “does not apply” or “not enough information to answer” (say 0) and “no response was received” (say -1).
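The storage conventions above can be sketched in a few lines. This is a minimal illustration assuming a 1-10 scale; the names (`SCALE_MIN`, `encode_yes_no`, and so on) are hypothetical, not from the article.

```python
# Storage conventions for survey responses, assuming a 1-10 numeric scale.
# Illustrative names only; adapt to your own schema.

SCALE_MIN, SCALE_MAX = 1, 10
NOT_APPLICABLE = 0   # "does not apply" / "not enough information to answer"
NO_RESPONSE = -1     # no response was received

def encode_yes_no(answer):
    """Store yes/no answers as the extreme values of the numeric scale."""
    if answer is None:
        return NO_RESPONSE
    return SCALE_MAX if answer else SCALE_MIN

print(encode_yes_no(True))   # yes -> 10
print(encode_yes_no(False))  # no  -> 1
print(encode_yes_no(None))   # no response -> -1
```

Reserving 0 and -1 outside the live scale keeps the special codes from ever colliding with a real answer, so downstream aggregation can filter them with a single comparison.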

Of course, the scale should be presented to the respondent in the most consistent, intuitive way possible. For example, “agree strongly” should always appear on the same side of the scale, directly opposed to “disagree strongly.”

What is not obvious is the hole you can dig by simply storing and using those responses exactly as they are received. If a response of 10 (“agree strongly”) is received for the statement, “your product is durable,” it represents the best possible news. If a response of 10 is received for the statement, “I am waiting too long to speak with a representative,” it represents the worst possible news. Flipping that second 10 to a 1 aligns both responses along the axis of favorability to your organization, the only axis that allows results to be compiled, consolidated, and analyzed consistently across any and all dimensions.
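On a 1-10 scale, the flip works out to `max + min - response`, i.e., 11 minus the response. A minimal sketch, with the function name `flip` being my own:

```python
# Flip a response on a 1-10 scale: 10 becomes 1, 9 becomes 2, and so on.
# Special codes below the scale minimum (0, -1) pass through unchanged.

SCALE_MIN, SCALE_MAX = 1, 10

def flip(response):
    if response < SCALE_MIN:                 # 0 or -1: not a real answer
        return response
    return SCALE_MAX + SCALE_MIN - response  # 11 - response on a 1-10 scale

print(flip(10))  # -> 1
print(flip(2))   # -> 9
print(flip(-1))  # -> -1 (left alone)
```

Using `max + min - response` rather than a hard-coded 11 means the same formula works unchanged if the scale is ever widened or narrowed.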

The “flip” process can easily be automated by adding an indicator to your database’s Question Catalog table. With the indicator set one way, the response to that question is flipped to align with the favorable/unfavorable axis: 1 is stored as 10, 2 as 9, and so on. With the indicator set the other way, the response is stored as received. (You say your database doesn’t contain a Question Catalog table? If you conduct surveys and value consistency, it probably should, but that’s a topic for another day.)
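The catalog-driven automation might look like the following sketch. The catalog here is a plain dictionary standing in for the database table, and the question identifiers are invented for illustration:

```python
# Hypothetical Question Catalog keyed by question id, carrying a flip
# indicator per question. Responses are aligned at load time, so every
# stored value already sits on the favorability axis.

SCALE_MIN, SCALE_MAX = 1, 10

# question_id -> flip indicator (True means a high raw score is unfavorable)
QUESTION_CATALOG = {
    "Q1_durable": False,       # "your product is durable": 10 is already good news
    "Q2_wait_too_long": True,  # "I am waiting too long...": 10 is bad news
}

def align(question_id, response):
    """Return the response aligned to the favorability axis."""
    if response < SCALE_MIN:   # special codes (0, -1) pass through unchanged
        return response
    if QUESTION_CATALOG[question_id]:
        return SCALE_MAX + SCALE_MIN - response
    return response

print(align("Q1_durable", 10))        # stays 10
print(align("Q2_wait_too_long", 10))  # flipped to 1
```

Because the indicator lives with the question definition rather than in the load code, adding a new negatively-worded question requires only a catalog row, not a code change.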