Does question order matter?
In a word – absolutely. The order of your questions can have a real effect on how respondents answer. Studies over the last several decades have demonstrated that question order affects responses in surveys about everything from presidential campaigns to employee opinions.
Data skewing created by the order of survey questions is a form of response bias – bias that results from the way a survey is designed.
Priming and anchoring
One of the major pitfalls in survey question ordering is the risk of accidentally anchoring or priming your respondent.
Anchoring happens when an early piece of information ‘sets the tone’ and limits or influences all of your participant’s subsequent answers. An example might be adding a dollar sign or other currency symbol to the answer box for a question on the value of their spouse. This will steer the participant towards answering not only this question, but other questions too, in terms of monetary value rather than personal qualities.
In a questionnaire, anchoring could accidentally be established by the nature and wording of prior questions or by the survey introduction, creating bias later on in the questionnaire.
Priming is even easier to do by accident. It happens when someone is influenced by an idea, perhaps an unrelated one, introduced before they answer a question. For example, say you ask respondents for their three favourite vacation destinations just after asking them how much they like French food. Will France be more likely to make the dream vacation list? Priming theory suggests so.
Priming can be very hard to avoid in survey questionnaires, since questions must be answered sequentially and early items are almost bound to have an effect on the subject’s state of mind when they come to answer later questions. Later, we’ll introduce a technique for minimising priming effects.
Primacy and recency
Primacy and recency bias theory (aka the serial-position effect) says that the first and last items in a sequence will receive the most attention from an individual. The first item (primacy) is salient because it falls at the beginning of an experience, when the person is most alert and is still learning the nature of the task. The final item (recency) lingers in the memory because it’s the last thing the person sees when they complete the survey.
So according to this theory, the first and last questions in your survey will have the most impact and receive the most attention, while the material in the middle of your questionnaire is more vulnerable to being rushed or skipped over.
Primacy and recency can also affect which answer a respondent chooses, for example in multiple choice questions, where the first and last options in a list tend to attract more attention than those in the middle.
Assimilation and contrast
People love to be consistent. Assimilation-contrast theory holds that a person’s judgement of something can act as a type of anchor, influencing their later judgements. Once someone makes a judgement, they’ll become biased towards maintaining their point of view, “assimilating” other neutral information in a biased way so that it seems to support their ideas.
If something they see or read goes against their anchored judgement, they will be biased against it – this is the “contrast” part of the equation.
What this means for the ordering of survey questions is that if you ask someone to take up a position early in the questionnaire, agreeing or disagreeing with something, their later answers will be coloured by that judgement: their responses may be polarised in favour of ideas that support it and against ideas that don’t.
A similar effect was described by Gallup researcher David Moore as additive and subtractive question order bias (Moore, 2002).
Fatigue and drop-offs
Much as we love them, surveys can be taxing to take, especially when a respondent is tired, distracted, or otherwise disengaged. You can typically expect a person’s attention to wane during a task like survey-taking, after a peak at the beginning.
In some cases, people may become bored or distracted before the end of the survey and drop out of it altogether, failing to finish. This has an obvious relevance to question order, since what comes last may be most at risk of being overlooked or ignored.
How to limit bias in your survey flow design
1. Start off with straightforward, non-sensitive questions
Although we want to maximise the attention window at the start of survey-taking, we don’t want to be too familiar too quickly. The start of a survey may also be the start of a relationship with a new respondent, so it’s a good idea to begin with easy, enjoyable and non-controversial questions.
Start with broad general interest questions that are easy for the respondent to answer. These questions serve to warm up the respondent and get them involved in the survey. By starting off with questions that promote a sense of ease and positivity, you’re laying the foundation for the respondent being willing to answer more complex and even sensitive questions later on.
2. Randomise to reduce question order bias
We know that priming effects are almost always present in surveys, but fortunately you can offset them to some extent by randomising your questions so that different groups of participants see them in different orders.
You may not want to randomise the whole questionnaire, as this could be confusing for participants, but you can randomise groups of related questions so that there’s still some logical order.
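To make the idea concrete, here is a minimal Python sketch of this kind of grouped (block) randomisation. The block names, question wording and the randomised_flow helper are hypothetical, and most survey platforms offer built-in randomisation so you would rarely write this yourself; the point is simply that each respondent sees related questions kept together, but in a different order.

```python
import random

# Hypothetical questionnaire: questions grouped into related blocks.
# Block names and question wording are purely illustrative.
question_blocks = {
    "warm-up": [
        "How often do you travel for leisure?",
        "What kind of trips do you enjoy most?",
    ],
    "food preferences": [
        "How much do you like French food?",
        "How often do you try new cuisines?",
    ],
    "destinations": [
        "What are your three favourite vacation destinations?",
        "What matters most when choosing a destination?",
    ],
}

def randomised_flow(blocks, keep_first="warm-up", seed=None):
    """Shuffle the blocks (and the questions within them) for one
    respondent, while always keeping the warm-up block first."""
    rng = random.Random(seed)
    other_blocks = [name for name in blocks if name != keep_first]
    rng.shuffle(other_blocks)            # different block order per respondent
    flow = list(blocks[keep_first])      # easy warm-up questions stay at the start
    for name in other_blocks:
        group = list(blocks[name])
        rng.shuffle(group)               # shuffle questions within each block too
        flow.extend(group)
    return flow

# Each respondent gets their own order, so any priming from earlier
# questions is spread across the sample instead of baked into one sequence.
for respondent_id in range(3):
    print(respondent_id, randomised_flow(question_blocks, seed=respondent_id))
```

Because the warm-up block is pinned to the start, the survey still opens gently (see tip 1), while the remaining blocks rotate from respondent to respondent.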
3. Leave sensitive questions until last
Some experts suggest that it’s best to leave any questions that may cause offence or seem intrusive until the end of your survey, so that they don’t bias the earlier responses or cause the participant to want to abandon the survey. This makes sense from the perspective of consistency and assimilation too, since someone who has observed themselves answering questions thoroughly and helpfully will be resistant to rejecting questions or aborting the survey.
However, keep in mind the effect of recency too – it may be better to place contentious questions near the end but not right at the end so that you don’t leave a bad taste in the respondent’s mouth.
4. Keep your survey short
Most long surveys are not completed. To make it through a long questionnaire, the respondent generally needs to be very interested in the topic, an employee, or paid for their time. That means beyond a certain survey length, your questions will not receive equal care and attention from survey-takers.
How long is too long? The general rule of thumb is to keep the survey short, typically under five minutes. That translates into about 15 simple questions (as opposed to matrix questions, which pack many responses into one).
The average respondent can complete about three multiple choice questions per minute. An open-ended text response question counts as roughly three multiple choice questions, depending, of course, on the difficulty of the question. While only a rule of thumb, this formula gives a reasonable estimate of how long your survey will take and where its practical limits lie.
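If you want to sanity-check a draft survey against this rule of thumb, the arithmetic is easy to script. The short Python sketch below uses the weights described above; the function and variable names are illustrative, not part of any survey tool.

```python
# Rule-of-thumb weights from above: about three multiple choice questions
# per minute, and one open-ended question counts as roughly three of them.
MCQ_PER_MINUTE = 3
OPEN_ENDED_WEIGHT = 3
TARGET_MINUTES = 5                # the "keep it under five minutes" guideline

def estimated_minutes(num_multiple_choice, num_open_ended):
    """Estimate completion time from the mix of question types."""
    mcq_equivalents = num_multiple_choice + num_open_ended * OPEN_ENDED_WEIGHT
    return mcq_equivalents / MCQ_PER_MINUTE

# Example: a draft with 12 multiple choice and 2 open-ended questions.
minutes = estimated_minutes(num_multiple_choice=12, num_open_ended=2)
print(f"Estimated completion time: {minutes:.1f} minutes")
print("Within target" if minutes <= TARGET_MINUTES else "Consider trimming some questions")
```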
5. Write in plain language
As well as keeping questionnaires short and to the point, you can ease the risk of fatigue and drop-offs by using the most accessible language you can. You will help ensure your respondents understand your survey if you keep the sophistication of your survey writing to around a 9th- to 11th-grade reading level.
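One way to put a rough number on this is a standard readability formula such as the Flesch-Kincaid grade level (not something prescribed here, just a common proxy). The Python sketch below uses a crude vowel-group syllable count, so treat its output as a signal rather than a precise score; dedicated readability tools will do a better job.

```python
import re

def count_syllables(word):
    """Very crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Example: score a draft survey question (wording is illustrative).
draft_question = "How satisfied are you with the responsiveness of our customer support team?"
print(f"Approximate reading grade level: {flesch_kincaid_grade(draft_question):.1f}")
```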
6. Consider ringer or throwaway questions
Questionnaires often include “ringer” or “throwaway” questions to increase interest and willingness to respond to a survey. These questions are about hot topics of the day and often have little to do with the survey itself. While they can certainly spice up a boring survey, they take up valuable space that could be devoted to the main topic of interest. Use this type of question with caution.