Enhancing Survey Quality and Accuracy with Attention-Validation Questions

Ensuring the reliability and accuracy of survey data is a critical requirement of market research and is vital for producing results we can trust.

One of the most common ways survey data loses credibility is when participants rush through the questions, answer without paying full attention, or answer dishonestly.

Incorporating attention-validation questions is one simple and effective step towards mitigating this issue and enhancing the quality of your data set. These strategically placed questions help identify and remove inattentive or dishonest respondents, ultimately leading to more accurate and valuable insights.

What are Attention-Validation Questions?

Attention-validation questions, also known as red herring questions, trap questions, or quality control questions, are designed to catch respondents who are not paying attention or are inputting random answers. By identifying and filtering out these low-quality and unreliable respondents, we can maintain the integrity and reliability of our data.

Benefits of Incorporating Attention-Validation Questions

Improved Data Quality: Attention-validation questions help weed out careless or disengaged respondents, ensuring that the data collected is from genuinely engaged individuals providing thoughtful answers.

Increased Survey Validity: By filtering out low-quality responses, the remaining data is more likely to reflect true opinions and behaviors, leading to more valid and actionable insights.

Enhanced Respondent Engagement: Knowing that there are attention-validation questions can encourage respondents to pay closer attention to the survey, leading to higher quality responses overall.

What are some Examples of Attention-Validation Questions?

Decoy Response Options: A decoy response option is an invalid answer option included in a closed-ended survey question, meaning you would not expect any honest, attentive respondent to select it.

  • Purpose: This checks for alertness and honesty. If any decoy response options are selected, it may indicate that the respondent isn’t paying attention, or is selecting the decoy options because they think it will improve their chances of qualifying for a study.
  • Note: Do your research and verify that the decoy options that you include are actually fake or invalid so that you don’t elicit a false positive.
  • Example Question:
    • Which of the following social media sites have you heard of?
      • Facebook
      • Instagram
      • TikTok
      • YouTube
      • Portal [TERM IF SELECTED]
      • Sphere [TERM IF SELECTED]

The last two options (Portal and Sphere) are decoys, as they are not real social media sites. Any respondents who select them should be removed from the survey, as they are likely not paying attention or are providing dishonest answers.
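
As a rough illustration of how this check might be operationalized after fielding, here is a minimal Python sketch (using pandas) that flags respondents who selected either decoy option. The column names and data are hypothetical and assume the multi-select question is exported as one boolean column per answer option; other answer options are omitted for brevity.

import pandas as pd

# Hypothetical export: one boolean column per answer option for the
# multi-select question ("Which of the following social media sites
# have you heard of?").
responses = pd.DataFrame({
    "respondent_id":     [101, 102, 103],
    "heard_of_facebook": [True, True, True],
    "heard_of_portal":   [False, True, False],   # decoy option
    "heard_of_sphere":   [False, False, True],   # decoy option
})

DECOY_COLUMNS = ["heard_of_portal", "heard_of_sphere"]

# Flag anyone who selected at least one decoy option.
responses["failed_decoy_check"] = responses[DECOY_COLUMNS].any(axis=1)

flagged = responses.loc[responses["failed_decoy_check"], "respondent_id"]
print(flagged.tolist())  # [102, 103]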

Inconsistent Response Check: An inconsistent response check involves asking the same question twice, at different points in the survey and in the same or similar ways, to ensure that respondents answer consistently.

  • Purpose: This checks for consistency. If a respondent initially gives a response that does not align with a response they give later, it flags potential inattentiveness or dishonesty.
  • Note: Using an inconsistent response check with questions involving opinions should be approached cautiously, as a respondent’s opinion can reasonably be expected to change slightly. For instance, if we ask about perceptions of brand X twice, we can expect respondents to give slightly different answers, although we wouldn’t expect them to change from strongly positive to strongly negative. Inconsistent response checks based on a fact, such as the respondent’s age, are more straightforward for identifying inconsistency.
  • Example Question:
    • First Question: How old are you?
    • Follow-Up Question: What year were you born?

If the birth year doesn’t align with the age given previously, we can reasonably assume the respondent is lying about their age or not paying attention.
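
To show how this comparison might be automated during data cleaning, here is a minimal sketch. The stated_age and birth_year column names and the survey year are assumptions for illustration, and a one-year tolerance allows for respondents who have not yet had their birthday in the survey year.

import pandas as pd

SURVEY_YEAR = 2024  # assumed year the survey was fielded
TOLERANCE = 1       # allow for birthdays that haven't occurred yet

responses = pd.DataFrame({
    "respondent_id": [201, 202, 203],
    "stated_age":    [34, 52, 29],
    "birth_year":    [1990, 1985, 1995],
})

# Compare the age implied by the birth year against the stated age.
implied_age = SURVEY_YEAR - responses["birth_year"]
responses["failed_age_check"] = (
    (implied_age - responses["stated_age"]).abs() > TOLERANCE
)

print(responses[["respondent_id", "failed_age_check"]])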

Implausible Answer Detection: Implausible answer detection uses a question for which a particular answer is highly improbable or impossible.

  • Purpose: This question includes an obviously implausible answer option. Respondents who choose it are clearly not providing truthful or thoughtful answers, indicating a lack of engagement with the survey.
  • Note: Certain answers are highly improbable rather than impossible, such as answering “yes” to “Can you name the state flowers of all 50 US states?” It is possible, although improbable, that someone knows all 50 state flowers. If a respondent answers this type of question affirmatively, the rest of their answers should be examined more closely for red flags. A question with an answer choice that is impossible is a better fit for this type of check.
  • Question: “Have you visited Antarctica in the past year?”
    • If the respondent answers ‘Yes,’ they are most likely rushing and/or not paying attention.
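
If you use several implausible-answer questions, the flagging logic can live in one place. The sketch below assumes hypothetical yes/no columns and simply maps each question to the answer that should be treated as implausible; as noted above, improbable (rather than impossible) answers are better treated as a prompt for closer review than as grounds for automatic removal.

import pandas as pd

# Hypothetical yes/no columns, mapped to the answer that should be
# treated as implausible for each question.
IMPLAUSIBLE_ANSWERS = {
    "visited_antarctica_past_year": "Yes",   # effectively impossible
    "knows_all_50_state_flowers":   "Yes",   # improbable; review, don't auto-remove
}

responses = pd.DataFrame({
    "respondent_id":                [301, 302],
    "visited_antarctica_past_year": ["No", "Yes"],
    "knows_all_50_state_flowers":   ["No", "No"],
})

# Create one flag column per implausible-answer rule.
for column, bad_answer in IMPLAUSIBLE_ANSWERS.items():
    responses[f"flag_{column}"] = responses[column] == bad_answer

flag_columns = [f"flag_{c}" for c in IMPLAUSIBLE_ANSWERS]
responses["failed_implausibility_check"] = responses[flag_columns].any(axis=1)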

Simple Instruction Follow-Through in Grid Questions: Grid questions are a question type where respondents might be more likely to rush or lose focus. Instructing respondents which exact answer to choose in a specific row ensures they are fully reading and paying attention to all rows in the grid question.

  • Purpose: This question directly instructs respondents what answer to choose. Those who fail to follow this simple instruction are likely not paying close attention.
  • Question: How often do you use each of the following social media sites?
    • COLUMNS
      • Every day
      • A few days a week
      • Once a week
      • Less than once a week
    • ROWS
      • Facebook
      • Instagram
      • TikTok
      • YouTube
      • Select “Every day” for this row

If the respondent does NOT select “Every day” for the last row, they were not reading carefully and were most likely rushing through the grid question just to complete it.
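
A sketch of how this check might be scored during data cleaning, assuming a hypothetical column that holds the answer given in the instructed row:

import pandas as pd

# Hypothetical column holding the answer for the instructed row
# ("Select 'Every day' for this row") in the grid question.
responses = pd.DataFrame({
    "respondent_id":      [401, 402],
    "grid_instructed_row": ["Every day", "Once a week"],
})

# Anyone who did not follow the instruction fails the check.
responses["failed_grid_check"] = responses["grid_instructed_row"] != "Every day"

print(responses[["respondent_id", "failed_grid_check"]])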

Important Things to Keep in Mind When Using Attention-Validation Questions in Your Survey

While attention-validation questions can be beneficial in weeding out poor-quality responses, it is important to be strategic and deliberate when using them; asking the wrong question, or asking it in the wrong way, could undermine your goal and create a negative survey experience for participants.

Poorly worded, oddly placed, or blatant “trick” questions can confuse respondents, leading to frustration and disengagement. They can also undermine trust, as participants may feel tricked or deceived, negatively impacting their willingness to participate in future surveys.

To ensure an attention-validation question is used correctly and does not confuse or irritate respondents, it is crucial to design the question clearly and straightforwardly. The attention-validation question should align with the survey’s overall context and flow naturally within the sequence of questions.

It’s also advisable to pretest the attention-validation question in a survey with a small, representative sample to identify any potential confusion or frustration before deploying it in major projects. Providing clear instructions and maintaining transparency about the survey’s purpose can also help in mitigating negative reactions.

Taking Your Data Quality to the Next Level

Incorporating attention-validation questions into surveys is a simple and straightforward method for enhancing the quality of your survey data, but it is only one small piece of the overall data quality toolkit.

It is critical that you layer many levels of QC checks into your survey and research process to ensure that you end up with a data set you can have total confidence in. Attention-validation questions are one piece of the puzzle, but to have tangible value they must be layered with other QC checks such as location verification, behavioral checks (e.g., speeding, mouse movement), and verbatim quality review. Bots and scammers are growing more sophisticated by the day. The single best way to stay ahead of the curve is to layer QC checks throughout the entire research process. A comprehensive and holistic approach to data quality is the key to valid, dependable, and valuable insights.
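
As one illustration of what layering can look like in practice, the sketch below combines hypothetical flags from the checks described above with a behavioral speeding flag and applies a simple, illustrative decision rule (remove on two or more flags, manually review on one). The thresholds are assumptions for demonstration, not an industry standard.

import pandas as pd

# Hypothetical per-respondent QC flags gathered from the checks above,
# plus a behavioral speeding flag from survey paradata.
responses = pd.DataFrame({
    "respondent_id":        [101, 102, 103],
    "failed_decoy_check":   [False, True, False],
    "failed_age_check":     [False, False, True],
    "failed_grid_check":    [False, True, False],
    "flagged_for_speeding": [False, True, False],
})

QC_FLAGS = [
    "failed_decoy_check",
    "failed_age_check",
    "failed_grid_check",
    "flagged_for_speeding",
]

# Illustrative layered rule: remove respondents who trip two or more
# checks, and route single-flag respondents to manual review.
responses["qc_flag_count"] = responses[QC_FLAGS].sum(axis=1)
responses["disposition"] = pd.cut(
    responses["qc_flag_count"],
    bins=[-1, 0, 1, len(QC_FLAGS)],
    labels=["keep", "review", "remove"],
)

print(responses[["respondent_id", "qc_flag_count", "disposition"]])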

Touchstone Research takes a comprehensive, multi-method approach to survey QC that spans the full research process. QC is baked into every phase of our research projects, from survey design to sample sourcing to the fielding schedule and strategy to the production of the final data deliverables, and everything in between. Connect with us today to learn more about our approach to data quality.

Author Bio

Erik is a Senior Project Director and has been with Touchstone Research for over two years leading qualitative UX research projects and managing online research communities. Erik is also a member of Touchstone’s Data Quality Team, which is dedicated to enhancing our data quality systems to ensure they remain at the forefront of our industry. He has over 15 years of experience conducting qualitative, quantitative, and mixed-methods research in private industry, academia, and government, and for the past 12+ years he has been uncovering insights about consumer culture and user experiences for Fortune 500s and industry-leading ad agencies in categories including technology, media, CPG, finance, health care, food and beverage, automotive, clothing, and more.