Good insights require good research, and good research requires high-quality data. At inVibe, this data comes from pre-designed voice response questions, which we use to elicit the depth of response of traditional qual research without the scheduling and moderation needs of an interview. The resulting voices can provide powerful insights into HCPs and patient behaviors.
However, our methodology demands particular care in how we ask questions – in an unmoderated survey, we need to make sure that we are prompting respondents in a way that elicits useful content while reducing the need to follow up. To accomplish this, we must strike a delicate balance between an open-ended design philosophy and goal-oriented lines of questioning.
Open-ended: Give them the floor
Without the back-and-forth of an interview, it can be difficult to imagine respondents speaking both broadly and deeply enough to provide meaningful qualitative insights. Indeed, from the participant’s perspective, our voice response surveys may seem more like quick quant surveys, at least in terms of how long they take. But getting expansive responses in this format is easier than you may think, mainly for one simple reason: people like to be the expert.
When we reach out to stakeholders, whether they are doctors or patients, they are – in one way or another – an expert in some condition. Whether a patient is discussing their experience with an illness, or an HCP is describing their typical patient, our prompts place the respondent in the role of a teacher, an expert. By giving them that role, we empower respondents to speak with authority and confidence on whatever subject we’re trying to dig into.
This positioning sets us up for success and helps us avoid the dreaded “yes/no” quant-esque responses that come from reticent respondents or under-designed write-in surveys. When respondents feel engaged and confident, they will give us more than that.
So, what does this look like? Frequent readers (or partners) may have seen some examples of our voice-response questions in the past, but here is a good example of a prompt that gets someone talking:
“How would you describe what living with multiple sclerosis is like to someone who is unfamiliar with it? What frustrations or challenges do you experience?”
The purpose of a question like this is clear: elicit a patient’s description of the MS experience. It is something that could be asked briefly and answered even more briefly – indeed, a perfectly valid line of inquiry would be to ask for a few adjectives. But by putting a patient directly into the role of an explainer, we set an implicit expectation: we want a story, not just a description. And by tapping into how MS makes patients feel, we can get just that:
We set the stage for this respondent, and she took it. By harnessing people’s desire to be the expert, we can get the rich voice data that our insights are made of.
Goal-oriented: Build them a frame
Now that we have seen the benefit of getting a respondent talking, why not take it further and go broader? If more voice means more insight, why not simply design for maximum response length?
First and perhaps most obviously, we need to respect our experts’ time. Respondent fatigue is real, and trying to push through it in an unmoderated environment can lead to lopsided or outright bad data.
But second, and equally important, is that designing around the fundamental research questions makes responses more coherent and insights more accessible. One standard part of our survey design process is making sure each prompt we write corresponds to a research question or goal our client has. If our client wants to know a patient’s thoughts on new treatments, we do not use half of the survey breaking down symptom impact. One prompt that gets us context on disease experience can be extremely helpful in interpreting a patient’s later eagerness or hesitation to try something new, but we build the prompt to that specific purpose.
There is a third ‘why-not’ that is more situational but often quite powerful: setting boundaries gives respondents the chance to meaningfully break them. Compare the following examples:
A. What is your overall experience with Treatment X?
B. How effective has Treatment X been at managing your condition?
If a patient focuses on side effects when answering A, we can infer that side effects are the most top-of-mind aspect of Treatment X. Efficacy goes undiscussed, which tells us something about the patient’s priorities.
If a patient focuses on side effects when answering B, however, that tells us a little more. Not only are side effects top-of-mind, but we are seeing the respondent choose to focus on that aspect of Treatment X at the expense of efficacy. This signals both the patient’s priorities (which we would have gotten from A) and that whatever efficacy X has is not enough to justify those side effects. We get more insight by providing a little more guidance, even if our respondent disregards it.
This strategy is an example of how we take the linguistic expertise we use when analyzing responses and put it to work on the design side. Awareness of language helps us not only find insights in the data, but also elicit the kind of data that gives us plenty to find.
Put this design to work
There is always more to be said about survey design, but the best way to understand how we put these principles into action is through an example. If you want to see how we would handle your research questions, we would love to hear from you.