
Gaining Deep and Meaningful Insights from Just a Few Questions

The whys behind inVibe’s voice response collection process

By Tripp Maloney

Wed Apr 13 2022

Imagine getting a ton of great insights from just a few questions. Imagine figuring out how to ask a patient or a doctor a question that will really get them talking. This is the main goal of voice response design at inVibe, and the process we undertake to achieve that goal needs to account for how people interact with each other and how they will interact with us.

There are two main “whys” of inVibe’s design process: why do we limit the number of questions we ask in a single study, and why do we use linguists to compose those questions?

Why just 8 questions?

When looking at inVibe’s voice response offerings, one question that likely springs to mind is “wait, why do we ask so few questions?” Indeed, compared with the tens of questions common in most quantitative studies, a set of eight seems to offer less opportunity to ‘go broad’ and cover all the ground in a given market research topic. Eight questions seem more like a probe or a follow-up than a useful first step.

This perspective is certainly understandable, but it couldn’t be further from the reality of the data those eight questions return. The content we gain from our voice response platform, in addition to providing great depth on a single topic, can easily capture stakeholder opinions on several different parts of a treatment landscape or patient journey. This has everything to do with the fact that, fundamentally, we are not asking respondents closed-ended questions. We’re asking them what (and, often, how) they think.

This seemingly simple shift in approach, giving respondents control over the boundaries of the conversation, allows them to talk as much as they want. A devil’s advocate might argue that people will give an extremely brief response and get on with their day, but that simply does not match our years of experience.

In 2021, the average respondent talked for about five and a half minutes. That is not the length of the entire engagement; it does not include the time spent listening to questions. It is five and a half straight minutes of speech – voices of patients, caregivers, and doctors telling us what they think. Across a study, this translates to hours of perspectives – enough to get immersed in, and certainly enough to derive powerful insights from. Compare that with the hours of in-depth interviews that often yield less insight.
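To make “hours of perspectives” concrete, here is a minimal back-of-the-envelope sketch in Python. The five-and-a-half-minute average is the figure cited above; the cohort size of 40 respondents is a hypothetical assumption for illustration only, not an inVibe number.

```python
# Back-of-the-envelope math: a minimal sketch.
# The 5.5-minute average comes from the paragraph above; the cohort size of
# 40 respondents is a hypothetical assumption, not an inVibe figure.
AVG_RESPONSE_MINUTES = 5.5   # average per-respondent talk time (2021)
N_RESPONDENTS = 40           # hypothetical study size

total_minutes = AVG_RESPONSE_MINUTES * N_RESPONDENTS
total_hours = total_minutes / 60

print(f"{N_RESPONDENTS} respondents x {AVG_RESPONSE_MINUTES} min each "
      f"is about {total_hours:.1f} hours of first-person audio")
# -> 40 respondents x 5.5 min each is about 3.7 hours of first-person audio
```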

Of course, there is a natural follow-up question: “if eight questions provide this much good insight, why not go for more?” The simple answer comes down to practicality. While a respondent tends to give a good deal of useful information with only eight prompts to focus on, there is a drop-off in response quality past that point, where people become fatigued, inattentive, and terse. Once we cross the threshold where a respondent starts checking the time, we lose thoroughness and depth. Frequently, we see respondents begin framing their answers with repetition markers like “like I said previously.” Those speech markers are a good sign that the respondent has lost patience, and that we gain deeper insights by sticking with our methodology of eight questions.
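As an illustration of the kind of speech marker described above, the sketch below scans a response transcript for repetition phrases. The phrase list and function name are assumptions made for the sake of example, not inVibe’s actual analysis pipeline.

```python
# Illustrative only: a naive scan for repetition markers that can signal
# respondent fatigue. The phrase list and function name are assumptions,
# not inVibe's actual analysis pipeline.
FATIGUE_MARKERS = (
    "like i said",
    "as i mentioned",
    "as i said before",
)

def flag_fatigue_markers(transcript: str) -> list[str]:
    """Return any repetition markers found in a response transcript."""
    text = transcript.lower()
    return [marker for marker in FATIGUE_MARKERS if marker in text]

if __name__ == "__main__":
    response = "Like I said previously, the dosing schedule is the main barrier."
    print(flag_fatigue_markers(response))  # -> ['like i said']
```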

Why use linguists in design?

How can we get respondents talking, without making them feel forced to talk, while using as few prompts as possible? Easier said than done. This is where a good working knowledge of how interactions work becomes essential, and why we rely on trained linguists to both compose and analyze our voice response questions. With backgrounds in conversation analysis and discourse studies, our linguists have a sophisticated understanding of how people talk, and they pair that theoretical grounding with experience in research design. We can then match that understanding to the client’s goals to determine what kind of speech we want to draw out of stakeholders.

For example, if we want to understand more about HCP/patient interactions, we can prompt one or both of those audiences to reconstruct what a normal conversation at the doctor’s office sounds like. If we want to test claims that depend on disruptive new clinical data, we can ask a polarizing question that will prompt strong, honest opinions from experts.
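To illustrate the idea of matching a research goal to a prompt strategy, here is a small sketch. The wording of each example prompt is invented for illustration and is not an actual inVibe question.

```python
# Hypothetical examples only: a small mapping of research goals to the kinds
# of prompts described above. The prompt wording is invented for illustration
# and is not an actual inVibe question.
PROMPT_STRATEGIES = {
    "reconstruct HCP/patient interactions": (
        "Walk us through what a typical conversation with your doctor "
        "sounds like, from the moment you sit down."
    ),
    "test claims tied to disruptive clinical data": (
        "Some of your colleagues say this new data changes the treatment "
        "paradigm, while others disagree. Where do you stand, and why?"
    ),
}

for goal, prompt in PROMPT_STRATEGIES.items():
    print(f"Goal: {goal}\n  Example prompt: {prompt}\n")
```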

In essence, our linguists have access to a large toolbox, along with the field experience to know which tool is right for the job. With these resources at our fingertips, the goal of getting respondents talking becomes much less daunting. In this way we can shape the discussions respondents have – not by introducing bias, but by giving them a loose structure through which they can speak their mind and provide valuable insights.

And once that part is taken care of, all we need to do is listen. When you are ready to listen, reach out to schedule a demo.

Thanks for reading!

Be sure to subscribe to stay up to date on the latest news & research coming from the experts at inVibe Labs.

