
Hellooo?!

By Beth Baldys

Mon Sep 16 2019

The science of sound.


Imagine: a familiar buzz, and a text message flashes across your phone. Which would you rather read?

Similar words, three (very different) messages. As our technology has advanced, so too has our creativity with language. Accustomed to infusing our speech with acoustic and emotional signals, human beings have found ways to mimic these signals in written language using extra letters, capitals, punctuation marks — or by omitting them instead.

And yet — who hasn’t misread the tone of a colleague’s email or Slack message? How can we leverage linguistics to avoid such miscommunication in our professional and personal lives? We read with interest Dr. Anna Trester’s article on job seeking gone awry via LinkedIn.

As Dr. Trester notes, it starts with thinking of language as having three forces:

  • Locutionary force — what is said (the actual words used)
  • Illocutionary force — what the speaker intended to communicate
  • Perlocutionary force — the effect/impact those words have on the listener

J.L. Austin first described these forces in what became known as Speech Act Theory. We notice that communication pitfalls occur when messages slip through the cracks between what was said, what the speaker meant, and what the listener heard.
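To make the distinction concrete, here is a toy Python sketch (ours, not Dr. Trester's) that represents a single message with all three forces spelled out; the example reply and its annotations are purely hypothetical:

    from dataclasses import dataclass

    @dataclass
    class SpeechAct:
        locution: str     # what was literally said
        illocution: str   # what the speaker intended to communicate
        perlocution: str  # the effect the words had on the listener

    # A hypothetical one-word reply where the three layers drift apart:
    reply = SpeechAct(
        locution="Fine.",
        illocution="I agree; no changes needed.",
        perlocution="Reads as curt or annoyed to the recipient.",
    )

Miscommunication is simply the case where those three fields no longer agree.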

In market research, such “intention traps” can be even trickier to avoid. Oftentimes, clients base critical brand decisions on interview transcripts and typed open-ended survey responses that have been stripped of their original context or that lack the speaker’s illocutionary force.

Here at inVibe, we are serious about redefining what it means to listen. Our team of linguistic and medical discourse experts leverages proprietary tools and frameworks that put the rich emotional and linguistic patterns of spoken voice data front and center.

By capturing acoustic metrics and measuring changes in a speaker’s pitch and tone, for example, our technology can reveal previously undetectable patterns of emotion across participant responses. These auditory exclamation points act as signals within otherwise noisy data, pointing out a patient’s expression of confidence or uncertainty when describing their experience with a diagnosis, or capturing a physician’s genuine excitement over a new data release at a medical conference.
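For a rough, open-source illustration of the kind of acoustic measurement described above (this is not inVibe's proprietary pipeline), the librosa library can estimate a speaker's pitch frame by frame; summarizing how much that pitch moves gives a crude proxy for vocal expressiveness. The file name is hypothetical:

    import librosa
    import numpy as np

    def pitch_summary(audio_path):
        """Simple pitch statistics for one recorded voice response."""
        y, sr = librosa.load(audio_path, sr=None)   # load the recording
        f0, voiced_flag, _ = librosa.pyin(          # frame-by-frame pitch estimate
            y,
            fmin=librosa.note_to_hz("C2"),          # ~65 Hz floor
            fmax=librosa.note_to_hz("C7"),          # ~2093 Hz ceiling
            sr=sr,
        )
        voiced = f0[voiced_flag]                    # keep only the voiced frames
        return {
            "mean_pitch_hz": float(np.nanmean(voiced)),
            "pitch_variability_hz": float(np.nanstd(voiced)),
        }

    # pitch_summary("participant_017_response.wav")

A spike in pitch variability on a particular question is exactly the sort of auditory exclamation point an analyst would want to listen back to.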

Importantly, by using complementary acoustic and emotional analysis, we avoid the intention traps that befall researchers when sound is omitted from the science. Combining these approaches can reveal information about what was said as well as the speaker’s deeper intentions and attitudes.
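One hedged way to picture that combination: score the same response twice, once from the transcript and once from the audio, and flag the cases where the two disagree. The scoring scale and threshold below are hypothetical placeholders, not inVibe's actual models:

    def flag_intention_gap(text_sentiment, acoustic_arousal, threshold=0.5):
        """Both scores are assumed to be normalized to the range -1..1."""
        # A large gap between how a response reads and how it sounds is
        # the kind of intention trap that deserves a closer listen.
        return abs(text_sentiment - acoustic_arousal) > threshold

    # A transcript that reads as neutral (0.0) but was spoken with real
    # excitement (0.8) gets flagged:
    # flag_intention_gap(0.0, 0.8)  -> True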

While we may not be able to help you avoid every electronic miscommunication, contact us and we will help ensure your next research project delivers a wealth of valuable insights, without the guesswork.

Story originally published March 14, 2019.

Copyright © 2019 inVibe Labs. All rights reserved.

