
Design outcome measures Question 1

How can Ecological Momentary Assessment improve real-world hearing aid evaluations?


The challenge

Unraveling the complexities of hearing aid evaluation

Recently, Ecological Momentary Assessment (EMA) has become popular in audiology for capturing real-world hearing aid performance and user experiences. However, challenges remain, including rating biases, behavioral adaptations, careless responses, and study design limitations.


These issues raise concerns about data validity, participant burden, and the ability to detect meaningful differences between hearing aid programs.

Our focus

Refining EMA for accurate hearing assessments

By collecting surveys in everyday environments, EMA minimizes recall bias and, together with objective measures of the environment and hearing aid algorithms, provides richer insights than retrospective self-reports.


Our research integrates findings from multiple studies to explore how EMA can be refined to improve the accuracy and reliability of real-world hearing aid assessments, focusing on:


  • Ensuring data validity by identifying and mitigating response inconsistencies.

  • Understanding how user behavior and acoustic environments influence EMA ratings.

  • Minimizing biases in real-world hearing aid evaluations.

  • Optimizing study designs to balance accuracy, usability, and participant engagement.

The focus is on enhancing EMA methods to better capture the real-world experience of hearing aid use

Our approach

A multi-faceted lens on EMA reliability

Over the years, we have collaborated with researchers and universities (e.g. Jade Hochschule, Technische Hochschule Lübeck, University of Southern Denmark) across multiple studies. Contributions from the EMA Methods in Audiology Working Group have also helped refine methodologies for more accurate real-world hearing aid assessments.


Participants provided self-reports on speech understanding, sound quality, and listening effort, which were analyzed alongside objective hearing aid data.


Key methodological aspects examined:


  • Behavioral adaptations, such as volume adjustments and movement in response to different noise conditions, which may mask hearing aid differences in real life.

  • Comparison of EMA and laboratory ratings, evaluating potential biases in real-world assessments.

  • Careless response detection, using completion time, skipped items, and response consistency.

  • Direct (A/B switching) vs. indirect (one program per day) EMA methods, assessing their impact on rating sensitivity and participant burden.

  • Objective speech intelligibility tests, used to support subjective ratings.
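As a rough illustration of how the three careless-response indicators mentioned above (completion time, skipped items, and response consistency) could be combined into heuristic flags, consider the sketch below. All field names, function names, and thresholds are hypothetical, not the study's actual pipeline:

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class SurveyResponse:
    """One completed EMA survey (all fields are illustrative)."""
    completion_seconds: float  # time from first to last answer
    ratings: list              # Likert ratings; None marks a skipped item

def flag_careless(resp, min_seconds=10.0, max_skipped=2):
    """Return a list of heuristic flags raised for one response."""
    flags = []
    # 1) Implausibly fast completion suggests the items were not read.
    if resp.completion_seconds < min_seconds:
        flags.append("too_fast")
    # 2) Many skipped items suggest low engagement.
    answered = [r for r in resp.ratings if r is not None]
    if len(resp.ratings) - len(answered) > max_skipped:
        flags.append("too_many_skipped")
    # 3) Zero variance across items ("straight-lining") suggests the
    #    same button was pressed throughout.
    if len(answered) >= 3 and pstdev(answered) == 0:
        flags.append("straight_lining")
    return flags
```

For example, a survey finished in 4 seconds with an identical rating on every item would raise both the `too_fast` and `straight_lining` flags. In real-world settings such rules need care: as noted below, situational constraints can mimic carelessness.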

Better EMA methods ensure hearing aid evaluations reflect real-world listening challenges more accurately

Key insights

Enhancing reliability in hearing assessments

Breaking down barriers to EMA accuracy

The following highlights our findings on several factors affecting EMA reliability.


For a deeper dive into the research behind these findings, explore our research papers in the library section below.


  • Ceiling effects & behavioral adaptations – Real-world ratings were overwhelmingly positive, likely due to limited exposure to challenging listening situations rather than reluctance to provide negative feedback. In real life, people frequently modified their listening environments to improve hearing, which contributed to ceiling effects and complicated hearing aid comparisons.

  • Direct vs. indirect program comparisons – One way of counteracting these ceiling effects is to use direct comparisons. Direct A/B testing in the field can reveal smaller differences between conditions than indirect comparisons can, but it is perceived as more burdensome.

  • Sampling bias – A general limitation of EMA is that some situations make it difficult to answer a survey; because direct and indirect comparisons differ in burden, they end up sampling different situations.

  • Objective speech intelligibility & listening effort – While objective speech intelligibility tests can be disruptive in real life, we successfully implemented them in EMA. Reaction times proved to be a more sensitive measure of listening effort than subjective ratings.

  • Careless responses – Several methodological questions regarding EMA remain open. While careless responses in EMA were rare, traditional detection methods may not fully capture inconsistencies in real-world settings.

  • Memory bias – Because it is not always possible to answer surveys in the situation itself, we analyzed memory bias for responses given 30 or 60 minutes later, showing that rare situations in particular benefit from short response delays.
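To make the ceiling-effect finding above concrete, a minimal sketch of how the share of top-of-scale ratings might be quantified per hearing aid program; the function name and the 7-point scale are assumptions for illustration, not the study's actual analysis:

```python
def ceiling_proportion(ratings, scale_max=7):
    """Fraction of ratings at the top of the scale.

    Values near 1.0 indicate a ceiling effect: when most responses
    already sit at the maximum, differences between hearing aid
    programs can no longer show up in the ratings.
    """
    if not ratings:
        return 0.0
    return sum(1 for r in ratings if r >= scale_max) / len(ratings)
```

For instance, `ceiling_proportion([7, 7, 7, 6, 7])` yields 0.8, signalling that four of five ratings are already at the maximum and that the scale leaves little room to detect a better-performing program.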


Future directions

Refining EMA for more accurate hearing aid assessments

Future research to improve EMA will focus on refining rating scales to prevent ceiling effects and enhance response sensitivity.


It will also explore dyadic EMA to capture input from conversation partners and incorporate short-term retrospectives to assess situations where participants cannot respond immediately.


Additionally, insights from EMA will be applied to optimize hearing aid fitting in clinical practice.

Real-world impact

Strengthening EMA for clinical and research applications

Refining EMA methodologies will lead to more accurate and personalized hearing aid recommendations, benefiting both researchers and clinicians.


These improvements will help:


  • Enhance hearing aid programming, aligning settings with real-world user experiences.

  • Develop tailored rehabilitation strategies, addressing individual listening challenges.

  • Strengthen clinical guidelines, ensuring hearing aid evaluations reflect real-life use.


By refining its methodology, EMA can become an indispensable tool for designing and optimizing hearing aid technology, ultimately improving the quality of life for users worldwide.

Related Publications

Discover how EMA is transforming hearing aid research

Researchers involved

Partner universities