Introduction

Overview

To enable the development of better hearing aids, we need ways of automatically evaluating the speech intelligibility of audio signals. We need a prediction model that takes the audio produced by a hearing aid, together with the listener's characteristics (e.g., their audiogram), and estimates the speech intelligibility score that the listener would achieve in a listening test. Here is a brief introduction to the challenge:

For the prediction challenge we will provide the following data:

  • Audio produced by a variety of (simulated) hearing aids for speech-in-noise;
  • The corresponding clean reference signals (the original speech);
  • Characteristics of the listeners (pure tone audiogram, etc.); and
  • The measured speech intelligibility scores from listening tests, in which listeners were asked to report what they heard when presented with the speech-in-noise.
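The prediction task described above can be sketched as a function signature. This is only an illustration of the inputs and output, not the challenge's actual API or data schema: all names, the placeholder scoring logic, and the audiogram fields are hypothetical.

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Listener:
    """Listener characteristics (illustrative fields, not the challenge schema)."""
    audiogram_freqs_hz: Sequence[float]   # e.g. [250, 500, 1000, 2000, 4000, 8000]
    audiogram_levels_db: Sequence[float]  # hearing loss in dB HL at each frequency


def predict_intelligibility(processed: Sequence[float],
                            reference: Sequence[float],
                            listener: Listener) -> float:
    """Toy stand-in for a prediction model: returns a score in [0, 1].

    A real entry would compare the hearing-aid output (`processed`) against
    the clean reference while accounting for the listener's audiogram; here
    we simply penalise the listener's average hearing loss as a placeholder.
    """
    avg_loss_db = sum(listener.audiogram_levels_db) / len(listener.audiogram_levels_db)
    return max(0.0, 1.0 - avg_loss_db / 120.0)
```

A real model would, of course, make use of both audio signals; the point here is only the shape of the interface: (processed audio, clean reference, listener characteristics) in, a single intelligibility score out.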

The challenge has two separate but related tracks:

  • Track 1: Closed-set, i.e., systems that make predictions for hearing-aid algorithms and listeners that have been seen in the training data.
  • Track 2: Open-set, i.e., systems that make predictions for unseen hearing-aid algorithms and/or listeners.

We have an extensive FAQ answering key questions competitors might have, so even if you have never worked on speech intelligibility models for people with hearing loss, you will have the knowledge to take part. The FAQ includes seminar recordings on the following topics:

  • What hearing loss is;
  • How it's typically mitigated in hearing aids; and
  • How speech intelligibility is measured and estimated using metrics.
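As a flavour of how intelligibility can be estimated with an intrusive metric (one that compares a processed signal against its clean reference), here is a minimal sketch: the Pearson correlation between short-time energy envelopes of the two signals. This is a deliberately simplified toy in the spirit of STOI-style measures, not the challenge baseline; the function names and frame length are illustrative choices.

```python
import math


def frame_energies(signal, frame_len=256):
    """Short-time energy envelope: sum of squares per non-overlapping frame."""
    return [sum(s * s for s in signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, frame_len)]


def envelope_correlation(reference, processed, frame_len=256):
    """Pearson correlation between the energy envelopes of the clean
    reference and the processed signal: a toy intrusive measure, where
    values near 1 suggest the processing preserved the signal's
    temporal-energy structure."""
    x = frame_energies(reference, frame_len)
    y = frame_energies(processed, frame_len)
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0
```

Real metrics operate in frequency sub-bands and, for hearing-impaired listeners, must also account for the audiogram, which is exactly what makes this prediction task hard.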

Key dates (updated 15/11/21)

  • 16th November 2021: Launch of challenge, release of data.
  • 23rd November 2021: Webinar to introduce the challenge, 15:00-17:00 UK time.
  • 1st January 2022: Registration deadline.
  • 1st March 2022: Release of evaluation data.
  • 1st April 2022: Submission deadline. All entrants submit a draft of their technical report (details below).
  • 25th April 2022: Deadline by which all entrants must submit two-page technical reports to the Clarity Prediction Challenge 2022 workshop.
  • May/June 2022: Clarity Prediction Challenge 2022 workshop.

More details

  • Scenario - a description of the listening scenario and how it has been simulated.

  • Baseline System - a description of the baseline software model.

  • Data - the data that can be used to train and evaluate your system during development.

  • Software - the software tools that we are providing to help you build and evaluate a challenge entry.

  • Challenge Rules - the rules to which all challenge entries must adhere.

  • Submission - information about how to prepare your submission.

  • Prizes - information about our prizes.

  • Download - where to go to download the software and challenge data.

  • Find a team - if you'd like to find collaborators to help you compete.

  • FAQ - an extensive FAQ answering key questions and providing background knowledge to help you compete.