Rules

Teams

  • Teams must have registered and nominated a contact person.
  • Teams can be from one or more institutions.
  • Teams can comprise up to 10 persons.
  • The organisers - and any person forming a team with one or more organisers - may enter the challenge themselves but will not be eligible to win the cash prizes.

Transparency

  • Teams must provide a technical document of up to 2 pages describing the system/model and any external data and pre-existing tools, software and models used.
  • We will publish all technical documents (anonymous or otherwise).
  • Teams are encouraged – but not required – to provide us with access to the system(s)/model(s) and to make their code open source.
  • Anonymous entries are allowed but will not be eligible for cash prizes.
  • If a group of people submits multiple entries, they cannot win more than one prize in a given category.
  • All teams will be referred to using anonymous codenames if the rank ordering is published before the final results are announced.

Intellectual property

The following terms apply to participation in this machine learning challenge (“Challenge”). Entrants may create original solutions, prototypes, datasets, scripts, or other content, materials, discoveries or inventions (a “Submission”). The Challenge is organised by the Challenge Organiser.

Entrants retain ownership of all intellectual and industrial property rights (including moral rights) in and to Submissions.

As a condition of submission, Entrant grants the Challenge Organiser, its subsidiaries, agents and partner companies, a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to use, reproduce, adapt, modify, publish, distribute, publicly perform, create a derivative work from, and publicly display the Submission.

Entrants provide Submissions on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.

What information can I use?

Training and development

  • For Track 1 (closed-set), teams should use the signals and listener responses provided in the `CPC1.train.json` file (a minimal loading sketch follows this list).
  • For Track 2 (open-set), teams should use the signals and listener responses provided in the smaller `CPC1.train_indep.json` file.
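The training metadata is plain JSON, so loading it needs no special tooling. The sketch below is illustrative only: it assumes the file sits in the working directory and that the top level of the file is a list of per-response records; check the data documentation for the actual field names.

```python
import json

# Load the closed-set (Track 1) training metadata; use
# CPC1.train_indep.json instead for the open-set Track 2.
with open("CPC1.train.json") as f:
    responses = json.load(f)  # assumed to be a list of per-response records

print(f"Loaded {len(responses)} listener responses")
print(responses[0])  # inspect one record to see which fields are provided
```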

In addition, teams can use their own data for training or expand the training data through simple automated modifications. Additional pre-training data could be generated by existing speech intelligibility and hearing loss models. The FAQ gives links to some models that might be used for this.
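As an example of such a modification, the sketch below remixes a training signal with additive noise at a few signal-to-noise ratios. This is only one possible augmentation; the file name and the choice of white noise are illustrative assumptions, not part of the challenge data.

```python
import numpy as np
import soundfile as sf

def add_noise(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Return a copy of `signal` with white noise added at the given SNR (dB)."""
    noise = np.random.randn(*signal.shape)
    sig_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Hypothetical file name - substitute a real training signal.
audio, rate = sf.read("S00001_hearing_aid_output.wav")
for snr in (0.0, 5.0, 10.0):
    sf.write(f"S00001_snr{int(snr)}.wav", add_noise(audio, snr), rate)
```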

Any audio or metadata can be used during training and development, but during evaluation the prediction model(s) will not have access to all of the data (see next section).

Evaluation

The only data that can be used by the prediction model(s) during evaluation are

  • The output of the hearing aid processor/system.
  • The target convolved with the anechoic BRIR (channel 1) for each ear (‘target_anechoic’).
  • The IDs of the listeners assigned to the scene/hearing aid system in the metadata provided.
  • The listener metadata.
  • The prompt for the utterances (the text the actors were given to read).

If you use text from the speech prompts as part of evaluating the systems, we will classify that as an intrusive method for the purpose of awarding prizes.
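Put another way, at evaluation time a prediction model can only draw on the items listed above. The hypothetical signature below is a sketch of such an interface; the argument names are purely illustrative and are not part of any challenge API.

```python
from typing import Mapping, Optional

import numpy as np

def predict_intelligibility(
    ha_output: np.ndarray,        # output of the hearing aid processor/system
    target_anechoic: np.ndarray,  # target convolved with channel 1 of the anechoic BRIR, per ear
    listener: Mapping,            # metadata for the listener ID assigned to this scene/system
    prompt: Optional[str] = None, # optional: using the prompt text counts as an intrusive method
) -> float:
    """Return a predicted sentence intelligibility score for one signal/listener pair."""
    raise NotImplementedError  # placeholder for a team's own model
```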

Baseline models and computational restrictions

  • Teams may choose to use all or some of the provided baseline models.
  • There is no limit on computational cost.
  • Models can be non-causal.

What sort of model do I create?

  • You can create either a single prediction model that computes speech intelligibility given a listener's hearing characteristics (that is, the metadata provided), or separate models of hearing loss and speech intelligibility (both options are sketched after this list).
  • You should report the speech intelligibility for the whole sentence for each audio sample/listener combination.
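In code terms, the two options amount to different function shapes. The stubs below are illustrative only; the names and signatures are assumptions, not part of any required challenge interface.

```python
from typing import Mapping

import numpy as np

# Option 1: a single model maps the signal plus listener metadata straight to a score.
def single_model(signal: np.ndarray, listener: Mapping) -> float:
    ...

# Option 2: a hearing loss model and an intelligibility model submitted as separate parts.
def hearing_loss_model(signal: np.ndarray, listener: Mapping) -> np.ndarray:
    """Simulate how the signal sounds to this listener."""
    ...

def intelligibility_model(degraded_signal: np.ndarray) -> float:
    """Predict sentence intelligibility from the simulated signal."""
    ...

def combined(signal: np.ndarray, listener: Mapping) -> float:
    return intelligibility_model(hearing_loss_model(signal, listener))
```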

Submitting multiple entries

If you wish to submit multiple entries,

  • All systems/models must be submitted for evaluation.
  • Your systems must have significant differences in their approach.
  • You must register multiple teams, submitting each entry as a different team.
  • In your documentation, you must make it clear how the submissions differ.

Evaluation of systems

  • Entries will be ranked according to their performance in predicting measured intelligibility scores (an illustrative error measure is sketched below).
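For illustration only, one common way to score such predictions is the root-mean-square error between predicted and measured scores. The sketch below assumes this measure; the organisers may rank entries using a different or additional statistic.

```python
import numpy as np

def rmse(predicted, measured) -> float:
    """Root-mean-square error between predicted and measured intelligibility scores."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

# Example with made-up scores (percent correct for three signal/listener pairs).
print(rmse([80.0, 55.0, 30.0], [75.0, 60.0, 25.0]))  # -> 5.0
```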