My talk, “Ensuring Black voices matter: Why your voice assistant is racist, and what you can do about it”, was accepted into the main track of PyConAU 2020.

By 2025, it is estimated that over 8 billion voice assistants will be in use. Speech recognition systems, chatbots, virtual assistants and smart speakers are all types of voice assistant. But as with many other technologies, bias is evident in the intent, design, execution and evolution of voice assistants. Many voice assistants today fail to accurately recognise speakers who have accents, or who speak lesser-known languages, and synthesised voices represent only well-known languages. There is a range of reasons for this: the under-representation of minorities in technology, commercial drivers, and under-resourced languages.

This talk took the audience on a tour of these issues, including:

  • How many languages are spoken in the world, and how languages are seen as “lucrative” or “non-lucrative”, which affects whether there is voice assistant support for those languages
  • Cultural development of accents, including African American Vernacular English, and how that accent development is related to a long history of social and economic exclusion
  • Some of the measures we might take to address these issues, including identifying bias in speech corpora and better metadata standards for speech corpora
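As a rough illustration of the last point, one simple way to start identifying bias in a speech corpus is to tally how clips are distributed across speaker accents in the corpus metadata. The sketch below assumes hypothetical per-clip metadata records with an `accent` field, loosely modelled on the kind of speaker metadata a corpus such as Mozilla Common Voice ships; the field names and values here are invented for illustration, not taken from any real dataset.

```python
from collections import Counter

# Hypothetical corpus metadata: one record per validated clip.
# Real corpora would carry far richer metadata (age, gender,
# variety spoken, recording conditions, and so on).
clips = [
    {"clip": "c1.mp3", "accent": "us"},
    {"clip": "c2.mp3", "accent": "us"},
    {"clip": "c3.mp3", "accent": "england"},
    {"clip": "c4.mp3", "accent": "us"},
    {"clip": "c5.mp3", "accent": "african_american_english"},
]

def accent_distribution(records):
    """Return each accent's share of the corpus, to surface under-representation."""
    counts = Counter(r["accent"] for r in records)
    total = sum(counts.values())
    return {accent: n / total for accent, n in counts.items()}

dist = accent_distribution(clips)
for accent, share in sorted(dist.items(), key=lambda kv: -kv[1]):
    print(f"{accent}: {share:.0%}")
```

A skew like the one above (a single accent dominating the corpus) is exactly the sort of imbalance that richer, standardised metadata would make visible before a model is ever trained.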

The talk was recorded and is available on YouTube at: