Debbie Loakes

Research report – Assessing the role of automatic methods for the transcription of indistinct covert recordings

In the Hub, we are very often asked whether computational methods can solve the problem of determining what is said in indistinct covert recordings. Our new research shows that, as things currently stand, computational methods are not suitable for a range of reasons – transcription by humans remains a much better method.

Our research addresses the fact that while artificial intelligence (AI), including automatic speech recognition (ASR), has made major advances and undoubtedly makes our lives easier in a range of ways, such systems are not designed to transcribe what was said in indistinct covert recordings, nor are they designed to determine who uttered the words and phrases in such recordings. These systems can be used advantageously in research and for various other purposes (even subtitling!), but the reasons they do not work for forensic transcription stem from the nature of the recording conditions, as well as the nature of the forensic context. The research we presented at IAFPA also tells us some really important things about the way AI systems work.

We (Debbie Loakes and Helen Fraser) presented our new research in a talk called "Assessing the role of automatic methods for the transcription of indistinct covert recordings" at the International Association for Forensic Phonetics and Acoustics (IAFPA) conference.

You can watch the Loakes and Fraser IAFPA presentation here (24 minutes):