Publication:
Research into speech-to-text transformation module in the proposed model of a speaker’s automatic speech annotation

Date

2022

Authors

Barkovska, O.

Journal title

Journal ISSN

Volume title

Publisher

KhNURE

Research projects

Organizational units

Journal issue

Abstract

The subject matter of the article is the module for converting the speaker’s speech into text in the proposed model of automatic annotation of the speaker’s speech, which has become increasingly popular in Ukraine in the last two years due to the active transition to online forms of communication and education as well as to online workshops, interviews and discussions of urgent issues. Furthermore, the users of personal educational platforms are not always able to join online meetings on time for various reasons (one example is a blackout), which explains the need to save the speakers’ presentations in the form of audio files. The goal of the work is to eliminate false or corrupt data in the process of converting the audio sequence into the relevant text for further semantic analysis. To achieve the goal, the following tasks were solved: a generalized model of incoming audio data summarization was proposed; the existing STT models (for turning audio data into text) were analyzed; the ability of the STT module to operate in Ukrainian was studied; the efficiency and timing of the STT module operation for English and Ukrainian were evaluated. The proposed model of the speaker’s speech automatic annotation has two major functional modules: a speech-to-text (STT) module and a summarization (SUM) module. For the STT module, the following models of linguistic text analysis have been researched and improved: for English, wav2vec2-xls-r-1b, and for Ukrainian, the Ukrainian STT model (wav2vec2-xls-r-1b-uk-with-lm). Artificial neural networks were used as the mathematical apparatus in the models under consideration. The following results were obtained: the word error rate (WER) descriptor is reduced by almost 1.5 times, which influences the quality of word recognition from the audio and may potentially lead to higher-quality output text data. In order to estimate the timing of the STT module operation, English and Ukrainian audio recordings of three different lengths (~5 s, ~60 s and ~240 s) were analyzed. The results demonstrated a clear trend towards faster production of the output file when the computational power of an NVIDIA Tesla T4 graphics accelerator was applied to the longest recording. Conclusions: the use of a deep neural network at the stage of noise reduction in the input file is justified, as it improves the WER metric by almost 25%, whereas increasing the computing power of the graphics processor and the number of stream processors accelerates processing only for large input audio files. The author’s further research is focused on studying the efficiency of the summarization methods applied to the obtained text.
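
The article does not include source code; the sketch below only illustrates the STT-plus-WER evaluation step described above, using the Hugging Face transformers pipeline and the jiwer library. The model repository id (Yehor/wav2vec2-xls-r-1b-uk-with-lm), the file names and the chunk length are illustrative assumptions, not details taken from the paper; decoding with the bundled language model additionally requires the pyctcdecode and kenlm packages.

# Minimal sketch (assumed setup, not the authors' code): transcribe a Ukrainian
# recording with a wav2vec2 CTC model and score it with the WER metric.
import torch
import torchaudio
from jiwer import wer
from transformers import pipeline

MODEL_ID = "Yehor/wav2vec2-xls-r-1b-uk-with-lm"  # assumed repo id of the Ukrainian STT model

# Use a CUDA device (e.g. a Tesla T4, as in the timing experiments) when available.
device = 0 if torch.cuda.is_available() else -1
asr = pipeline("automatic-speech-recognition", model=MODEL_ID, device=device)

# wav2vec2 expects 16 kHz audio, so resample the recording before transcription.
waveform, sample_rate = torchaudio.load("lecture.wav")  # hypothetical input file
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Longer recordings (~240 s in the experiments) are processed in 30 s chunks.
hypothesis = asr(waveform.squeeze().numpy(), chunk_length_s=30)["text"]

# Word error rate against a manually prepared reference transcript (hypothetical file).
reference = open("lecture_reference.txt", encoding="utf-8").read()
print(f"WER: {wer(reference, hypothesis):.3f}")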

Description

Keywords

STT, text, processing, summary, audio file, model, neural networks

Bibliographic description

Barkovska O. Research into speech-to-text transformation module in the proposed model of a speaker’s automatic speech annotation / O. Barkovska // Сучасний стан наукових досліджень та технологій в промисловості. – 2022. – № 4(22). – С. 5–13.

DOI