Publication:
Research into speech-to-text transformation module in the proposed model of a speaker’s automatic speech annotation

dc.contributor.author: Barkovska, O.
dc.date.accessioned: 2024-01-20T12:20:56Z
dc.date.available: 2024-01-20T12:20:56Z
dc.date.issued: 2022
dc.description.abstract: The subject matter of the article is the module for converting the speaker’s speech into text in the proposed model of automatic annotation of the speaker’s speech, which has become increasingly popular in Ukraine over the last two years owing to the active transition to online communication and education, as well as to conducting workshops, interviews and discussions of urgent issues. Furthermore, the users of personal educational platforms are not always able to join online meetings on time for various reasons (one example is a blackout), which explains the need to save the speakers’ presentations in the form of audio files. The goal of the work is to eliminate false or corrupt data in the process of converting the audio sequence into the relevant text for further semantic analysis. To achieve the goal, the following tasks were solved: a generalized model of incoming audio data summarization was proposed; the existing STT (speech-to-text) models were analyzed; the ability of the STT module to operate in Ukrainian was studied; and the efficiency and timing of the STT module were evaluated for English and Ukrainian. The proposed model of the speaker’s speech automatic annotation has two major functional modules: a speech-to-text (STT) module and a summarization (SUM) module. For the STT module, the following models of linguistic text analysis were researched and improved: for English, wav2vec2-xls-r-1b, and for Ukrainian, the Ukrainian STT model (wav2vec2-xls-r-1b-uk-with-lm). Artificial neural networks were used as the mathematical apparatus in the models under consideration. The following results were obtained: the improved models demonstrate a reduction of the word error rate (WER) descriptor by almost 1.5 times, which influences the quality of word recognition from the audio and may potentially yield higher-quality output text data. To estimate the timing of STT module operation, three English and Ukrainian audio recordings of various lengths (~5 s, ~60 s and ~240 s) were analyzed. The results demonstrated a clear trend towards faster production of the output file for the longest recording when the computational power of an NVIDIA Tesla T4 graphics accelerator was applied. Conclusions: the use of a deep neural network at the noise-reduction stage for the input file is justified, as it improves the WER metric by almost 25%, whereas increasing the computing power of the graphics processor and the number of stream processors accelerates processing only for large input audio files. The author’s further research focuses on the efficiency of the summarization module applied to the obtained text.
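For reference, the word error rate used above is computed as WER = (S + D + I) / N, where S, D and I are the numbers of substituted, deleted and inserted words relative to a reference transcript of N words; lower values mean better recognition. The sketch below is not the author’s code: it only illustrates how the Ukrainian STT model named in the abstract could be invoked and scored with WER, assuming it is the checkpoint published on the Hugging Face hub as Yehor/wav2vec2-xls-r-1b-uk-with-lm; the file name lecture.wav and the reference string are hypothetical placeholders.

# Hedged sketch of the STT step described in the abstract (not the author's
# code). Assumes the hub id "Yehor/wav2vec2-xls-r-1b-uk-with-lm" and a local
# recording "lecture.wav"; both are illustrative placeholders.
import torch
from transformers import pipeline
from jiwer import wer

device = 0 if torch.cuda.is_available() else -1  # e.g. an NVIDIA Tesla T4

asr = pipeline(
    "automatic-speech-recognition",
    model="Yehor/wav2vec2-xls-r-1b-uk-with-lm",  # Ukrainian wav2vec2 model
    device=device,
)

hypothesis = asr("lecture.wav")["text"]          # STT output for the audio
reference = "expected transcript of the audio"   # ground truth (placeholder)
print(f"WER: {wer(reference, hypothesis):.3f}")  # lower WER = better STT

As the abstract notes, moving inference to a GPU pays off mainly for long recordings; for short clips the model-loading and transfer overhead dominates, which is consistent with the reported timing results.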
dc.identifier.citation: Barkovska O. Research into speech-to-text transformation module in the proposed model of a speaker’s automatic speech annotation / O. Barkovska // Сучасний стан наукових досліджень та технологій в промисловості. – 2022. – № 4(22). – С. 5–13.
dc.identifier.uri: https://openarchive.nure.ua/handle/document/25382
dc.language.iso: en
dc.publisher: ХНУРЕ
dc.subject: STT
dc.subject: text
dc.subject: processing
dc.subject: summary
dc.subject: audiofile
dc.subject: model
dc.subject: neural networks
dc.title: Research into speech-to-text transformation module in the proposed model of a speaker’s automatic speech annotation
dc.type: Article
dspace.entity.type: Publication

Files

Original bundle
Name: EOM_SSND_Pr_2022_n1_5-12.pdf
Size: 522.58 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 9.55 KB
Format: Item-specific license agreed upon to submission