Browsing by Author "Yesilevskyi, V."
Now showing 1 - 3 of 3
Publication: Changing Trends in Teaching Computer Vision at Ukrainian Universities in the Age of Artificial Intelligence (International Journal of Emerging Technologies in Learning, 2024) Yesilevskyi, V.; Kyt, M.
In recent years, professional education in computer vision and artificial intelligence (AI) has witnessed transformative changes, a process significantly accelerated by the COVID-19 pandemic, which catalyzed the shift towards online learning formats. In Ukraine, as in many other countries, universities have rapidly adapted their curricula to align with these new realities. The onset of martial law in Ukraine further entrenched online learning as the dominant mode of education. Additionally, research activity has shifted noticeably towards projects tailored to the defense sector, unfolding against the backdrop of significant global advancements in AI for computer vision. This article explores the impact of these shifts, and of the “Artificial Intelligence Development Concept” introduced by the Ministry of Digital Transformation of Ukraine, on the quality and methodologies of computer vision education. It scrutinizes specific training courses in this domain, along with the literature used in the educational process. The exploration includes how deep learning frameworks and libraries such as TensorFlow, PyTorch, and OpenCV, together with the wider Python ecosystem, are being integrated into computer vision curricula, reflecting the latest advancements in the field. It also examines the needs of Ukrainian startups and companies specializing in computer vision projects, analyzing how curricula are evolving to meet the specific requirements of modern software developers.
Furthermore, the article proposes directions for the further development of online education in computer vision teaching in Ukraine.

Publication: Development of a video processing module for the task of air object recognition based on their contours (ХНУРЕ, 2022) Yesilevskyi, V.; Koliadin, A.; Sereda, O.
The subject of research in the article is a module for automatic segmentation and background subtraction, built on the sequential application of image preprocessing methods and a modified interactive image segmentation method, and implemented in a system for optical monitoring of the air situation. The aim of the work is to develop an image segmentation module that increases the efficiency of recognizing the type of an air object in a video image through high-quality automatic segmentation. To solve this problem, a modified interactive algorithm operating in automatic object-selection mode is applied; it determines the foreground pixels of the image more accurately, without operator participation, for subsequent recognition of the airborne object type. The following tasks are solved in the article: analysis of existing methods for binarizing color images for semantic segmentation, as used in image recognition systems; and development of a pipeline of methods for automatic image segmentation in the optical air-monitoring system. The following methods are used: digital image processing, image filtering and semantic segmentation, and graph analysis. The following results are obtained: processing test images with the proposed segmentation and background-subtraction module confirms that its procedures perform as intended.
The developed pipeline of methods included in the module segments 93% of test images correctly in fully automatic mode, without operator participation, which supports the conclusion that the proposed module is effective. Conclusions: implementing the developed segmentation and background-subtraction module in the optical air-monitoring system solved the problem of segmenting video images for subsequent recognition of aerial objects in automatic mode with a high degree of reliability, thereby increasing the operational efficiency of the system.

Publication: Human emotion recognition system using deep learning algorithms (ХНУРЕ, 2022) Yuvchenko, K.; Yesilevskyi, V.; Sereda, O.
The subject of research in the article is the software implementation of a neural image classifier. The work examines emotions as a special type of mental process expressing a person's experience of their attitude to the surrounding world and to themselves. Emotions can be expressed in different ways: facial expressions, posture, motor reactions, voice. The human face, however, is the most expressive. Companies use emotion-recognition technologies to improve customer service, support decisions when interviewing candidates, and optimize the emotional impact of advertising. Therefore, the purpose of the work is to find, and optimize for accuracy, the most satisfactory algorithm for classifying human emotions from facial images. The following tasks are solved: review and analysis of the current state of the emotion recognition problem; consideration of classification methods; selection of the method best suited to the given task; development of a software implementation for emotion classification; and analysis of the classifier's performance, with conclusions drawn from the obtained data.
An image classification method based on a densely connected convolutional neural network is used. Results: this method is well suited to solving emotion recognition problems, as it achieves fairly high accuracy. The quality of the classifier was evaluated with the following metrics: accuracy; confusion matrix; precision, recall, and F1-score; and the ROC curve with AUC values. The accuracy is relatively high at 63%, given that the dataset has unbalanced classes, and the AUC is also high at 89%. Conclusions: the trained model recognizes human emotions well and can be successfully used for its intended purpose in the future.
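The per-class metrics listed in the last abstract (accuracy, confusion matrix, precision, recall, F1-score) can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation: the label set and the sample predictions below are hypothetical, standing in for a classifier's output on a facial-emotion test set.

```python
def confusion_matrix(y_true, y_pred, labels):
    """Count (true, predicted) label pairs."""
    m = {(t, p): 0 for t in labels for p in labels}
    for t, p in zip(y_true, y_pred):
        m[(t, p)] += 1
    return m

def per_class_metrics(m, labels):
    """Precision, recall and F1 for each class from a confusion matrix."""
    out = {}
    for c in labels:
        tp = m[(c, c)]                                  # correctly labelled c
        fp = sum(m[(t, c)] for t in labels if t != c)   # predicted c, was not
        fn = sum(m[(c, p)] for p in labels if p != c)   # was c, predicted other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[c] = (precision, recall, f1)
    return out

# Hypothetical subset of emotion classes and predictions (not the paper's data).
labels = ["happy", "sad", "neutral"]
y_true = ["happy", "happy", "sad", "neutral", "sad", "happy"]
y_pred = ["happy", "sad", "sad", "neutral", "neutral", "happy"]

m = confusion_matrix(y_true, y_pred, labels)
accuracy = sum(m[(c, c)] for c in labels) / len(y_true)
print(f"accuracy = {accuracy:.2f}")
for c, (p, r, f1) in per_class_metrics(m, labels).items():
    print(f"{c}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

With unbalanced classes, as noted in the abstract, the per-class breakdown matters more than overall accuracy, since a majority class can dominate the single accuracy figure.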