Word2Vec Model Analysis for Semantic and Morphologic Similarities in Turkish Words
Savytska, L., Turgut Sübay, M., Vnukova, N., Bezugla, I., Pyvovarov, V.
This study presents the calculation of similarity between words in the Turkish language using word representation techniques. Word2Vec is a model that represents words as vectors. The model is trained on articles from the Turkish Wikipedia dump as the corpus, and cosine similarity is then used to determine the similarity value. The open-source Python programming language and the Gensim library are used to obtain high-quality word vectors with Word2Vec and to calculate the cosine similarity of those vectors. The Continuous Bag-of-Words (CBOW) algorithm is used to train the word vectors. The cosine similarity values in the results are derived from the weights (dimension values) of the vector dimensions. A window size of 10 and a vector dimensionality of 300 are used. Increasing the number of training epochs helps the vectors converge to more accurate values; the corpus is trained for five epochs with the same parameters. The Turkish corpus contains more than 161 million words, and the vocabulary of unique words obtained from the corpus exceeds 367 thousand. Such a large dataset makes it possible to conduct high-quality semantic and morphological analysis, as well as arithmetic operations, on the word vectors.
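The similarity measure the abstract refers to is the cosine of the angle between two word vectors. A minimal sketch of that calculation in pure Python follows; the vectors here are toy 3-dimensional examples standing in for the paper's 300-dimensional embeddings, and the words "king"/"queen" are illustrative placeholders, not results from the paper:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors:
    dot(u, v) / (|u| * |v|), ranging from -1 to 1."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for trained 300-dimensional Word2Vec embeddings.
v_king = [0.9, 0.1, 0.4]
v_queen = [0.8, 0.2, 0.5]
print(cosine_similarity(v_king, v_queen))  # close to 1.0 for similar words
```

In the pipeline the abstract describes, Gensim computes this same quantity directly from a trained model (e.g. via `model.wv.similarity(word1, word2)`), so the explicit formula above is only for exposition.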
NLP, Word2Vec, word vectors, cosine similarity, word embedding, semantic relations, formal (structural) relations, Turkish language
Word2Vec Model Analysis for Semantic and Morphologic Similarities in Turkish Words / Savytska, L., Turgut Sübay, M., Vnukova, N., Bezugla, I., Pyvovarov, V. // COLINS-2022: 6th International Conference on Computational Linguistics and Intelligent Systems, May 12–13, 2022, Gliwice, Poland, 2022, pp. 161–176. URL: https://ceur-ws.org/Vol-3171/paper17.pdf