Please use this identifier to cite or link to this item: http://openarchive.nure.ua/handle/document/5806
Title: Dynamic Bayesian Networks for State- and Action-Space Modelling in Reinforcement Learning
Authors: Леховицький, Д. І.
Ховрат, А. В.
Keywords: Markov Decision Process
Dynamic Bayesian networks
Reinforcement Learning
Issue Date: 2018
Publisher: ХНУРЕ
Citation: Lekhovitsky D., Khovrat A. Dynamic Bayesian Networks for State- and Action-Space Modelling in Reinforcement Learning / D. Lekhovitsky, A. Khovrat // Радіоелектроніка та молодь у XXI столітті : матеріали 22-го Міжнар. молодіжного форуму, 17–19 апр. 2018 г. – Харків : ХНУРЕ, 2018. – С. 118–119.
Abstract: In recent years, Reinforcement Learning has proven its efficiency in solving sequential decision-making problems formalized as Markov Decision Processes. However, several problems remain: high computational complexity for multivariate state- and action-space problems, the need to handle missing data and hidden variables, and the lack of both a good model and a sufficient number of episodes for constructing an optimal policy. In this work we suggest Dynamic Bayesian networks (DBNs) as a solution. These models provide an elegant and compact representation of the joint state-action space, support efficient inference algorithms, including Monte-Carlo methods and Belief Propagation, and can be used in the Dyna-Q algorithm to integrate real-world and simulated experience.
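To illustrate the Dyna-Q integration of real and simulated experience that the abstract mentions, here is a minimal tabular sketch. The environment (a 5-state chain), all hyperparameters, and the deterministic lookup-table model are illustrative assumptions; the paper itself proposes a DBN as the model, which the table stands in for here.

```python
import random
from collections import defaultdict

# Minimal tabular Dyna-Q sketch (illustrative assumptions throughout).
# Environment: a 5-state chain; reaching the last state yields reward 1.
N_STATES, ACTIONS = 5, [0, 1]   # action 0: step left, action 1: step right

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

Q = defaultdict(float)   # Q[(state, action)] action-value estimates
model = {}               # learned model: model[(s, a)] = (s', r)
alpha, gamma, eps, n_planning = 0.1, 0.95, 0.1, 10

random.seed(0)
for episode in range(50):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # direct RL update from real experience
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        # model learning: remember the observed transition
        model[(s, a)] = (s2, r)
        # planning: replay simulated transitions drawn from the learned model
        for _ in range(n_planning):
            (ps, pa), (ps2, pr) = random.choice(list(model.items()))
            Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
        s = s2

print(round(Q[(3, 1)], 2))  # value of stepping right next to the goal
```

In a DBN-based variant, the lookup table would be replaced by the network's transition distribution, with simulated transitions sampled via inference (e.g. Monte-Carlo methods) rather than read from memory.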
Appears in Collections: Кафедра прикладної математики (ПМ)

Files in This Item:
File: Lekhovickiy D. 2018.pdf | Size: 60.56 kB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.