Hybrid reinforcement learning with expert state sequences
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
In this paper, we present a self-generating modular neural network architecture for supervised learning. In this architecture, any kind of feedforward neural network can be employed as a component net. For a given task, a tree-structured modular neural network is generated automatically by a growing algorithm that partitions the input space recursively, avoiding the problem of a pre-determined structure. Owing to the divide-and-conquer principle underlying the architecture, the modular neural network yields both good performance and significantly faster training. The architecture has been applied to several supervised learning tasks and has achieved satisfactory results.
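The abstract's idea of growing a tree by recursively partitioning the input space, with a simple component model in each region, can be illustrated with a minimal sketch. This is an assumption-laden toy (the class name `ModularNode`, the midpoint split rule, and a constant-fit leaf model are all illustrative choices, not the paper's algorithm, which uses feedforward nets as component models):

```python
# Illustrative sketch (not the paper's exact algorithm): a tree-structured
# "modular" regressor grown by recursively partitioning a 1-D input space.
# Each leaf holds a trivial component model (the mean of its targets); a
# region is split at its midpoint whenever the local fit error is too high.

class ModularNode:
    def __init__(self, xs, ys, lo, hi, depth=0, max_depth=4, tol=0.01):
        self.lo, self.hi = lo, hi
        self.mid = (lo + hi) / 2
        self.value = sum(ys) / len(ys)  # component model: constant fit
        err = sum((y - self.value) ** 2 for y in ys) / len(ys)
        self.children = None
        # Grow the tree: split this region only if the fit is poor.
        if err > tol and depth < max_depth:
            left = [(x, y) for x, y in zip(xs, ys) if x < self.mid]
            right = [(x, y) for x, y in zip(xs, ys) if x >= self.mid]
            if left and right:
                self.children = (
                    ModularNode([x for x, _ in left], [y for _, y in left],
                                lo, self.mid, depth + 1, max_depth, tol),
                    ModularNode([x for x, _ in right], [y for _, y in right],
                                self.mid, hi, depth + 1, max_depth, tol),
                )

    def predict(self, x):
        # Route the input down to the component model owning its region.
        if self.children is None:
            return self.value
        return self.children[0 if x < self.mid else 1].predict(x)

# Grow on a piecewise-constant target: y = 0 for x < 0.5, y = 1 otherwise.
xs = [i / 100 for i in range(100)]
ys = [0.0 if x < 0.5 else 1.0 for x in xs]
tree = ModularNode(xs, ys, 0.0, 1.0)
print(tree.predict(0.25), tree.predict(0.75))  # each region solved by its own module
```

The divide-and-conquer benefit shows up here in miniature: neither half-space needs a complex model once the space is split, which is the intuition behind the claimed faster training.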