Rei Odaira, Jose G. Castanos, et al.
IISWC 2013
One way to speed up convergence in a large optimization problem is to introduce a smaller, approximate version of the problem at a coarser scale and to alternate between relaxation steps for the fine-scale and coarse-scale problems. We exhibit such an optimization method for neural networks governed by quite general objective functions. At the coarse scale there is a smaller approximating neural net which, like the original net, is nonlinear and has a nonquadratic objective function. The transitions and information flow from fine to coarse scale and back do not disrupt the optimization, and the user need only specify a partition of the original fine-scale variables. Thus the method can be applied easily to many problems and networks. We show positive experimental results, including cost comparisons.
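The abstract above describes an alternating fine/coarse relaxation scheme in which the coarse problem is built from a user-supplied partition of the fine-scale variables. The sketch below illustrates that general idea only, under assumptions not taken from the paper: plain gradient-descent sweeps stand in for the relaxation step, the coarse "net" is just the fine objective restricted to a piecewise-constant subspace defined by the partition, and the objective, step sizes, and variable names are all hypothetical.

```python
"""Hedged sketch of alternating fine-scale / coarse-scale relaxation.
This is a generic multilevel-optimization illustration, not the authors'
exact method: relaxation = gradient descent, coarse problem = original
objective along a piecewise-constant subspace given by a partition."""
import numpy as np


def numeric_grad(f, x, eps=1e-6):
    """Central-difference gradient so the sketch works for any smooth f."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g


def relax(f, x, steps=10, lr=0.05):
    """A few gradient-descent 'relaxation' sweeps on objective f."""
    for _ in range(steps):
        x = x - lr * numeric_grad(f, x)
    return x


def prolongation(partition, n):
    """Piecewise-constant interpolation matrix P (n x n_coarse):
    each fine variable copies the value of its group's coarse variable."""
    groups = sorted(set(partition))
    P = np.zeros((n, len(groups)))
    for i, g in enumerate(partition):
        P[i, groups.index(g)] = 1.0
    return P


def multiscale_optimize(f, x0, partition, cycles=5):
    """Alternate fine-scale sweeps with a coarse-scale correction,
    then transfer the coarse correction back to the fine variables."""
    x = x0.copy()
    P = prolongation(partition, x.size)
    for _ in range(cycles):
        x = relax(f, x)                              # fine-scale relaxation
        coarse_obj = lambda y: f(x + P @ y)          # coarse surrogate problem
        y = relax(coarse_obj, np.zeros(P.shape[1]))  # coarse-scale relaxation
        x = x + P @ y                                # prolong correction back
    return x


if __name__ == "__main__":
    # Toy nonquadratic objective and partition, for illustration only.
    f = lambda x: np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(3 * x))
    x0 = np.zeros(8)
    partition = [0, 0, 0, 0, 1, 1, 1, 1]  # user-specified grouping of fine variables
    x_star = multiscale_optimize(f, x0, partition)
    print("final objective:", f(x_star))
```

The coarse step here only searches along the subspace spanned by the partition, which is one simple way to realize a "smaller approximating" problem; the paper's coarse network construction and transfer operators are more elaborate.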
Hong-linh Truong, Maja Vukovic, et al.
ICDH 2024
Zahra Ashktorab, Djallel Bouneffouf, et al.
IJCAI 2025
Salvatore Certo, Anh Pham, et al.
Quantum Machine Intelligence