
Graduate School of Environmental, Life, Natural Science and Technology / Information Technology, Electrical Engineering, and Mathematical and Data Sciences Program, School of Engineering, Okayama University

Information and Mathematical Engineering Lab

Distributed Computation by Multiagent Systems

Distributed computation by multiple agents that form a network has attracted much attention and is expected to be applied to information processing in sensor networks, cooperative work by swarm robots, distributed learning by neural networks, and so on. In this research, we study methods by which agents cooperatively solve given problems in settings where each agent can communicate only with a small number of neighboring agents. For example, we are developing algorithms with which agents estimate statistics of their state values, compute the algebraic connectivity of their network, solve large-scale linear equations, perform principal component analysis and nonnegative matrix factorization, and carry out distributed learning. Papers containing our results have been published in IEEE Control Systems Letters and other journals.
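
As a rough illustration of the kind of computation involved (not one of the lab's published algorithms), the following sketch shows distributed averaging over an assumed five-agent ring: each agent repeatedly mixes its state with those of its neighbors, and all states converge to the network-wide average.

import numpy as np

# Undirected ring of 5 agents given as neighbor lists (hypothetical example).
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
x = np.array([3.0, -1.0, 4.0, 0.5, 2.5])   # initial agent states
eps = 0.2                                   # step size; must be below 1/(max degree)

for _ in range(200):
    # Each agent i uses only its neighbors' states x[j], j in neighbors[i].
    x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                            for i in range(len(x))])

print(x)   # every entry approaches the average (1.8) of the initial states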

Keywords: multiagent system, consensus, distributed computation, distributed learning


Distributed computation by a multiagent system

Efficient Computation Methods for Nonnegative Matrix Factorization

Nonnegative matrix factorization (NMF) is a method for extracting features from given nonnegative data and has been used in a wide range of fields such as face image processing, gene analysis, text mining, and music classification. In this research, we analyze the global convergence of the multiplicative update and hierarchical alternating least-squares algorithms, which are widely used as efficient computation methods for NMF, and further improve their efficiency. Papers containing our results have been published in Computational Optimization and Applications and other journals.
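
For reference, here is a minimal sketch of the standard multiplicative update rule for NMF under the Frobenius norm (the plain textbook form, not the improved algorithms developed in this research); a random nonnegative matrix stands in for real data.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 30))          # nonnegative data matrix (random placeholder)
r = 5                             # factorization rank
W = rng.random((20, r))
H = rng.random((r, 30))
eps = 1e-12                       # small constant to avoid division by zero

for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update of H
    W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative update of W

print(np.linalg.norm(X - W @ H))  # the reconstruction error is non-increasing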

Keywords: nonnegative matrix factorization, feature extraction, optimization algorithm, global convergence


Nonnegative matrix factorization.

Optimization of Network Structure

There are many large-scale and complex networks in the real world, such as the Internet, the World Wide Web, power grids, friendship networks, neural networks, and gene networks. In recent years, many researchers have been studying complex networks with the aim of clarifying common properties of their structure, growth processes, and information propagation mechanisms. In this research, we focus on the algebraic connectivity and the clustering coefficient, two indices that characterize the structure of a network, and address the problem of finding networks that maximize or locally maximize these indices under appropriate conditions. Papers containing our results have been published in Discrete Applied Mathematics, IEEE Transactions on Control of Network Systems, and other journals.
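
As a small illustration (our own toy example, not taken from the papers above), the algebraic connectivity of a graph is the second-smallest eigenvalue of its Laplacian L = D - A and can be computed directly:

import numpy as np

# Adjacency matrix of a hypothetical 5-node graph.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[1])   # algebraic connectivity; positive iff the graph is connected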

Keywords: complex networks, clustering coefficient, algebraic connectivity, graph theory


A graph that locally maximizes the clustering coefficient

Image Understanding via Inverse Rendering

It is easy for a person to understand the contents of an image, but not for a computer, mainly because the unconscious human vision process cannot be formulated explicitly as a mathematical procedure, even though such robust image understanding is essential for applications such as biometrics and autonomous vehicles. One approach is inverse rendering, in which image understanding is formulated as the inverse problem of the image formation process (i.e., computer graphics). Papers containing our results have been published in, for example, the proceedings of IW-FCV2020 (Springer CCIS).
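
The following toy sketch (a simplified example of ours, not the method published at IW-FCV2020) conveys the inverse-rendering idea: assume a Lambertian image formation model I = max(n·l, 0) with known surface normals n, and recover the light direction l from the observed intensities by least squares.

import numpy as np

rng = np.random.default_rng(1)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # unit surface normals
true_light = np.array([0.3, 0.5, 0.8])
intensities = np.clip(normals @ true_light, 0.0, None)      # forward rendering step

lit = intensities > 0                                       # use only lit pixels
l_est, *_ = np.linalg.lstsq(normals[lit], intensities[lit], rcond=None)
print(true_light, l_est)                                    # recovered light direction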

Keywords: image formation model, inverse problem, numerical optimization


Examples of input images and 3D shape models

Efficient Learning Algorithms for Support Vector Machines

The Support Vector Machine (SVM) is one of the most popular pattern classifiers. Training an SVM is formulated as a convex quadratic programming problem whose number of variables equals the number of training samples. In this research, we derived a global convergence condition for the decomposition method, one of the efficient methods for solving the large-scale convex quadratic programming problems that arise in SVM training, and we also developed new learning algorithms for SVMs. Papers containing our results have been published in IEEE Transactions on Neural Networks and other journals.
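
The sketch below illustrates this quadratic programming viewpoint on a toy data set: it maximizes the SVM dual sum(a) - 0.5 * a'Qa subject to 0 <= a_i <= C by projected gradient ascent, with the bias absorbed into the kernel by adding a constant so that the usual equality constraint can be dropped. This simplification is ours for illustration only and is not the decomposition method analyzed in this research.

import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.5, 1.0, (20, 2)), rng.normal(1.5, 1.0, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
C = 1.0

K = X @ X.T + 1.0                         # linear kernel plus a constant (absorbs the bias)
Q = (y[:, None] * y[None, :]) * K
eta = 1.0 / np.linalg.eigvalsh(Q)[-1]     # step size below 1/(largest eigenvalue of Q)
a = np.zeros(len(y))

for _ in range(5000):
    a = np.clip(a + eta * (1.0 - Q @ a), 0.0, C)   # projected gradient step on the dual

w = (a * y) @ X                           # primal weight vector
b = (a * y).sum()                         # bias recovered from the constant-kernel trick
print(np.mean(np.sign(X @ w + b) == y))   # training accuracy on the toy data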

Keywords: support vector machine, pattern classification, regression, quadratic programming problem


Pattern classification by an SVM

Signal Processing by means of Cellular Neural Networks

Cellular neural networks (CNNs) are a type of recurrent neural network, distinguished from other models by the fact that their basic units, called cells, are arranged on a grid and each cell is connected only to its neighbors. Owing to this structure, CNNs are used mainly in image processing, especially real-time image processing (the figure below shows an example of image halftoning by a CNN). The local connections also make CNNs easy to implement as integrated circuits; in fact, CNN chips have been fabricated at several universities in Europe. In this research, we address fundamental issues in signal processing with CNNs. Specifically, we study the characterization of stationary patterns that appear in CNNs (in the halftoning example below, the output image is one such stationary pattern) and optimal design methods for associative memories. Papers containing our results have been published in IEEE Transactions on Circuits and Systems I and other journals.
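
For concreteness, here is a compact simulation of the standard CNN state equation dx/dt = -x + A*y + B*u + z with output y = 0.5(|x+1| - |x-1|), integrated by forward Euler. The templates A and B below are simple placeholders chosen only to show the mechanics (each cell saturates to +1 or -1 according to the sign of its input), not a halftoning design from the literature.

import numpy as np
from scipy.signal import convolve2d

def output(x):                 # piecewise-linear CNN output nonlinearity
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

u = np.random.default_rng(3).uniform(-1.0, 1.0, (32, 32))          # input image in [-1, 1]
A = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.0]])  # feedback template (placeholder)
B = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])  # control template (placeholder)
z = 0.0                                                            # bias term
x = np.zeros_like(u)
dt = 0.05

for _ in range(400):           # forward Euler integration of the cell dynamics
    dx = -x + convolve2d(output(x), A, mode="same") + convolve2d(u, B, mode="same") + z
    x = x + dt * dx

y = output(x)                  # the converged outputs form a binary (+1/-1) pattern
print(np.unique(np.round(y)))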

Keywords: cellular neural network, nonlinear circuits, signal processing


Image halftoning by a CNN

Stability Analysis of Nonlinear Dynamical Systems Related to Neural Networks

Artificial recurrent neural networks are a simple model of the human brain and are expected to perform advanced information processing at high speed thanks to the parallel and distributed operation of interconnected neurons. In this research, we theoretically analyzed the global dynamical behavior of Hopfield networks and other nonlinear systems related to recurrent neural networks, and derived conditions on the network parameters under which the network converges to an equilibrium state from any initial state. Papers containing our results have been published in IEEE Transactions on Circuits and Systems I and other journals.
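
A classical special case of such results can be checked in a few lines: a discrete-time Hopfield network with symmetric weights, zero diagonal, and asynchronous updates never increases its energy E(s) = -1/2 s'Ws, so it settles into an equilibrium. The weights below are random and serve only as an assumed example.

import numpy as np

rng = np.random.default_rng(4)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0                      # symmetric weight matrix
np.fill_diagonal(W, 0.0)                 # no self-connections
s = rng.choice([-1.0, 1.0], size=n)      # random initial state

def energy(s):
    return -0.5 * s @ W @ s

energies = [energy(s)]
for _ in range(100):
    i = rng.integers(n)                  # asynchronous update of one random neuron
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    energies.append(energy(s))

print(energies[0], energies[-1])         # the energy sequence is non-increasing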

Keywords: recurrent neural network, nonlinear dynamical system, stability


Chaotic trajectories generated by a recurrent neural network with three neurons.