# Research activity

The social cooperation program “Energy-efficient information processing” aims to develop energy-efficient computing systems and devices based on new operating principles, through cooperation between system scientists, who study theory and algorithms, and device scientists, who are responsible for experiments and device development.

## Research topics

Owing to recent advances in brain-function measurement and image-processing techniques, it has been found that neuronal connectivity in the brains of living organisms is complex but exhibits a degree of modularity and sparseness. Inspired by this structure of the real brain, which combines regularity and complexity, we aim to explore connection topologies of operation units for high-performance, energy-efficient computing systems. With future hardware implementation of such systems in mind, we are developing optimal network structures of operation units and learning algorithms that maximize the performance of neural information processing.
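The kind of connectivity described above can be illustrated with a small sketch: a random adjacency matrix that is dense within modules and sparse between them. The function name, module count, and connection probabilities below are illustrative assumptions, not parameters from the program's work.

```python
import numpy as np

def modular_sparse_adjacency(n_modules=4, module_size=25,
                             p_intra=0.3, p_inter=0.01, seed=0):
    """Random adjacency matrix with dense intra-module and sparse
    inter-module connections (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    n = n_modules * module_size
    # Assign each unit to a module, then choose a connection
    # probability per pair: p_intra inside a module, p_inter across.
    module_id = np.repeat(np.arange(n_modules), module_size)
    same_module = module_id[:, None] == module_id[None, :]
    p = np.where(same_module, p_intra, p_inter)
    adj = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(adj, 0)  # no self-connections
    return adj

adj = modular_sparse_adjacency()
print(f"overall connection density: {adj.mean():.3f}")
```

Lowering `p_inter` relative to `p_intra` trades long-range wiring (costly in hardware) for mostly local connections, which is the structural property this research direction exploits.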

There are three major issues to overcome in implementing efficient parallel distributed information-processing systems built from solid-state operational devices:

- realizing a huge number of interconnections between operational unit devices;
- developing operational devices with large fan-in and fan-out;
- developing individual operational unit devices with extremely low power consumption.

These issues are difficult to handle with the conventional wiring and device technology of CMOS integrated circuits. Many device researchers have attempted to implement neural computers on chips in ways that depart from CMOS integrated circuits, but large-scale neural computers that overcome the above difficulties have yet to be realized. One reason is that most studies have focused on the novelty of, and interest in, the input-output characteristics of individual devices, without considering a methodology for building networks of operational unit devices. This social cooperation program therefore develops the operational devices required to realize large-scale integrated neural computers through collaboration between researchers studying networks and algorithms and those studying devices.

## Publications

### Journal papers

- Y. Katayama, T. Yamane, D. Nakano, R. Nakane, and G. Tanaka,

"Wave-Based Neuromorphic Computing Framework for Brain-Like Energy Efficiency and Integration,"

IEEE Transactions on Nanotechnology, vol. 15, no. 5, pp. 762-769 (2016 September)

DOI: 10.1109/TNANO.2016.2545690

### Conference papers

- T. Yamane, S. Takeda, D. Nakano, G. Tanaka, R. Nakane, S. Nakagawa, and A. Hirose,

"Dynamics of reservoir computing at the edge of stability,"

Proceedings of the 23rd International Conference on Neural Information Processing (ICONIP), pp. 205-212 (2016)

DOI: 10.1007/978-3-319-46687-3_22

- S. Takeda, D. Nakano, T. Yamane, G. Tanaka, R. Nakane, A. Hirose, and S. Nakagawa,

"Photonic Reservoir Computing Based on Laser Dynamics with External Feedback,"

Proceedings of the 23rd International Conference on Neural Information Processing (ICONIP), pp. 222-230 (2016)

DOI: 10.1007/978-3-319-46687-3_24

- R. Mori, G. Tanaka, R. Nakane, A. Hirose, and K. Aihara,

"Computational Performance of Echo State Networks with Dynamic Synapses,"

Proceedings of the 23rd International Conference on Neural Information Processing (ICONIP), pp. 264-271 (2016)

DOI: 10.1007/978-3-319-46687-3_29

- G. Tanaka, R. Nakane, T. Yamane, D. Nakano, S. Takeda, S. Nakagawa, and A. Hirose,

"Exploiting Heterogeneous Units for Reservoir Computing with Simple Architecture,"

Proceedings of the 23rd International Conference on Neural Information Processing (ICONIP), pp. 187-194 (2016)

DOI: 10.1007/978-3-319-46687-3_20
- T. Yamane, Y. Katayama, R. Nakane, G. Tanaka, and D. Nakano,

"Wave-Based Reservoir Computing by Synchronization of Coupled Oscillators,"

Proceedings of the 22nd International Conference on Neural Information Processing (ICONIP), pp. 198-205 (2015)

DOI: 10.1007/978-3-319-26555-1_23

- G. Tanaka, T. Yamane, D. Nakano, R. Nakane, and Y. Katayama,

"Regularity and Randomness in Modular Network Architectures for Neural Associative Memories,"

Proceedings of the International Joint Conference on Neural Networks (IJCNN) (2015 July)

DOI: 10.1109/IJCNN.2015.7280829

- Y. Katayama, T. Yamane, D. Nakano, R. Nakane, and G. Tanaka,

"Wave-Based Neuromorphic Computing Framework Toward Atomic-Scale Integration,"

Proceedings of the 15th International Conference on Nanotechnology (IEEE NANO) (2015 July)

- Y. Katayama, T. Yamane, D. Nakano, R. Nakane, and G. Tanaka,

"Wave-Based Device Scaling Concept for Brain-Like Energy Efficiency and Integration,"

2015 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH) (2015 July)

DOI: 10.1109/NANOARCH.2015.7180580

- T. Yamane, G. Tanaka, D. Nakano, R. Nakane, and Y. Katayama,

"Performance analysis of auto-associative neural networks on diluted modular networks,"

IEICE Technical Report (Proc. of Information-Based Induction Sciences Workshop), vol. 114, no. 306, pp. 351-356 (2014 November)

- G. Tanaka, T. Yamane, D. Nakano, R. Nakane, and Y. Katayama,

"Hopfield-Type Associative Memory with Sparse Modular Networks,"

Proceedings of the 21st International Conference on Neural Information Processing (ICONIP), Lecture Notes in Computer Science, vol. 8834, pp. 255-262 (2014 November)

DOI: 10.1007/978-3-319-12637-1_32

### Books

- Akira Hirose,

"Fukuso Neural Network, 2nd Edition" (Complex-Valued Neural Networks, 2nd Edition, in Japanese), Saiensu-sha, SGC Library 126 (2016)

- Akira Hirose,

"Complex-Valued Neural Networks, 2nd Edition," Springer (2012)