Self-aware computing is an emerging field of research. It considers systems and applications that proactively gather and maintain knowledge about aspects of themselves, learn and reason on an ongoing basis, and finally express themselves in dynamic ways in order to meet their goals under changing conditions. Concepts of self-awareness have been established in psychology, philosophy, and cognitive science but are rather new to computing and networking. “As the complexity of technical systems is steadily increasing, traditional computing systems with a predefined functionality will soon reach their limits. Hence, innovative computing systems must be able to continuously assess their own state and make autonomous decisions in order to adapt to unforeseen changes,” Bernhard Rinner explains.
As a case study, the NES researchers developed a network of cameras based on computational self-awareness. The cameras pursue a common objective; however, depending on their specific level of self-awareness, they autonomously decide on their individual contributions to that objective. “As shown in a person-tracking use case in the camera network, we were able to demonstrate that computational self-awareness leads to more resource-efficient solutions,” Jennifer Simonjan, a member of the institute’s research staff, points out. “The autonomous learning of the network topology is an additional advantage. It allows the camera network to configure itself independently.” Computational self-awareness is not only suitable as a design method for camera networks; it has already been successfully applied to high-performance computers for financial modeling, interactive music systems, and the management of cloud systems.
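One way such cameras can autonomously decide on their individual contributions is through market-inspired handover, where each camera bids its own estimate of how well it can observe the tracked person and the best bidder takes over — an idea explored in the distributed smart-camera work listed below. The following is a minimal, self-contained sketch of that principle; the class and function names, the 1-D positions, and the distance-based visibility utility are all invented for illustration:

```python
class Camera:
    """A smart camera that estimates how well it can see a tracked person."""

    def __init__(self, name, position):
        self.name = name
        self.position = position

    def visibility(self, target_position):
        # Hypothetical local utility: visibility decays with distance
        # to the target and reaches zero beyond a sensing range of 10.
        distance = abs(self.position - target_position)
        return max(0.0, 1.0 - distance / 10.0)


def handover_auction(cameras, target_position):
    """Each camera bids its own visibility estimate; the highest bidder
    takes over tracking. No central controller needs a global view."""
    bids = {cam.name: cam.visibility(target_position) for cam in cameras}
    winner = max(bids, key=bids.get)
    return winner, bids


cameras = [Camera("cam_a", 0.0), Camera("cam_b", 6.0), Camera("cam_c", 12.0)]
winner, bids = handover_auction(cameras, target_position=7.0)
print(winner)  # cam_b, the camera closest to the target
```

Because each camera only evaluates its own sensing situation, the network adapts when cameras join, fail, or move, without any node maintaining global state.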
Together with partners from the University of Genoa, we have recently introduced a bio-inspired framework for generative and descriptive dynamic models that support self-awareness (SA) in a computationally efficient way. Generative models facilitate predicting future states, while descriptive models enable selecting the representation that best fits the current observation. Our framework is founded on the analysis and extension of three bio-inspired theories that study SA from different viewpoints. We demonstrate how probabilistic techniques, such as cognitive dynamic Bayesian networks and generalized filtering paradigms, can learn appropriate models from multi-dimensional proprioceptive and exteroceptive signals acquired by an autonomous system.
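The framework itself is described in the publication below; purely as a loose illustration of the generative/descriptive split, the sketch that follows pairs simple 1-D generative models (each predicting the next observation) with a descriptive step that selects, by maximum likelihood under Gaussian noise, the model that best explains the new observation. The two candidate models, the noise level, and all names are invented for illustration:

```python
import math


def gaussian_logpdf(x, mean, std):
    """Log-density of a 1-D Gaussian; scores how well a model's
    prediction explains the current observation."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)


# Hypothetical generative models of a 1-D signal: each predicts the
# next value from the current one ("static" vs "drifting" dynamics).
models = {
    "static": lambda x: x,           # predicts no change
    "drifting": lambda x: x + 1.0,   # predicts a constant upward drift
}


def select_model(prev_obs, obs, noise_std=0.5):
    """Descriptive step: pick the representation whose generative
    prediction best fits the new observation (maximum likelihood)."""
    scores = {name: gaussian_logpdf(obs, predict(prev_obs), noise_std)
              for name, predict in models.items()}
    best = max(scores, key=scores.get)
    return best, scores


best, scores = select_model(prev_obs=2.0, obs=3.1)
print(best)  # "drifting": the observation moved by roughly 1.0
```

In the actual framework the models are learned dynamic Bayesian networks over multi-dimensional proprioceptive and exteroceptive signals rather than fixed one-liners, but the division of labor is the same: generative models propose predictions, and the descriptive layer keeps whichever representation currently explains the data best.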
C. Regazzoni, L. Marcenaro, D. Campo, and B. Rinner. Multi-sensorial generative and descriptive self-awareness models for autonomous systems. Proceedings of the IEEE, 2020.
P. Lewis, M. Platzner, B. Rinner, J. Torresen, and X. Yao. Self-Aware Computing Systems – An Engineering Approach. Springer, 2016.
B. Rinner, L. Esterle, J. Simonjan, G. Nebehay, R. Pflugfelder, P. R. Lewis, and G. F. Dominguez. Self-aware and self-expressive camera networks. IEEE Computer, 2015.
P. R. Lewis, L. Esterle, A. Chandra, B. Rinner, J. Torresen, and X. Yao. Static, dynamic and adaptive heterogeneity in distributed smart camera networks. ACM Transactions on Autonomous and Adaptive Systems, 2015.
L. Esterle, P. R. Lewis, X. Yao, and B. Rinner. Socio-economic vision graph generation and handover in distributed smart camera networks. ACM Transactions on Sensor Networks, 2014.