Multi-Modal and Multi-Camera Attention in Smart Environments

B. Schauerte, J. Richarz, T. Plötz, C. Thurau and G. A. Fink
Proc. Int. Conf. Multimodal Interfaces and Workshop on Machine Learning for Multi-modal Interaction (ICMI-MLMI), pages 261-268, 2009.

Cambridge, MA, USA


Abstract

This paper considers the problem of multi-modal saliency and attention. Saliency is a cue that is often used for directing the attention of a computer vision system, e.g., in smart environments or for robots. Unlike the majority of recent publications on visual/audio saliency, we aim at a well-grounded integration of several modalities. The proposed framework is based on fuzzy aggregations and offers a flexible, plausible, and efficient way of combining multi-modal saliency information. Besides incorporating different modalities, we extend classical 2D saliency maps to multi-camera and multi-modal 3D saliency spaces. For experimental validation, we realized the proposed system within a smart environment. The evaluation was carried out in a demanding setup under real-life conditions, including focus-of-attention selection for multiple subjects and concurrently active modalities.
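
The fuzzy-aggregation idea mentioned in the abstract can be illustrated with a minimal sketch: per-modality saliency maps, viewed as fuzzy membership functions over locations, are combined pointwise with a fuzzy union (t-conorm). The specific operators shown here (maximum and algebraic sum), the `combine_saliency` helper, and the map shapes are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_or_max(a, b):
    # Fuzzy union via the maximum t-conorm: S(a, b) = max(a, b)
    return np.maximum(a, b)

def fuzzy_or_algebraic(a, b):
    # Fuzzy union via the algebraic (probabilistic) sum: S(a, b) = a + b - a*b
    return a + b - a * b

def combine_saliency(maps, aggregate=fuzzy_or_algebraic):
    """Combine per-modality saliency maps (values in [0, 1]) into one map.

    Each map is treated as a fuzzy membership function over image
    locations; aggregation is a pointwise fuzzy union (t-conorm).
    """
    combined = maps[0]
    for m in maps[1:]:
        combined = aggregate(combined, m)
    return combined

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    visual = rng.random((48, 64))  # e.g., a visual saliency map (hypothetical values)
    audio = rng.random((48, 64))   # e.g., sound-source likelihood projected onto the same grid
    fused = combine_saliency([visual, audio])
    print(fused.shape, float(fused.min()), float(fused.max()))
```

The same pointwise aggregation extends naturally from 2D image maps to a 3D saliency space: one would evaluate the per-modality saliency values at each 3D cell (e.g., after back-projecting camera views) and apply the chosen t-conorm there, although the concrete 3D construction used in the paper is not detailed in this abstract.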