Multi-Modal User Interactions in Controlled Environments
Multi-Modal User Interactions in Controlled Environments investigates the capture and analysis of users' multimodal behavior (mainly eye gaze, eye fixation, eye blink, and body movement) within real controlled environments (e.g., a controlled supermarket or a personal environment) in order to adapt the response of the computer/environment to the user. Such data are captured using non-intrusive sensors installed in the environment (for example, cameras mounted on the stands of a supermarket). This video-based multimodal behavioral data is analyzed to infer user intentions and to assist users in their day-to-day tasks by seamlessly adapting the system's response to their requirements. The book also addresses how information is presented to the user.
Multi-Modal User Interactions in Controlled Environments is designed for professionals in industry, including those working in security and interactive web television. The book is also suitable for graduate-level students in computer science and electrical engineering.
- One of the first books to focus primarily on multimodality and behavioral data (mainly eye gaze, eye fixation, eye blink, and body movements), rather than on mono-modal tracking and analysis
- Discusses a video-based system that boosts productivity and increases satisfaction by automating repetitive human tasks, optimizes gestures for the information we need, and enables people to work together across space and time