AJND Copyright © 2012. All rights reserved. Published by e-Century Publishing Corporation, Madison, WI 53711, USA
Am J Neurodegener Dis 2012;1(3):292-304

Original Article
Audio representations of multi-channel EEG: a new tool for diagnosis of
brain disorders

François B Vialatte, Justin Dauwels, Toshimitsu Musha, Andrzej Cichocki

Laboratoire Sigma, École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI ParisTech), 10
rue Vauquelin, 75231 Paris Cedex 05, France; Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science
Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan; School of Electrical & Electronic Engineering
(EEE), Nanyang Technological University (NTU), 50 Nanyang Avenue, Singapore 639798; Brain Functions Laboratory
Inc., KSP Building E211, Sakado, Takatsu-ku, Kawasaki-shi, Kanagawa 213-0012, Japan

Received May 31, 2012; Accepted August 22, 2012; Epub November 15, 2012; Published November 30, 2012

Abstract: Objective: The objective of this paper is to develop audio representations of electroencephalographic
(EEG) multichannel signals, useful for medical practitioners and neuroscientists. The fundamental question explored
in this paper is whether clinically valuable information contained in the EEG, not available from the conventional
graphical EEG representation, might become apparent through audio representations. Methods and Materials:
Music scores are generated from sparse time-frequency maps of EEG signals. Specifically, EEG signals of
patients with mild cognitive impairment (MCI) and (healthy) control subjects are considered. Statistical differences
in the audio representations of MCI patients and control subjects are assessed through mathematical complexity
indexes as well as a perception test; in the latter, participants try to distinguish between audio sequences from MCI
patients and control subjects. Results: Several characteristics of the audio sequences, including sample entropy,
number of notes, and synchrony, differ significantly between MCI patients and control subjects (Mann-Whitney, p <
0.01). Moreover, participants in the perception test were able to classify the audio sequences accurately (89%
correctly classified). Conclusions: The proposed audio representation of multi-channel EEG signals helps to
elucidate the complex structure of the EEG. Promising results were obtained on a clinical EEG data set.
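The group comparison described in the Results can be sketched as follows: a per-subject feature of the audio sequences (here, number of notes) is compared between the MCI and control groups with a Mann-Whitney U test. The feature values below are hypothetical illustrations, not the study's data.

```python
# Hedged sketch: Mann-Whitney U comparison of one audio-sequence feature
# (number of notes per subject) between MCI patients and controls.
# The values below are made up for illustration only.
from scipy.stats import mannwhitneyu

mci_notes = [34, 29, 41, 38, 30, 27, 35, 33]      # hypothetical MCI subjects
control_notes = [52, 47, 60, 55, 49, 58, 51, 62]  # hypothetical controls

u_stat, p_value = mannwhitneyu(mci_notes, control_notes,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4g}")
```

Because the Mann-Whitney test is rank-based, it makes no normality assumption about the feature distributions, which suits small clinical samples; the same call would be repeated for each feature (sample entropy, synchrony, etc.) in turn.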

Keywords: Multichannel-EEG sonification, time-frequency transform, bump modeling, EEG, Alzheimer’s disease


Address all correspondence to:
Dr. François Vialatte,
Laboratoire Sigma, École Supérieure de Physique et de
Chimie Industrielles de la Ville de Paris (ESPCI
ParisTech), 10 rue Vauquelin, 75231 Paris Cedex 05.
Tel: +33 (0)1 4079 4466; Fax: +33 (0)1 4707 1393;
E-mail: francois.vialatte@espci.fr