Independent component analysis: algorithms and applications

Neural Netw. 2000 May-Jun;13(4-5):411-30. doi: 10.1016/s0893-6080(00)00026-5.

Abstract

A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
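The separation task the abstract describes, recovering statistically independent, non-Gaussian components from linear mixtures, can be illustrated with a minimal NumPy sketch of a fixed-point ICA update (the tanh nonlinearity and symmetric decorrelation used here are standard textbook choices, not code from this paper):

```python
import numpy as np

def ica(X, n_iter=200, tol=1e-6, seed=0):
    """Illustrative fixed-point ICA on centered data X (n_samples, n_features).

    Whitens the data, then iterates the update
        W <- E[g(WZ) Z^T] - diag(E[g'(WZ)]) W
    with g = tanh, followed by symmetric decorrelation.
    Returns the estimated independent components (n_samples, n_features).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    # Whitening: decorrelate and scale to unit variance.
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (E @ np.diag(d ** -0.5) @ E.T)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)                 # g applied to projections
        g_prime = 1.0 - G ** 2               # derivative of tanh
        W_new = (G.T @ Z) / Z.shape[0] - np.diag(g_prime.mean(axis=0)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W, via SVD.
        u, _, vt = np.linalg.svd(W_new)
        W_new = u @ vt
        converged = np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < tol
        W = W_new
        if converged:
            break
    return Z @ W.T

# Toy blind source separation: mix two non-Gaussian signals, then unmix.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # sinusoid and square wave
A = np.array([[1.0, 0.5], [0.5, 1.0]])             # mixing matrix
X = S @ A.T
X -= X.mean(axis=0)                                 # center before ICA
S_hat = ica(X)
```

Each column of `S_hat` should match one original source up to the usual ICA ambiguities of sign, scale, and permutation; note that ICA relies on the sources being non-Gaussian, which is why Gaussian data cannot be separated this way.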

MeSH terms

  • Algorithms*
  • Artifacts*
  • Brain / physiology
  • Humans
  • Magnetoencephalography
  • Neural Networks, Computer*
  • Normal Distribution