NIPALS (Nonlinear Iterative Partial Least Squares,
http://en.wikipedia.org/wiki/NIPALS) is a method for iteratively
finding the leading singular vectors of a large matrix. In other
words, it discovers the first principal component
http://en.wikipedia.org/wiki/Principal_component (the direction of
greatest variance) of a set of mean-centred samples, along with the
score of each sample (the projection of that sample onto the
component) and the residual of each sample, which is orthogonal to
the component. By repeating
the procedure on the residuals, the second principal component is
found, and so on.
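A minimal in-memory sketch of the procedure, assuming numpy; the
function name and signature are illustrative, not taken from any
particular library:

    import numpy as np

    def nipals(X, n_components, tol=1e-9, max_iter=500):
        """Extract the top principal components of mean-centred X.

        X has one sample per row. Returns (scores, components).
        """
        X = X.copy()
        scores, components = [], []
        for _ in range(n_components):
            # Start from the column of X with the largest norm.
            t = X[:, np.argmax((X * X).sum(axis=0))].copy()
            for _ in range(max_iter):
                p = X.T @ t             # direction suggested by scores
                p /= np.linalg.norm(p)  # normalise to unit length
                t_new = X @ p           # scores: samples projected on p
                if np.linalg.norm(t_new - t) <= tol * np.linalg.norm(t_new):
                    t = t_new
                    break
                t = t_new
            scores.append(t)
            components.append(p)
            # Deflate: keep only the part of each sample orthogonal
            # to the component just found.
            X -= np.outer(t, p)
        return np.array(scores).T, np.array(components)

Row i of the returned component matrix is the i-th principal
component, column i of the score matrix holds each sample's score on
it, and the deflated copy of X holds the remaining residuals.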
The advantage of NIPALS over more direct methods, like a full SVD, is
that it is memory efficient, and can stop early if only a small
number of principal components is needed. It is also simple to
implement correctly. Additionally, because each iteration needs
nothing but the two matrix-vector products X^T t and X p from the
sketch above, it can be implemented with only two sequential passes
per iteration through the sample data, which is much more efficient
than random access when the data set is too large to fit in memory.
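Concretely, here is a sketch of that two-pass structure, assuming
numpy and a hypothetical generator rows() that re-reads the samples
sequentially from storage, one numpy row at a time:

    import numpy as np

    def nipals_iteration(rows, t):
        """One NIPALS iteration in two sequential passes over the data.

        t holds the current score of each sample; only the vectors t
        and p are kept in memory, never the full sample matrix.
        """
        # Pass 1: accumulate the direction p = X^T t row by row.
        p = None
        for i, x in enumerate(rows()):
            p = x * t[i] if p is None else p + x * t[i]
        p /= np.linalg.norm(p)
        # Pass 2: stream the rows again to recompute the scores t = X p.
        t_new = np.array([x @ p for x in rows()])
        return t_new, p

Iterating this until t stabilises yields the first score/component
pair. Deflation for later components can be applied on the fly, by
subtracting the already-found components from each row as it is read,
so the stored data never needs to be rewritten.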
Despite this, NIPALS is not generally recommended: its convergence
rate is governed by the ratio between the largest eigenvalues of the
sample covariance, so a sample matrix whose leading eigenvalues are
close in magnitude will make NIPALS converge very slowly, as the
sketch below illustrates.
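A rough, self-contained illustration of the effect, again assuming
numpy: the NIPALS inner loop amounts to power iteration on the
covariance matrix X^T X, so counting power-iteration steps on two
covariance matrices, one with well-separated eigenvalues and one with
nearly equal ones, shows the slowdown.

    import numpy as np

    def power_steps(C, tol=1e-9, max_iter=200000):
        # Count iterations until the estimated leading eigenvector
        # of C stops changing; this is the same loop NIPALS runs on
        # the covariance matrix, so it converges at the same rate.
        v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
        for k in range(1, max_iter + 1):
            w = C @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                return k
            v = w
        return max_iter

    print(power_steps(np.diag([10.0, 1.0, 0.1])))   # about ten steps
    print(power_steps(np.diag([10.0, 9.99, 0.1])))  # tens of thousands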
For sparse matrices, Lanczos methods
http://en.wikipedia.org/wiki/Lanczos_algorithm are a better choice,
and for dense matrices, random-projection methods
http://amath.colorado.edu/faculty/martinss/Pubs/2009_HMT_random_review.pdf
can be used. However, these methods are harder to implement in a
single pass. If you know of a good, single-pass, memory-efficient
implementation of either of these methods, please contact the
author.