Lecture 1 coda
January 22nd, 2020
Ian Goodfellow, Yoshua Bengio and Aaron Courville: Deep Learning. MIT Press, 2016.
Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman: Mining of Massive Datasets. Cambridge University Press, 2014.
Matrix eigenvalues
In principle, if A is an n×n matrix, it has n eigenvalues.
In practice, they might not be real, and they are always costly to find.
A square matrix A is positive semi-definite if xᵀAx ≥ 0 for every vector x.
In such a case its eigenvalues are non-negative: λᵢ ≥ 0.
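A quick NumPy check of this fact (a minimal sketch; the matrix B is an arbitrary example, and BᵀB is positive semi-definite by construction):

    import numpy as np

    # Any matrix of the form B^T B is positive semi-definite by construction.
    B = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    A = B.T @ B

    # eigvalsh targets symmetric matrices and returns real eigenvalues.
    print(np.linalg.eigvalsh(A))   # both eigenvalues are >= 0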
Eigenvectors, i.e., solutions to Ax = λx, describe the directions along which matrix A operates an expansion, as opposed to a rotation or a deformation.
Example: shear mapping. The blue (horizontal) line is unchanged: it is an eigenvector, corresponding to eigenvalue λ = 1.
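Checking the shear example numerically (assuming the standard 2×2 shear matrix with factor k = 1; the concrete matrix is an assumption, since the slide's figure is not reproduced here):

    import numpy as np

    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])     # horizontal shear with factor k = 1
    values, vectors = np.linalg.eig(S)
    print(values)                  # [1. 1.] -- the only eigenvalue is 1
    print(vectors[:, 0])           # [1. 0.] -- the horizontal direction is unchanged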
For a symmetric matrix, eigenvectors associated with distinct eigenvalues are orthogonal to each other: they describe alternative directions, interpretable as topics.
Eigenvalues express the strength of the affiliation to a specific topic.
Simple all-pairs extraction via NumPy's numpy.linalg:
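A minimal sketch of that extraction (the matrix A is a placeholder example; np.linalg.eig returns every eigenvalue together with its eigenvector):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # values[i] is the eigenvalue paired with the column vectors[:, i].
    values, vectors = np.linalg.eig(A)
    print(values)    # [3. 1.]
    print(vectors)   # columns are the (normalized) eigenvectors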
Caveat: the e-vectors come normalized (unit Euclidean norm): hence multiply them by any non-zero scalar to recover the other solutions of Ax = λx.
General case: the ℓp norm, ‖x‖p = (Σᵢ |xᵢ|^p)^(1/p); the Euclidean norm is the special case p = 2.
The Frobenius norm is its analogue for matrices: ‖A‖F = (Σᵢ,ⱼ aᵢⱼ²)^(1/2).
The unit or normalized vector of x, x̂ = x / ‖x‖, keeps the direction of x while its norm is set to 1.
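The same quantities in NumPy (a small sketch with arbitrary example data):

    import numpy as np

    x = np.array([3.0, 4.0])
    print(np.linalg.norm(x))             # Euclidean (p = 2) norm: 5.0
    print(np.linalg.norm(x, ord=1))      # l1 norm: 7.0

    x_hat = x / np.linalg.norm(x)        # unit vector: same direction, norm 1
    print(x_hat, np.linalg.norm(x_hat))  # [0.6 0.8] 1.0

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    print(np.linalg.norm(A, 'fro'))      # Frobenius norm: sqrt(30)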
Handbook solution: solve the equivalent characteristic equation det(A − λI) = 0.
A non-zero x is associated to a solution λ of the homogeneous system (A − λI)x = 0.
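In NumPy the handbook route can be sketched with np.poly, which returns the coefficients of the characteristic polynomial of a square matrix (the matrix A is an arbitrary example):

    import numpy as np

    A = np.array([[4.0, 2.0],
                  [1.0, 3.0]])

    coeffs = np.poly(A)      # det(A - lambda*I), highest degree first: [1. -7. 10.]
    print(np.roots(coeffs))  # [5. 2.] -- the eigenvalues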
In Numerical Analysis, the classic recipe is to find the eigenvalues first, as roots of the characteristic polynomial, and the eigenvectors afterwards.
At Google scale, this is unfeasible.
Ideas:
find the e-vectors first, with an iterated method;
interleave iteration with control on the expansion in value, renormalizing at each step: x_{k+1} = A·xₖ / ‖A·xₖ‖,
until a fix point x* is reached: x* is the principal eigenvector, and λ₁ = x*ᵀ·A·x* the corresponding eigenvalue (see the sketch below).
Now, eliminate the contribution of the first eigenpair: A* = A − λ₁·x*·x*ᵀ.
Repeat the iteration on A* to extract the second eigenpair, and so on.
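A compact sketch of the whole scheme (a minimal illustration, assuming a symmetric matrix so that deflation behaves well; the starting vector, tolerance, and example matrix are arbitrary choices):

    import numpy as np

    def power_iteration(A, tol=1e-10, max_iter=10_000, seed=0):
        # Start from a random direction (almost surely not orthogonal
        # to the principal eigenvector).
        x = np.random.default_rng(seed).normal(size=A.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(max_iter):
            y = A @ x
            y /= np.linalg.norm(y)            # control the expansion in value
            done = np.linalg.norm(y - x) < tol  # fix point reached?
            x = y
            if done:
                break
        lam = x @ A @ x                        # Rayleigh quotient: the eigenvalue
        return lam, x

    A = np.array([[3.0, 1.0],
                  [1.0, 3.0]])

    lam1, x1 = power_iteration(A)
    print(lam1, x1)   # ~4.0, direction (1, 1)/sqrt(2) up to sign

    # Eliminate the first eigenpair, then repeat on the deflated matrix.
    A_star = A - lam1 * np.outer(x1, x1)
    lam2, x2 = power_iteration(A_star)
    print(lam2, x2)   # ~2.0, direction (1, -1)/sqrt(2) up to sign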