Linear Algebra: Eigenvalues and Eigenvectors

Hello OP,

Eigenvalues can be understood as a matrix's fingerprints, but I prefer to think of diagonalization as putting on a pair of "glasses". When you diagonalize a matrix (A = P\*D\*P\^-1, where D is a diagonal matrix containing the eigenvalues and the columns of P are the eigenvectors), you get to see the matrix A in its simplest form, which is D. However, you can only see that through the glasses, which is P. If A has a complete set of eigenvectors (that is, they are all linearly independent), then the columns of P form a basis, and that basis is not unique.
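A minimal sketch of the "glasses" idea in NumPy (the matrix here is just a made-up symmetric example, where the eigenvector matrix is orthogonal so P\^-1 is simply P transposed):

```python
import numpy as np

# A made-up symmetric matrix for illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh handles symmetric matrices: eigenvalues ascending, orthonormal eigenvectors
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)  # the "simplest form" of A

# Put the glasses back on: A = P * D * P^-1 (P^-1 = P.T here since P is orthogonal)
A_reconstructed = P @ D @ P.T
print(np.allclose(A, A_reconstructed))  # True
```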

In terms of coordinate systems, when you diagonalize a matrix (A=P\*D\*P\^-1), you are rotating the canonical axes so that A can be represented as a diagonal matrix. So if you have an ellipse, the eigenvectors give the basis that best simplifies your ellipse equation: after diagonalizing, you get an ellipse whose axes and vertices line up with the coordinate axes.
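Sticking with the ellipse, here is a small illustration (the quadratic form 5x\^2 + 8xy + 5y\^2 = 1 is an arbitrary example I picked): in the rotated coordinates given by the eigenvectors, the cross term disappears and the semi-axes can be read straight off the eigenvalues.

```python
import numpy as np

# Example ellipse 5x^2 + 8xy + 5y^2 = 1, written as v.T @ A @ v = 1
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])

eigvals, P = np.linalg.eigh(A)

# In the rotated coordinates w = P.T @ v the equation becomes
# eigvals[0]*w1^2 + eigvals[1]*w2^2 = 1, so the semi-axes are 1/sqrt(eigvals)
semi_axes = 1.0 / np.sqrt(eigvals)
print(semi_axes)  # approximately [1.0, 0.333]
```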

In chemical engineering, in the context of reaction stoichiometry, the eigenvectors associated with the zero eigenvalues (the nullspace or kernel) represent reaction invariants: linear combinations of species that do not change over time. In algebraic problems, eigenvalues indicate how fast a solver will step toward a solution. When one eigenvalue is much smaller or larger than the others, the problem is called ill-conditioned, and it is very sensitive to small perturbations (such as measurement errors). When that same situation shows up in ODEs or DAEs, you get a stiff problem, which will be challenging to solve. In an ODE system with 2 variables, the eigenvalues tell you about the stability of the system and its dynamics. In the context of optimization and mathematical programming, the eigenvalues of the Hessian of the Lagrangian tell you whether the problem is convex (has a unique solution), which is very important.
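To make the reaction-invariant idea concrete, here is a toy sketch (the single reaction A + B -> C is an assumption of mine, not from any real system). Invariants are combinations y with y\^T N = 0, which you can pull out of the left nullspace of the stoichiometric matrix via the SVD:

```python
import numpy as np

# Toy stoichiometric matrix: one reaction A + B -> C
# Rows are species (A, B, C), columns are reactions
N = np.array([[-1.0],
              [-1.0],
              [ 1.0]])

# Invariants y satisfy y.T @ N = 0, i.e. y lies in the left nullspace of N.
# The columns of U beyond the rank of N span that space.
U, s, Vt = np.linalg.svd(N)
rank = int(np.sum(s > 1e-10))
invariants = U[:, rank:]  # each column is a conserved combination of concentrations

# Check: these combinations are untouched by the reaction
print(np.allclose(invariants.T @ N, 0))  # True
```

For this reaction, the invariant space contains combinations like c\_A + c\_C and c\_B + c\_C, which stay constant no matter how far the reaction proceeds.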


This was just a brief, informal chat about eigenvalues and eigenvectors. This particular subject is very, very important.
