Saturday, April 13, 2024

Matrix differential equation

A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form with a matrix relating the functions to their derivatives.

For example, a first-order matrix ordinary differential equation is

$$\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\mathbf{x}(t)$$

where $\mathbf{x}(t)$ is an $n \times 1$ vector of functions of an underlying variable $t$, $\dot{\mathbf{x}}(t)$ is the vector of first derivatives of these functions, and $\mathbf{A}(t)$ is an $n \times n$ matrix of coefficients.

In the case where $\mathbf{A}$ is constant and has $n$ linearly independent eigenvectors, this differential equation has the following general solution,

$$\mathbf{x}(t) = c_1 e^{\lambda_1 t} \mathbf{u}_1 + c_2 e^{\lambda_2 t} \mathbf{u}_2 + \cdots + c_n e^{\lambda_n t} \mathbf{u}_n,$$

where $\lambda_1, \lambda_2, \dots, \lambda_n$ are the eigenvalues of $\mathbf{A}$; $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ are the respective eigenvectors of $\mathbf{A}$; and $c_1, c_2, \dots, c_n$ are constants.
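To make this concrete, here is a minimal numerical sketch (Python with NumPy; the 2×2 matrix happens to be the one used in the worked example later in this article) that builds the general solution from the eigenpairs and fixes the constants from an initial condition:

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])   # constant matrix with independent eigenvectors
x0 = np.array([1.0, 1.0])     # initial condition x(0)

lam, U = np.linalg.eig(A)     # eigenvalues lam[i], eigenvectors in columns of U
c = np.linalg.solve(U, x0)    # constants c_i from x(0) = sum_i c_i u_i

def x(t):
    # x(t) = sum_i c_i e^{lam_i t} u_i  (columns of U scaled, then summed)
    return (U * np.exp(lam * t)) @ c

print(x(1.0))
```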

More generally, if $\mathbf{A}(t)$ commutes with its integral $\int_a^t \mathbf{A}(s)\,ds$ then the Magnus expansion reduces to leading order, and the general solution to the differential equation is

$$\mathbf{x}(t) = e^{\int_a^t \mathbf{A}(s)\,ds}\,\mathbf{c},$$

where $\mathbf{c}$ is an $n \times 1$ constant vector.
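As an illustration of the commuting case (an example of ours, not from the article): if $\mathbf{A}(t) = f(t)\mathbf{M}$ for a fixed matrix $\mathbf{M}$, then $\mathbf{A}(t)$ commutes with its integral and the exponential formula above is exact. A quick sketch checking it against direct numerical integration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# A(t) = cos(t) * M commutes with its integral sin(t) * M.
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])
t_end = 2.0

closed_form = expm(np.sin(t_end) * M) @ x0   # x(t) = e^{int_0^t A(s) ds} x(0)
numeric = solve_ivp(lambda t, x: np.cos(t) * (M @ x),
                    (0.0, t_end), x0, rtol=1e-10).y[:, -1]
print(np.allclose(closed_form, numeric, atol=1e-7))   # True
```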

By use of the Cayley–Hamilton theorem and Vandermonde-type matrices, this formal matrix exponential solution may be reduced to a simple form. Below, this solution is displayed in terms of Putzer's algorithm.

Stability and steady state of the matrix system

The matrix equation

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{b}$$

with $n \times 1$ parameter constant vector $\mathbf{b}$ is stable if and only if all eigenvalues of the constant matrix $\mathbf{A}$ have a negative real part.

The steady state $\mathbf{x}^*$ to which it converges if stable is found by setting

$$\dot{\mathbf{x}}(t) = \mathbf{0},$$

thus yielding

$$\mathbf{x}^* = -\mathbf{A}^{-1}\mathbf{b},$$

assuming $\mathbf{A}$ is invertible.

Thus, the original equation can be written in the homogeneous form in terms of deviations from the steady state,

$$\dot{\mathbf{x}}(t) = \mathbf{A}\left[\mathbf{x}(t) - \mathbf{x}^*\right].$$

An equivalent way of expressing this is that $\mathbf{x}^*$ is a particular solution to the inhomogeneous equation, while all solutions are in the form

$$\mathbf{x}_h + \mathbf{x}^*,$$

with $\mathbf{x}_h$ a solution to the homogeneous equation ($\mathbf{b} = \mathbf{0}$).

Stability of the two-state-variable case

In the n = 2 case (with two state variables), the stability conditions that the two eigenvalues of the transition matrix A each have a negative real part are equivalent to the conditions that the trace of A be negative and its determinant be positive.
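Both criteria are easy to verify numerically. The sketch below (with a made-up stable matrix $A$ and vector $b$ of our own) checks the eigenvalue and trace/determinant tests and also computes the steady state $x^* = -A^{-1}b$ from the previous section:

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
b = np.array([1.0, 2.0])

# Two equivalent 2x2 stability tests:
print(np.all(np.linalg.eigvals(A).real < 0))       # True
print(np.trace(A) < 0 and np.linalg.det(A) > 0)    # True

# Steady state of x' = Ax + b (A is invertible here):
x_star = -np.linalg.solve(A, b)
print(np.allclose(A @ x_star + b, 0.0))            # True
```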

Solution in matrix form

The formal solution of $\dot{\mathbf{x}}(t) = \mathbf{A}\left[\mathbf{x}(t) - \mathbf{x}^*\right]$ has the matrix exponential form

$$\mathbf{x}(t) = \mathbf{x}^* + e^{\mathbf{A}t}\left[\mathbf{x}(0) - \mathbf{x}^*\right],$$

evaluated using any of a multitude of techniques.
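For instance, a minimal sketch using SciPy's expm, reusing the made-up stable system from the previous snippet:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
b = np.array([1.0, 2.0])
x0 = np.array([5.0, -4.0])

x_star = -np.linalg.solve(A, b)

def x(t):
    # x(t) = x* + e^{At} (x(0) - x*)
    return x_star + expm(A * t) @ (x0 - x_star)

print(x(0.0))    # equals x0
print(x(50.0))   # approaches x*, since A is stable
```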

Putzer Algorithm for computing $e^{\mathbf{A}t}$

Given a matrix $\mathbf{A}$ with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$,

$$e^{\mathbf{A}t} = \sum_{j=0}^{n-1} r_{j+1}(t)\, \mathbf{P}_j,$$

where

$$\mathbf{P}_0 = \mathbf{I}, \qquad \mathbf{P}_j = \prod_{k=1}^{j} \left(\mathbf{A} - \lambda_k \mathbf{I}\right) = \mathbf{P}_{j-1}\left(\mathbf{A} - \lambda_j \mathbf{I}\right), \quad j = 1, 2, \dots, n-1,$$

$$\dot{r}_1 = \lambda_1 r_1, \qquad r_1(0) = 1,$$

$$\dot{r}_j = \lambda_j r_j + r_{j-1}, \qquad r_j(0) = 0, \quad j = 2, 3, \dots, n.$$

The equations for $r_j(t)$ are simple first-order inhomogeneous ODEs.

Note that the algorithm does not require that the matrix $\mathbf{A}$ be diagonalizable, and it bypasses the complexities of the Jordan canonical form normally utilized.
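Below is a sketch of Putzer's algorithm in Python (all names are ours). The $r_j$ ODEs are integrated numerically with solve_ivp for brevity, though they could equally be solved in closed form; as the final lines show, the algorithm handles a non-diagonalizable Jordan block with no special treatment:

```python
import numpy as np
from scipy.integrate import solve_ivp

def putzer_expm(A, t):
    """Compute e^{At} as sum_{j=0}^{n-1} r_{j+1}(t) P_j (Putzer's algorithm)."""
    lam = np.linalg.eigvals(A)       # any fixed ordering of eigenvalues works
    n = A.shape[0]

    # P_0 = I,  P_j = P_{j-1} (A - lambda_j I)
    P = [np.eye(n, dtype=complex)]
    for j in range(1, n):
        P.append(P[-1] @ (A - lam[j - 1] * np.eye(n)))

    # r_1' = lam_1 r_1, r_1(0) = 1;  r_j' = lam_j r_j + r_{j-1}, r_j(0) = 0
    def rhs(_, r):
        dr = lam * r
        dr[1:] += r[:-1]
        return dr

    r0 = np.zeros(n, dtype=complex)
    r0[0] = 1.0
    r = solve_ivp(rhs, (0.0, t), r0, rtol=1e-10, atol=1e-12).y[:, -1]

    out = sum(r[j] * P[j] for j in range(n))
    return out.real if np.isrealobj(A) else out  # imaginary part is round-off for real A

# A non-diagonalizable Jordan block: e^{At} = e^{2t} [[1, t], [0, 1]]
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
print(putzer_expm(A, 1.0))
```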

Deconstructed example of a matrix ordinary differential equation

A first-order homogeneous matrix ordinary differential equation in two functions $x(t)$ and $y(t)$, when taken out of matrix form, has the following form:

$$\frac{dx}{dt} = a_1 x + b_1 y, \qquad \frac{dy}{dt} = a_2 x + b_2 y,$$

where $a_1$, $a_2$, $b_1$, and $b_2$ may be any arbitrary scalars.

Higher-order matrix ODEs may possess a much more complicated form.

Solving deconstructed matrix ordinary differential equations

The process of solving the above equations and finding the required functions of this particular order and form consists of three main steps:

1. finding the eigenvalues,
2. finding the eigenvectors, and
3. finding the needed functions.

The final, third step is usually done by plugging the values calculated in the two previous steps into a specialized general-form equation, discussed later in this article.

Solved example of a matrix ODE

To solve a matrix ODE according to the three steps detailed above, using simple matrices in the process, let us find, say, a function $x$ and a function $y$, both in terms of the single independent variable $t$, in the following homogeneous linear differential equation of the first order,

$$\frac{dx}{dt} = 3x - 4y, \qquad \frac{dy}{dt} = 4x - 7y.$$

To solve this particular ordinary differential equation system, at some point in the solution process, we shall need a set of two initial values (corresponding to the two state variables at the starting point). In this case, let us pick x(0) = y(0) = 1.

First step

The first step, already mentioned above, is finding the eigenvalues of $\mathbf{A}$ in

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$$

The derivative notation $x'$ seen in one of the vectors above is known as Lagrange's notation, first introduced by Joseph-Louis Lagrange. It is equivalent to the derivative notation $dx/dt$ used in the previous equation, known as Leibniz's notation, honoring the name of Gottfried Leibniz.

Once the coefficients of the two variables have been written in the matrix form $\mathbf{A}$ displayed above, one may evaluate the eigenvalues. To that end, one finds the determinant of the matrix that is formed when an identity matrix $\mathbf{I}$, multiplied by some constant $\lambda$, is subtracted from the above coefficient matrix, yielding its characteristic polynomial,

$$\det\left( \begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right),$$

and solves for its zeroes.

Applying further simplification and basic rules of matrix addition yields

$$\det \begin{bmatrix} 3-\lambda & -4 \\ 4 & -7-\lambda \end{bmatrix}.$$

Applying the rules of finding the determinant of a single 2×2 matrix yields the following elementary quadratic equation,

$$(3-\lambda)(-7-\lambda) + 16 = 0,$$

which may be reduced further to get a simpler version of the above,

$$\lambda^2 + 4\lambda - 5 = 0.$$

Now finding the two roots, $\lambda_1 = 1$ and $\lambda_2 = -5$, of the given quadratic equation by applying the factorization method yields

$$\lambda^2 + 4\lambda - 5 = (\lambda - 1)(\lambda + 5) = 0.$$

The values $\lambda_1 = 1$ and $\lambda_2 = -5$ calculated above are the required eigenvalues of $\mathbf{A}$. In some cases, say other matrix ODEs, the eigenvalues may be complex, in which case the following step of the solving process, as well as the final form and the solution, may dramatically change.
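These hand computations are easy to confirm numerically, either from the quadratic's coefficients or straight from the coefficient matrix (a quick sketch; NumPy returns the values in its own order):

```python
import numpy as np

print(np.roots([1.0, 4.0, -5.0]))   # roots of lambda^2 + 4*lambda - 5
print(np.linalg.eigvals(np.array([[3.0, -4.0],
                                  [4.0, -7.0]])))   # 1 and -5, in some order
```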

Second step

As mentioned above, this step involves finding the eigenvectors of A from the information originally provided.

For each of the eigenvalues calculated, we have an individual eigenvector. For the first eigenvalue, which is $\lambda_1 = 1$, we have

$$\begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = 1 \begin{bmatrix} \alpha \\ \beta \end{bmatrix}.$$

Simplifying the above expression by applying basic matrix multiplication rules yields

$$3\alpha - 4\beta = \alpha \quad\Longrightarrow\quad \alpha = 2\beta.$$

All of these calculations have been done only to obtain the last expression, which in our case is $\alpha = 2\beta$. Now taking some arbitrary, conveniently small value for either $\alpha$ or $\beta$ (in most cases, it does not really matter which), we substitute it into $\alpha = 2\beta$. Doing so produces a simple vector, which is the required eigenvector for this particular eigenvalue. In our case, we pick $\alpha = 2$, which in turn determines that $\beta = 1$, and, using the standard vector notation, our vector looks like

$$\begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

Performing the same operation using the second eigenvalue we calculated, which is $\lambda_2 = -5$, we obtain our second eigenvector. The process of working out this vector is not shown, but the final result is

$$\begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
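A one-line numerical cross-check (ours) that each vector is indeed an eigenvector for its eigenvalue:

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])
v1, v2 = np.array([2.0, 1.0]), np.array([1.0, 2.0])
print(np.allclose(A @ v1, 1.0 * v1))    # True:  A v1 = 1 * v1
print(np.allclose(A @ v2, -5.0 * v2))   # True:  A v2 = -5 * v2
```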

Third step

This final step finds the required functions that are 'hidden' behind the derivatives given to us originally. There are two functions, because our differential equations deal with two variables.

The equation which involves all the pieces of information that we have previously found has the following form:

$$\begin{bmatrix} x \\ y \end{bmatrix} = A e^{\lambda_1 t} \mathbf{u}_1 + B e^{\lambda_2 t} \mathbf{u}_2.$$

Substituting the values of eigenvalues and eigenvectors yields

$$\begin{bmatrix} x \\ y \end{bmatrix} = A e^{t} \begin{bmatrix} 2 \\ 1 \end{bmatrix} + B e^{-5t} \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$

Applying further simplification,

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} A e^{t} \\ B e^{-5t} \end{bmatrix}.$$

Simplifying further and writing the equations for functions $x$ and $y$ separately,

$$x = 2A e^{t} + B e^{-5t}, \qquad y = A e^{t} + 2B e^{-5t}.$$

The above equations are, in fact, the general functions sought, but they are in their general form (with unspecified values of $A$ and $B$), whilst we want to actually find their exact forms and solutions. So now we consider the problem's given initial conditions (the problem including given initial conditions is the so-called initial value problem). Suppose we are given $x(0) = y(0) = 1$, which plays the role of starting point for our ordinary differential equation; application of these conditions specifies the constants $A$ and $B$. As we see from the conditions, when $t = 0$, the left sides of the above equations equal 1. Thus we may construct the following system of linear equations,

$$2A + B = 1, \qquad A + 2B = 1.$$

Solving these equations, we find that both constants $A$ and $B$ equal $1/3$. Therefore, substituting these values into the general form of these two functions specifies their exact forms,

$$x = \frac{2}{3} e^{t} + \frac{1}{3} e^{-5t}, \qquad y = \frac{1}{3} e^{t} + \frac{2}{3} e^{-5t},$$

the two functions sought.
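As a sanity check (ours, not part of the original derivation), we can integrate the system numerically and compare against these closed forms at, say, $t = 1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def closed_form(t):
    # x = (2/3) e^t + (1/3) e^{-5t},  y = (1/3) e^t + (2/3) e^{-5t}
    return np.array([2.0 * np.exp(t) + np.exp(-5.0 * t),
                     np.exp(t) + 2.0 * np.exp(-5.0 * t)]) / 3.0

rhs = lambda t, v: [3.0 * v[0] - 4.0 * v[1],
                    4.0 * v[0] - 7.0 * v[1]]
numeric = solve_ivp(rhs, (0.0, 1.0), [1.0, 1.0], rtol=1e-10).y[:, -1]
print(np.allclose(numeric, closed_form(1.0), atol=1e-7))   # True
```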

Using matrix exponentiation

The above problem could have been solved with a direct application of the matrix exponential. That is, we can say that

$$\begin{bmatrix} x \\ y \end{bmatrix} = e^{\mathbf{A}t} \begin{bmatrix} x(0) \\ y(0) \end{bmatrix}.$$

Given that

$$e^{\mathbf{A}t} = \frac{1}{3}\begin{bmatrix} 4e^{t} - e^{-5t} & 2e^{-5t} - 2e^{t} \\ 2e^{t} - 2e^{-5t} & 4e^{-5t} - e^{t} \end{bmatrix}$$

(which can be computed using any suitable tool, such as MATLAB's expm function, or by performing matrix diagonalisation and using the property that the matrix exponential of a diagonal matrix equals the diagonal matrix of the element-wise exponentials of its diagonal entries),

the final result is

$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2e^{t} + e^{-5t} \\ e^{t} + 2e^{-5t} \end{bmatrix}.$$

This is the same as the eigenvector approach shown before.
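For completeness, a short sketch confirming that SciPy's expm (playing the role of MATLAB's expm mentioned above) reproduces the closed-form exponential and the same solution:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])
x0 = np.array([1.0, 1.0])
t = 1.0

# Closed-form e^{At} derived above:
E = np.array([[4.0 * np.exp(t) - np.exp(-5.0 * t), 2.0 * np.exp(-5.0 * t) - 2.0 * np.exp(t)],
              [2.0 * np.exp(t) - 2.0 * np.exp(-5.0 * t), 4.0 * np.exp(-5.0 * t) - np.exp(t)]]) / 3.0

print(np.allclose(expm(A * t), E))   # True
print(expm(A * t) @ x0)              # matches the eigenvector solution at t = 1
```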
