
Linear Algebra: Special Matrices

Let’s look at some special matrices and determinants that are useful in economic analysis.

The Jacobian

The Jacobian (named after the German mathematician Carl Gustav Jacob Jacobi, 1804–1851) is generally used in conjunction with partial derivatives to provide an easy test for the existence of functional dependence (linear and nonlinear). A Jacobian determinant $|\mathbf{J}|$ is composed of all the first-order partial derivatives of a system of equations, arranged in ordered sequence. Given

$$
y_{1} = f_{1}(x_{1},x_{2},x_{3}),\qquad y_{2} = f_{2}(x_{1},x_{2},x_{3}),\qquad y_{3} = f_{3}(x_{1},x_{2},x_{3})
$$

Then

$$
|\mathbf{J}| = \begin{vmatrix} \frac{\partial y_{1}}{\partial x_{1}} & \frac{\partial y_{1}}{\partial x_{2}} & \frac{\partial y_{1}}{\partial x_{3}} \\[4pt] \frac{\partial y_{2}}{\partial x_{1}} & \frac{\partial y_{2}}{\partial x_{2}} & \frac{\partial y_{2}}{\partial x_{3}} \\[4pt] \frac{\partial y_{3}}{\partial x_{1}} & \frac{\partial y_{3}}{\partial x_{2}} & \frac{\partial y_{3}}{\partial x_{3}} \end{vmatrix}
$$

For example, say you have

$$
y_{1} = 5x_{1} + 3x_{2},\qquad y_{2} = 25x_{1}^{2} + 30x_{1}x_{2} + 9x_{2}^{2}
$$

Then taking the first‑order partials gives

$$
\frac{\partial y_1}{\partial x_1} = 5,\qquad \frac{\partial y_1}{\partial x_2} = 3,\qquad \frac{\partial y_2}{\partial x_1} = 50x_1 + 30x_2,\qquad \frac{\partial y_2}{\partial x_2} = 30x_1 + 18x_2
$$

And the Jacobian is

$$
|\mathbf{J}| = \begin{vmatrix} 5 & 3 \\ 50x_1 + 30x_2 & 30x_1 + 18x_2 \end{vmatrix}
$$

The determinant of the Jacobian matrix is

$$
|\mathbf{J}| = 5(30x_1 + 18x_2) - 3(50x_1 + 30x_2) = 0
$$

Since $|\mathbf{J}| = 0$, there is functional dependence between the equations.

Note: $(5x_{1} + 3x_{2})^{2} = 25x_{1}^{2} + 30x_{1}x_{2} + 9x_{2}^{2}$, i.e. $y_{2} = y_{1}^{2}$.
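We can check this functional-dependence test numerically. The sketch below (assuming SymPy is available; it is not part of the original notes) builds the Jacobian of the example system and confirms its determinant vanishes identically:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
y1 = 5*x1 + 3*x2
y2 = 25*x1**2 + 30*x1*x2 + 9*x2**2  # = y1**2, so dependence is expected

# Matrix of first-order partials and its determinant
J = sp.Matrix([y1, y2]).jacobian([x1, x2])
print(J)
print(sp.simplify(J.det()))  # 0 -> functional dependence
```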

The Hessian

Given that the first-order conditions $z_{x} = z_{y} = 0$ are met, a sufficient condition for a multivariate function $z = f(x,y)$ to be at an optimum is

  1. $z_{xx}, z_{yy} > 0$ for a minimum; $z_{xx}, z_{yy} < 0$ for a maximum.

  2. $z_{xx} z_{yy} > (z_{xy})^{2}$.

A convenient test for this second‑order condition is provided by the Hessian.
A Hessian $|\mathbf{H}|$ is a determinant composed of all the second-order partial derivatives, with the second-order direct partials on the principal diagonal and the second-order cross partials off the principal diagonal. That is,

$$
|\mathbf{H}| = \begin{vmatrix} z_{xx} & z_{xy} \\ z_{yx} & z_{yy} \end{vmatrix}
$$

where (by Young’s Theorem) $z_{xy} = z_{yx}$.

If the first element on the principal diagonal, a.k.a. the first principal minor, $|H_{1}| = z_{xx}$, is positive, and the second principal minor

$$
|H_{2}| = \begin{vmatrix} z_{xx} & z_{xy} \\ z_{yx} & z_{yy} \end{vmatrix} = z_{xx} z_{yy} - (z_{xy})^{2} > 0,
$$

then the second-order conditions for a minimum are met. That is, when $|H_{1}| > 0$ and $|H_{2}| > 0$, the Hessian $|\mathbf{H}|$ is said to be positive definite, and a positive definite Hessian fulfills the second-order conditions for a minimum.

If, however, the first principal minor $|H_{1}| < 0$ while the second principal minor remains positive, then the second-order conditions for a maximum are met. That is, when $|H_{1}| < 0$ and $|H_{2}| > 0$, the Hessian $|\mathbf{H}|$ is said to be negative definite and fulfills the second-order conditions for a maximum.

Let’s take a numerical example:

Given

$$
z = f(x,y) = 2x^{2} - xy + 2y^{2} - 5x - 6y + 20,
$$

then

$$
z_x = 4x - y - 5,\quad z_y = -x + 4y - 6,\quad z_{xx} = 4,\quad z_{yy} = 4,\quad z_{xy} = z_{yx} = -1.
$$

The Hessian is therefore

$$
|\mathbf{H}| = \begin{vmatrix} 4 & -1 \\ -1 & 4 \end{vmatrix}
$$

We have $|H_{1}| = 4 > 0$ and $|H_{2}| = (4)(4) - (-1)^2 = 16 - 1 = 15 > 0$, i.e. both principal minors are positive. Hence the Hessian is positive definite and the function $z$ is characterized by a minimum at the critical values (can you find these?).
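The critical values and the Hessian test can be verified in a few lines (a sketch assuming SymPy; the critical point it reports is not stated in the original text, so treat it as a check, not a given):

```python
import sympy as sp

x, y = sp.symbols("x y")
z = 2*x**2 - x*y + 2*y**2 - 5*x - 6*y + 20

# First-order conditions z_x = z_y = 0 give the critical values
sol = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y])
print(sol)  # {x: 26/15, y: 29/15}

# Hessian of second-order partials and its principal minors
H = sp.hessian(z, (x, y))
H1 = H[0, 0]
H2 = H.det()
print(H1, H2)  # 4 and 15, both positive -> minimum
```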

The Discriminant

Determinants can be used to test for positive and negative definiteness of any quadratic form. The determinant of a quadratic form is called a discriminant $|\mathbf{D}|$. Given the quadratic form

$$
z = ax^{2} + bxy + cy^{2},
$$

the determinant is formed by placing the coefficients of the squared terms on the principal diagonal and dividing the coefficients of the non‑squared terms equally between the off‑diagonal positions. Hence, we have

$$
|\mathbf{D}| = \begin{vmatrix} a & b/2 \\ b/2 & c \end{vmatrix}
$$

We then evaluate the principal minors like we did for the Hessian test, where

$$
|D_{1}| = a \quad \text{and} \quad |D_{2}| = \begin{vmatrix} a & b/2 \\ b/2 & c \end{vmatrix} = ac - \frac{b^{2}}{4}.
$$

If $|D_{1}|, |D_{2}| > 0$, $|\mathbf{D}|$ is positive definite and $z$ is positive for all nonzero values of $x$ and $y$. If $|D_{1}| < 0$ and $|D_{2}| > 0$, $|\mathbf{D}|$ is negative definite and $z$ is negative for all nonzero values of $x$ and $y$. If $|D_{2}| < 0$, $z$ is not sign definite and may assume both positive and negative values.

Let’s take an example to test for sign definiteness of the following quadratic form

$$
z = 2x^{2} + 5xy + 8y^{2}.
$$

We can easily form the discriminant

$$
|\mathbf{D}| = \begin{vmatrix} 2 & 2.5 \\ 2.5 & 8 \end{vmatrix}
$$

Then evaluating the principal minors gives

$$
|D_{1}| = 2 > 0, \qquad |D_{2}| = \begin{vmatrix} 2 & 2.5 \\ 2.5 & 8 \end{vmatrix} = 16 - 6.25 = 9.75 > 0.
$$

Hence, $z$ is positive definite, meaning that it will be greater than zero for all nonzero values of $x$ and $y$.
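The discriminant test is easy to mechanize. Here is a minimal sketch (the helper name `discriminant_sign` is our own, not from the text) that classifies $z = ax^2 + bxy + cy^2$ by its principal minors:

```python
from fractions import Fraction

def discriminant_sign(a, b, c):
    """Classify the quadratic form z = a*x**2 + b*x*y + c*y**2."""
    d1 = a                              # |D1|
    d2 = a * c - Fraction(b * b, 4)     # |D2| = ac - b^2/4
    if d1 > 0 and d2 > 0:
        return "positive definite"
    if d1 < 0 and d2 > 0:
        return "negative definite"
    return "not sign definite"

print(discriminant_sign(2, 5, 8))   # positive definite
print(discriminant_sign(-2, 4, -4)) # negative definite
```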

Try these ^^

By inspecting its discriminant, can you tell whether each function below is positive or negative for all nonzero values of its variables?

(1) $f(x,y) = 2x^2 + 5xy + 8y^2$
(2) $f(x,y) = -3x^2 + 4xy - 4y^2$
(3) $f(x,y,z) = 5x^2 - 6xy + 3y^2 - 2yz + 8z^2 - 3xz$

The Quadratic Form

A quadratic form is defined as a polynomial expression in which each component term has a uniform degree.

Here are some examples:

$$
6x^{2} - 2xy + 3y^{2} \quad \text{is a quadratic form in 2 variables,} \\
x^{2} + 2xy + 4xz + 2yz + y^{2} + z^{2} \quad \text{is a quadratic form in 3 variables.}
$$

Determining the sign definiteness of a quadratic form lets us make statements about the optimum value of a function, namely whether it is a minimum or a maximum.

More generally, a quadratic form in $n$ variables $(x_{1},x_{2},\ldots,x_{n})$ can be written as $\mathbf{x}^{\prime}\mathbf{A}\mathbf{x}$ where

$\mathbf{x}^{\prime}$ is a row vector $[x_{1},x_{2},\dots,x_{n}]$ and
$\mathbf{A}$ is an $n\times n$ matrix of scalar elements.

To see this more clearly, let

$$
A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}
$$

Then the quadratic form is:

$$
\begin{aligned} Q(x, y) &= \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2x + y & x + 3y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \\ &= (2x + y)x + (x + 3y)y = 2x^2 + 2xy + 3y^2 \end{aligned}
$$

Furthermore, completing the square gives:

$$
\begin{aligned} Q(x, y) &= 2x^2 + 2xy + 3y^2 = 2(x^2 + xy) + 3y^2 \\ &= 2\left(x + \frac{y}{2}\right)^2 - \frac{1}{2}y^2 + 3y^2 = 2\left(x + \frac{y}{2}\right)^2 + \frac{5}{2}y^2 \end{aligned}
$$

Since both squared terms have positive coefficients, $Q(x, y) > 0$ for all $(x, y) \neq (0, 0)$, so the quadratic form is positive definite.
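The same conclusion can be checked numerically: evaluating $Q(x,y)=\mathbf{x}^{\prime}\mathbf{A}\mathbf{x}$ at sample points, or checking that all eigenvalues of the symmetric matrix $A$ are positive (a sketch with NumPy; the eigenvalue criterion anticipates the eigenvalue section later in these notes):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])

# Q(x) = x' A x should be positive at every nonzero sample point
for v in ([1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [-2.0, 0.5]):
    x = np.array(v)
    print(x @ A @ x)

# For a symmetric matrix, positive definite <=> all eigenvalues > 0
print(np.linalg.eigvalsh(A))
```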

Quadratic forms are particularly useful in determining the concavity or convexity of a differentiable function.

For a function $y = f(x_{1},x_{2},\dots,x_{n})$, we can form the Hessian consisting of second-order partial derivatives and construct a quadratic form as $\mathbf{x}^{\prime}\mathbf{H}\mathbf{x}$. Then

a) The function $y$ is strictly convex if the quadratic form is positive (implying that the Hessian is positive definite, i.e., the principal minors are all positive).
b) The function $y$ is strictly concave if the quadratic form is negative (that is, the Hessian is negative definite, i.e., the principal minors alternate in sign, starting negative).

Let’s take an example to see this more clearly.

Say we have

$$
z = x^{2} + y^{2}.
$$

Then

$$
z_{x} = 2x,\quad z_{xx} = 2,\quad z_{xy} = 0,\qquad z_{y} = 2y,\quad z_{yx} = 0,\quad z_{yy} = 2.
$$

Then

$$
\mathbf{x}^{\prime}\mathbf{H}\mathbf{x} = z_{xx}x^{2} + (z_{xy} + z_{yx})xy + z_{yy}y^{2} = 2x^{2} + 2y^{2} > 0.
$$

Hence the function $z = x^{2} + y^{2}$ is characterized by a minimum at its optimal value and is therefore a strictly convex function. And it is easy to see that the Hessian is positive definite (i.e. the quadratic form is always positive for any nonzero values of $x$ and $y$).

Here’s yet another example:

Say we have a quadratic form in 2 variables

$$
z = 8x^{2} + 6xy + 2y^{2}.
$$

This can be rearranged as

$$
z = 8x^{2} + 3xy + 3xy + 2y^{2}.
$$

We can write this as

$$
\begin{aligned} \mathbf{x}^{\prime}\mathbf{H}\mathbf{x} &= z_{xx}x^{2} + z_{yy}y^{2} + (z_{xy} + z_{yx})xy \\ &= 8x^{2} + 2y^{2} + (3 + 3)xy = 8x^{2} + 2y^{2} + 6xy \end{aligned}
$$

The Hessian is

$$
\mathbf{H} = \begin{bmatrix} 8 & 3 \\ 3 & 2 \end{bmatrix}
$$

The principal minors are $|H_{1}| = 8$ and $|H_{2}| = \begin{vmatrix} 8 & 3 \\ 3 & 2 \end{vmatrix} = 16 - 9 = 7$. Both are positive, so the Hessian and the quadratic form are positive definite!
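The principal-minor test generalizes to any dimension, so it is worth writing once as a helper (our own sketch, assuming NumPy; the function name is not from the text):

```python
import numpy as np

def leading_principal_minors(H):
    """Return [|H1|, |H2|, ..., |Hn|] for a square matrix H."""
    n = H.shape[0]
    return [float(np.linalg.det(H[:k, :k])) for k in range(1, n + 1)]

H = np.array([[8.0, 3.0], [3.0, 2.0]])
print(leading_principal_minors(H))  # [8.0, 7.0] -> positive definite
```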

Higher Order Hessian

Given $y = f(x_{1},x_{2},x_{3})$, the third-order Hessian is

$$
|\mathbf{H}| = \begin{vmatrix} y_{11} & y_{12} & y_{13} \\ y_{21} & y_{22} & y_{23} \\ y_{31} & y_{32} & y_{33} \end{vmatrix}
$$

where the elements are the various second-order partial derivatives of $y$:

$$
y_{11} = \frac{\partial^{2}y}{\partial x_{1}^{2}},\quad y_{12} = \frac{\partial^{2}y}{\partial x_{2}\partial x_{1}},\quad y_{23} = \frac{\partial^{2}y}{\partial x_{3}\partial x_{2}},\quad \text{etc.}
$$

Conditions for a relative minimum or maximum depend on the signs of the first, second, and third principal minors. If

$$
|H_{1}| > 0,\quad |H_{2}| = \begin{vmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{vmatrix} > 0,\quad \text{and} \quad |H_{3}| = |\mathbf{H}| > 0,
$$

then $|\mathbf{H}|$ is positive definite and fulfills the second-order conditions for a minimum.

If

$$
|H_{1}| < 0,\quad |H_{2}| = \begin{vmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{vmatrix} > 0,\quad \text{and} \quad |H_{3}| = |\mathbf{H}| < 0,
$$

then $|\mathbf{H}|$ is negative definite and fulfills the second-order conditions for a maximum.

Let’s take an example. Given the function:

$$
y = -5x_{1}^{2} + 10x_{1} + x_{1}x_{3} - 2x_{2}^{2} + 4x_{2} + 2x_{2}x_{3} - 4x_{3}^{2}.
$$

The first-order conditions (F.O.C.) are

$$
\frac{\partial y}{\partial x_1} = y_1 = -10x_1 + 10 + x_3 = 0, \\
\frac{\partial y}{\partial x_2} = y_2 = -4x_2 + 2x_3 + 4 = 0, \\
\frac{\partial y}{\partial x_3} = y_3 = x_1 + 2x_2 - 8x_3 = 0,
$$

which can be expressed in matrix form as

$$
\begin{bmatrix} -10 & 0 & 1 \\ 0 & -4 & 2 \\ 1 & 2 & -8 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -10 \\ -4 \\ 0 \end{bmatrix}
$$

Using Cramer’s rule we get $x_1 \approx 1.04$, $x_2 \approx 1.22$ and $x_3 \approx 0.43$ (please verify this).

Taking the second partial derivatives from the first‑order conditions to create the Hessian,

$$
y_{11} = -10,\quad y_{12} = 0,\quad y_{13} = 1,\qquad y_{21} = 0,\quad y_{22} = -4,\quad y_{23} = 2,\qquad y_{31} = 1,\quad y_{32} = 2,\quad y_{33} = -8.
$$

Thus,

$$
\mathbf{H} = \begin{bmatrix} -10 & 0 & 1 \\ 0 & -4 & 2 \\ 1 & 2 & -8 \end{bmatrix}
$$

Finally, applying the Hessian test, we have

$$
\begin{aligned} |H_1| &= -10 < 0, \\ |H_2| &= \begin{vmatrix} -10 & 0 \\ 0 & -4 \end{vmatrix} = 40 > 0, \\ |H_3| &= |\mathbf{H}| = \begin{vmatrix} -10 & 0 & 1 \\ 0 & -4 & 2 \\ 1 & 2 & -8 \end{vmatrix} = -276 < 0. \end{aligned}
$$

Since the principal minors alternate correctly in sign, the Hessian is negative definite and the function is maximized at $x_1 \approx 1.04$, $x_2 \approx 1.22$ and $x_3 \approx 0.43$.
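Both the critical values and the alternating principal minors can be verified with NumPy (a quick check of the worked example above, not part of the original text):

```python
import numpy as np

# Coefficient matrix of the F.O.C. system A x = b; A is also the Hessian here
A = np.array([[-10.0, 0.0, 1.0],
              [0.0, -4.0, 2.0],
              [1.0, 2.0, -8.0]])
b = np.array([-10.0, -4.0, 0.0])

x = np.linalg.solve(A, b)                              # critical values
minors = [np.linalg.det(A[:k, :k]) for k in (1, 2, 3)]  # |H1|, |H2|, |H3|
print(np.round(x, 2))       # [1.04 1.22 0.43]
print(np.round(minors, 0))  # [-10.  40. -276.]
```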

Try these ^^

For each equation below find (a) critical values, and (b) the nature of the critical values using the Hessian.

(1) $f(x,y) = 3x^2 - xy - 2y^2 - 4x - 6y + 12$
(2) $f(x,y,z) = -5x^2 + 10x + xz - 2y^2 + 4y + 2yz - 4z^2$
(3) $f(x,y,z) = 3x^2 - 5x - xy + 6y^2 - 4y + 2yz + 4z^2 + 2z - 3xz$

The Bordered Hessian

To optimize a function $f(x_1, x_2)$ subject to a constraint $g(x_1, x_2) = c$, the first-order conditions can be found by setting up what is known as the Lagrangian function $F(x_1, x_2, \lambda) = f(x_1, x_2) - \lambda(g(x_1, x_2) - c)$.

The second-order conditions can be expressed in terms of a bordered Hessian $|\bar{\mathbf{H}}|$ as

$$
|\bar{\mathbf{H}}| = \begin{vmatrix} 0 & g_1 & g_2 \\ g_1 & F_{11} & F_{12} \\ g_2 & F_{21} & F_{22} \end{vmatrix}
$$

or

$$
|\bar{\mathbf{H}}| = \begin{vmatrix} 0 & g_1 & g_2 \\ g_1 & f_{11} & f_{12} \\ g_2 & f_{21} & f_{22} \end{vmatrix}
$$

which is the usual Hessian bordered by the first derivatives of the constraint with zero on the principal diagonal.

The order of a bordered principal minor is determined by the order of the principal minor being bordered. Hence $|\bar{H}_2|$ above represents a second bordered principal minor, because the principal minor being bordered has dimensions $2\times 2$.

For the bivariate case with a single constraint, we simply look at $|\bar{H}_2|$. If this is negative, the bordered Hessian is said to be positive definite and satisfies the second-order condition for a minimum. However, if it is positive, the bordered Hessian is said to be negative definite and meets the sufficient conditions for a maximum.

Let’s try optimizing the following objective function

$$
f(x_1, x_2) = 4x_1^2 + 3x_2^2 - 2x_1x_2
$$

subject to

$$
x_1 + x_2 = 56.
$$

Setting up the Lagrangian function, taking the first-order partials and solving gives $x_1^* = 224/9 \approx 24.89$, $x_2^* = 280/9 \approx 31.11$ (and $\lambda^* = 1232/9 \approx 136.89$). (You should verify this.)

The bordered Hessian for this optimization problem is

$$
|\bar{\mathbf{H}}| = \begin{vmatrix} 0 & 1 & 1 \\ 1 & 8 & -2 \\ 1 & -2 & 6 \end{vmatrix}
$$

Starting with the second principal minor, we have

$$
\begin{aligned} |\bar{H}_2| = |\bar{\mathbf{H}}| &= 0 \cdot \begin{vmatrix} 8 & -2 \\ -2 & 6 \end{vmatrix} - 1 \cdot \begin{vmatrix} 1 & -2 \\ 1 & 6 \end{vmatrix} + 1 \cdot \begin{vmatrix} 1 & 8 \\ 1 & -2 \end{vmatrix} \\ &= -(6+2) + (-2-8) = -8 - 10 = -18, \end{aligned}
$$

which is negative; hence $|\bar{\mathbf{H}}|$ is positive definite and we have met the sufficient conditions for a minimum.
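The Lagrangian solution and the sign of the bordered Hessian can both be checked symbolically (a sketch assuming SymPy; the exact fractions it reports are a verification of the worked example, not additional theory):

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")
f = 4*x1**2 + 3*x2**2 - 2*x1*x2
g = x1 + x2 - 56
F = f - lam * g  # Lagrangian

# First-order conditions of the Lagrangian
sol = sp.solve([sp.diff(F, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol)  # {x1: 224/9, x2: 280/9, lam: 1232/9}

# Bordered Hessian for the single-constraint bivariate case
Hbar = sp.Matrix([[0, 1, 1], [1, 8, -2], [1, -2, 6]])
print(Hbar.det())  # -18 < 0 -> positive definite -> minimum
```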

For the more general case, in which the objective function has say $n$ variables, i.e. $f(x_1, \ldots, x_n)$ subject to some constraint $g(x_1, \ldots, x_n)$, we can set up the bordered Hessian as

$$
|\bar{\mathbf{H}}| = \begin{vmatrix} 0 & g_1 & g_2 & \dots & g_n \\ g_1 & F_{11} & F_{12} & \dots & F_{1n} \\ g_2 & F_{21} & F_{22} & \dots & F_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ g_n & F_{n1} & F_{n2} & \dots & F_{nn} \end{vmatrix}
$$

where $|\bar{\mathbf{H}}| = |\bar{H}_n|$ because the $n \times n$ principal minor is bordered.

In this case, if all the bordered principal minors are negative, i.e. $|\bar{H}_2|, |\bar{H}_3|, \ldots, |\bar{H}_n| < 0$, the bordered Hessian is said to be positive definite and satisfies the second-order condition for a minimum.

On the other hand, if the bordered principal minors alternate consistently in sign from positive to negative, i.e. $|\bar{H}_2| > 0$, $|\bar{H}_3| < 0$, $|\bar{H}_4| > 0$, etc., the bordered Hessian is negative definite and meets the sufficient conditions for a maximum.

Input‑Output Analysis

If $a_{ij}$ is a technical coefficient representing the value of input $i$ required to produce one dollar’s worth of product $j$, the total demand for good $i$ can be expressed as

$$
x_{i} = a_{i1}x_{1} + a_{i2}x_{2} + \dots + a_{in}x_{n} + b_{i}
$$

where $b_i$ is the final demand for product $i$. What is important to realize here is that the total demand for a product consists of the final demand for that product plus the demand for it as an intermediate good in the production of other products.

In matrix form we have

$$
\mathbf{X} = \mathbf{A}\mathbf{X} + \mathbf{B}
$$

where, for an $n$-sector economy,

$$
\mathbf{X} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.
$$

$\mathbf{A}$ is called the matrix of technical coefficients.

To find the output (intermediate and final goods) needed to satisfy demand, all we have to do is solve for $\mathbf{X}$:

$$
\mathbf{X} - \mathbf{A}\mathbf{X} = \mathbf{B} \quad \Rightarrow \quad (\mathbf{I} - \mathbf{A})\mathbf{X} = \mathbf{B} \quad \Rightarrow \quad \mathbf{X} = (\mathbf{I} - \mathbf{A})^{-1}\mathbf{B},
$$

where $(\mathbf{I} - \mathbf{A})$ is known as the Leontief matrix.

Thus for a 3-sector economy, we have

$$
\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix} = \begin{bmatrix} 1 - a_{11} & -a_{12} & -a_{13} \\ -a_{21} & 1 - a_{22} & -a_{23} \\ -a_{31} & -a_{32} & 1 - a_{33} \end{bmatrix}^{-1} \begin{bmatrix} b_{1} \\ b_{2} \\ b_{3} \end{bmatrix}
$$

Say we are asked to determine total output for three sectors/industries given A and B as below:

$$
A = \begin{bmatrix} 0.3 & 0.4 & 0.1 \\ 0.5 & 0.2 & 0.6 \\ 0.1 & 0.3 & 0.1 \end{bmatrix}, \quad \text{and} \quad B = \begin{bmatrix} 20 \\ 10 \\ 30 \end{bmatrix}
$$

Since $X = (I - A)^{-1} B$,

$$
I - A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 0.3 & 0.4 & 0.1 \\ 0.5 & 0.2 & 0.6 \\ 0.1 & 0.3 & 0.1 \end{bmatrix} = \begin{bmatrix} 0.7 & -0.4 & -0.1 \\ -0.5 & 0.8 & -0.6 \\ -0.1 & -0.3 & 0.9 \end{bmatrix}
$$

And taking the inverse

$$
(I - A)^{-1} = \frac{1}{0.151} \begin{bmatrix} 0.54 & 0.39 & 0.32 \\ 0.51 & 0.62 & 0.47 \\ 0.23 & 0.25 & 0.36 \end{bmatrix}
$$

Hence,

$$
\begin{aligned} X = \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix} &= \frac{1}{0.151} \begin{bmatrix} 0.54 & 0.39 & 0.32 \\ 0.51 & 0.62 & 0.47 \\ 0.23 & 0.25 & 0.36 \end{bmatrix} \begin{bmatrix} 20 \\ 10 \\ 30 \end{bmatrix} \\ &= \frac{1}{0.151} \begin{bmatrix} 24.3 \\ 30.5 \\ 17.9 \end{bmatrix} = \begin{bmatrix} 160.93 \\ 201.99 \\ 118.54 \end{bmatrix} \end{aligned}
$$
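In practice one solves the Leontief system $(\mathbf{I} - \mathbf{A})\mathbf{X} = \mathbf{B}$ directly rather than forming the inverse by hand. A quick NumPy check of the example above:

```python
import numpy as np

A = np.array([[0.3, 0.4, 0.1],
              [0.5, 0.2, 0.6],
              [0.1, 0.3, 0.1]])
B = np.array([20.0, 10.0, 30.0])

# Solve (I - A) X = B; solving is cheaper and more stable than inverting
X = np.linalg.solve(np.eye(3) - A, B)
print(np.round(X, 2))  # [160.93 201.99 118.54]
```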
Try these ^^

Determine the total demand for industries 1, 2 and 3, given the matrix of technical coefficients $\mathbf{A}$ and the final demand vector $b$ below.

$$
\mathbf{A} = \begin{bmatrix} 0.2 & 0.3 & 0.2 \\ 0.4 & 0.1 & 0.3 \\ 0.3 & 0.5 & 0.2 \end{bmatrix}, \quad b = \begin{bmatrix} 150 \\ 200 \\ 210 \end{bmatrix}
$$

Characteristic Roots and Vectors (Eigenvalues and Eigenvectors)

The sign definiteness of a Hessian or a quadratic form has so far been tested using principal minors. Sign definiteness can also be tested using the characteristic roots of a matrix. Given a square matrix $\mathbf{A}$, it is possible to find a vector $\mathbf{V} \neq 0$ and a scalar $\lambda$ such that

$$
\mathbf{AV} = \lambda\mathbf{V}.
$$

The scalar $\lambda$ is called the characteristic root, latent value or eigenvalue, and the vector $\mathbf{V}$ is called the characteristic vector, latent vector or eigenvector. The above can be written as

$$
\mathbf{AV} - \lambda\mathbf{V} = 0,
$$

which can be rearranged so that

$$
\begin{aligned} \mathbf{AV} - \lambda\mathbf{IV} &= 0 \\ (\mathbf{A} - \lambda\mathbf{I})\mathbf{V} &= 0 \end{aligned}
$$

where $\mathbf{A} - \lambda\mathbf{I}$ is called the characteristic matrix of $\mathbf{A}$. Since we have $\mathbf{V} \neq 0$, the characteristic matrix $\mathbf{A} - \lambda\mathbf{I}$ must be singular and thus its determinant is zero.

If $\mathbf{A}$ is a $3 \times 3$ matrix, then

$$
|\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} a_{11} - \lambda & a_{12} & a_{13} \\ a_{21} & a_{22} - \lambda & a_{23} \\ a_{31} & a_{32} & a_{33} - \lambda \end{vmatrix} = 0
$$

With $|\mathbf{A} - \lambda\mathbf{I}| = 0$, there will be an infinite number of solutions for $\mathbf{V}$. To force a unique solution, the solution may be normalized by requiring the elements $v_i$ of $\mathbf{V}$ to satisfy $\sum v_i^2 = 1$.

Let’s take an example. Given a square matrix

$$
\mathbf{A} = \begin{bmatrix} -6 & 3 \\ 3 & -6 \end{bmatrix}
$$

To find the characteristic roots (eigenvalues) of $\mathbf{A}$, we simply set $|\mathbf{A} - \lambda \mathbf{I}| = 0$:

$$
|\mathbf{A} - \lambda \mathbf{I}| = \begin{vmatrix} -6 - \lambda & 3 \\ 3 & -6 - \lambda \end{vmatrix} = 0
$$

This means

$$
(-6 - \lambda)(-6 - \lambda) - 9 = 0 \\
\lambda^2 + 12\lambda + 27 = 0 \\
(\lambda + 9)(\lambda + 3) = 0 \\
\lambda = -9,\ -3.
$$

Since both characteristic roots $\lambda$ are negative, we say $\mathbf{A}$ is negative definite.


Note:

$$
\text{(i) } \sum \lambda_i = \operatorname{tr}(\mathbf{A}), \qquad \text{(ii) } \prod \lambda_i = |\mathbf{A}|.
$$
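Both identities are easy to confirm on the example matrix (a NumPy sketch; `eigvalsh` is appropriate because $\mathbf{A}$ is symmetric):

```python
import numpy as np

A = np.array([[-6.0, 3.0], [3.0, -6.0]])
eig = np.linalg.eigvalsh(A)  # sorted ascending for symmetric A

print(eig)                            # [-9. -3.]
print(eig.sum(), np.trace(A))         # sum of roots equals the trace
print(eig.prod(), np.linalg.det(A))   # product of roots equals the determinant
```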

Let’s continue with the example above to find the characteristic vector.

We know one of the roots is $\lambda = -9$, so substituting into the characteristic matrix gives

$$
(\mathbf{A} - \lambda \mathbf{I})\mathbf{v} = \mathbf{0} \quad \Rightarrow \quad \begin{bmatrix} -6 - (-9) & 3 \\ 3 & -6 - (-9) \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
$$

Since the coefficient matrix is linearly dependent, there is an infinite number of solutions. The matrix product gives two identical equations:

$$
3v_1 + 3v_2 = 0 \quad \Rightarrow \quad v_2 = -v_1.
$$

By normalizing we have

$$
v_1^{2} + v_2^{2} = 1.
$$

Substituting v2=v1v_2 = -v_1 gives

$$
v_1^{2} + (-v_1)^{2} = 1 \quad \Rightarrow \quad 2v_1^{2} = 1 \quad \Rightarrow \quad v_1^{2} = \frac{1}{2}.
$$

Taking the positive square root gives $v_1 = \sqrt{1/2} = \frac{\sqrt{2}}{2}$, and substituting into $v_2 = -v_1$ gives $v_2 = -\frac{\sqrt{2}}{2}$. That is,

$$
\mathbf{v}_1 = \begin{bmatrix} \frac{\sqrt{2}}{2} \\[4pt] -\frac{\sqrt{2}}{2} \end{bmatrix}.
$$

Using the second characteristic root $\lambda = -3$:

$$
(\mathbf{A} - \lambda \mathbf{I})\mathbf{v} = \begin{bmatrix} -6 - (-3) & 3 \\ 3 & -6 - (-3) \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} -3 & 3 \\ 3 & -3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
$$

Multiplying out gives $-3v_1 + 3v_2 = 0$ and $3v_1 - 3v_2 = 0$, so $v_1 = v_2$.

Normalizing as before:

$$
v_1^2 + v_2^2 = 1 \quad \Rightarrow \quad 2v_1^2 = 1 \quad \Rightarrow \quad v_1 = \frac{\sqrt{2}}{2}.
$$

Hence,

$$
\mathbf{v}_2 = \begin{bmatrix} \frac{\sqrt{2}}{2} \\[4pt] \frac{\sqrt{2}}{2} \end{bmatrix}.
$$
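NumPy returns exactly these unit-length eigenvectors (up to an arbitrary sign flip per column, so compare magnitudes, not signs):

```python
import numpy as np

A = np.array([[-6.0, 3.0], [3.0, -6.0]])
vals, vecs = np.linalg.eigh(A)  # columns of vecs are normalized eigenvectors

print(vals)  # [-9. -3.]
print(vecs)  # columns proportional to (1, -1) and (1, 1), each of length 1

# Each column satisfies A v = lambda v
for i in range(2):
    print(np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i]))
```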

Diagonalization

A square matrix A is diagonalizable if it can be written as:

$$
\mathbf{A} = \mathbf{T} \mathbf{D} \mathbf{T}^{-1},
$$

where $\mathbf{T}$ is an invertible matrix whose columns are the eigenvectors of $\mathbf{A}$, and $\mathbf{D}$ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues.

Note that not all matrices are diagonalizable, however.

For example, given

$$
A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}
$$

Let us start by finding its eigenvalues.

The characteristic polynomial is given by

$$
\det(A - \lambda I) = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0.
$$

Hence,

$$
\lambda = 1, \qquad \lambda = 3.
$$

For $\lambda = 1$:

$$
(A - I)\mathbf{v} = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \mathbf{v} = \mathbf{0}.
$$

A corresponding eigenvector is

$$
\mathbf{v}_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
$$
For $\lambda = 3$:

$$
(A - 3I)\mathbf{v} = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} \mathbf{v} = \mathbf{0}.
$$

A corresponding eigenvector is

$$
\mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
$$

Then,

$$
T = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}.
$$

The inverse of TT is

$$
T^{-1} = \tfrac{1}{2} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}.
$$

We can verify that $A = TDT^{-1}$:

$$
TDT^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix} \cdot \tfrac{1}{2} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = A.
$$
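One payoff of diagonalization is cheap matrix powers: $A^k = T D^k T^{-1}$, and $D^k$ is just the eigenvalues raised to the $k$-th power. A NumPy check of the factorization above (relevant to exercise (d) below):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
T = np.array([[1.0, 1.0], [-1.0, 1.0]])
D = np.diag([1.0, 3.0])

# Verify the factorization A = T D T^{-1}
print(np.allclose(A, T @ D @ np.linalg.inv(T)))  # True

# Powers via the diagonal: A^5 = T D^5 T^{-1}
A5 = T @ np.diag([1.0**5, 3.0**5]) @ np.linalg.inv(T)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```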

Note:
This is closely related to the transformation form of diagonalization, which states that if $A$ is diagonalizable, then there exists an invertible matrix $T$ and a diagonal matrix $D$ such that

$$
T^{-1} A T = D.
$$

Exercise. Can you show this?

Try these ^^

Given:

$$
A = \begin{bmatrix} -4 & -2 \\ -2 & -6 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 0 \\ 1 & 2 \end{bmatrix}, \quad C = \begin{bmatrix} 6 & 3 \\ 3 & -2 \end{bmatrix}, \quad D = \begin{bmatrix} 4 & 6 & 3 \\ 0 & 2 & 5 \\ 0 & 1 & 3 \end{bmatrix}
$$

Find:

(a) Find the eigenvalues and eigenvectors for each of the matrices above.
(b) What can you say about the sign definiteness of each matrix?
(c) Verify $\mathbf{A} = \mathbf{T} \mathbf{D} \mathbf{T}^{-1}$.
(d) Find $\mathbf{A}^5$.