Type: Note
Institute: Jawaharlal Nehru Technological University Anantapur, College of Engineering

Unit-I
Lesson – 1
Matrix, Determinants and Inverse
Contents:
1.0 Aims and objectives
1.1 Introduction
1.2 Matrices
1.2.1 Applications of matrices
1.2.2 Definition of a Matrix
1.2.3 Matrix types
1.3 Determinant
1.3.1 Definition
1.3.2 Properties of determinants
1.3.3 Determinant of a 3 x 3 matrix by Matrix Enhancement
1.3.4 Leibnitz formula to compute determinant of a Matrix
1.3.5 Laplace's formula to compute determinant of a Matrix
1.4 Inverse of a Matrix
1.4.1 Cofactors
1.4.2 Adjoint of a Matrix
1.4.3 Inverse of a Matrix – Definition
1.4.4 Properties of invertible matrices
1.4.5 Matrix inverses in real-time simulations
1.4.6 Analytic solution
1.5 Problems and Solutions
1.6 Let Us Sum Up
1.7 Lesson End Activities
1.8 References

1.0 Aims and objectives
In this lesson, we discuss the representation of data using matrices and the
representation of expressions using determinants. Matrix inversion, a convenient way to
solve systems of linear equations, is also discussed in this lesson.
After reading this lesson, you should be able to understand:
· Matrices and their use in data representation.
· Determinants and the various methods of computing them.
· Cofactors and the adjoint of a matrix.
· How to find the inverse of a matrix and use it to solve linear equations.
· Properties of determinants and invertible matrices.
· Applications of matrix inverses in real-time simulations.
1.1 Introduction
J. J. Sylvester coined the term “matrix” in 1850. A matrix is a rectangular table of
elements. Matrices are used to describe linear equations, keep track of the coefficients of
linear transformations, and record data that depend on multiple parameters.
Determinant notation is now employed in almost every branch of applied science.
Matrix inversion plays a significant role in computer graphics and in solving systems of
linear equations.

1.2 Matrices
The study of matrices is quite old. A 3-by-3 magic square appears in Chinese
literature dating from as early as 650 BC. Matrices have a long history of application in
solving linear equations. After the development of the theory of determinants by Seki
Kowa and Leibniz in the late 17th century, Cramer developed the theory further in the
18th century, presenting Cramer's rule in 1750. Carl Friedrich Gauss and Wilhelm Jordan
developed Gauss-Jordan elimination in the 1800s. Cayley, Hamilton, Grassmann,
Frobenius and von Neumann are among the famous mathematicians who have worked on
matrix theory. Olga Taussky-Todd (1906-1995) used matrix theory to investigate an
aerodynamic phenomenon called fluttering or aeroelasticity during WWII.
In mathematics, a matrix (plural matrices) is a rectangular table of elements (or
entries), which may be numbers or, more generally, any abstract quantities that can be
added and multiplied. Matrices are used to describe linear equations, keep track of the
coefficients of linear transformations and to record data that depend on multiple
parameters. Matrices can be added, multiplied, and decomposed in various ways, making
them a key concept in linear algebra and matrix theory.
1.2.1 Applications of matrices
· Encryption
Matrices can be used to encrypt numerical data. Encryption is done by
multiplying the data matrix with a key matrix; decryption is done by
multiplying the encrypted matrix with the inverse of the key.
· Computer graphics
4×4 transformation matrices are commonly used in computer graphics.
The upper-left 3×3 portion of a transformation matrix is composed of the new X,
Y, and Z axes of the post-transformation coordinate space.
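The key-matrix idea above can be sketched in a few lines of NumPy. The 2×2 key and the data values here are illustrative choices for this sketch, not part of any standard scheme; the only requirement on the key is that it be invertible.

```python
import numpy as np

# Illustrative 2x2 key matrix; it must be invertible for decryption to work.
key = np.array([[2.0, 3.0],
                [1.0, 2.0]])    # det = 2*2 - 3*1 = 1, so the inverse is exact

# Numerical data arranged as a 2xN matrix (each column is a pair of values).
data = np.array([[7.0, 4.0, 19.0],
                 [3.0, 8.0, 5.0]])

encrypted = key @ data                        # encryption: multiply by the key
decrypted = np.linalg.inv(key) @ encrypted    # decryption: multiply by the inverse

print(np.allclose(decrypted, data))           # True: the round trip recovers the data
```

A real cipher built on this idea (such as the Hill cipher) works over modular integer arithmetic rather than floating point, but the multiply-then-invert structure is the same.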
1.2.2 Definition of a Matrix
If m and n are positive integers, then an m × n matrix (read “m by n”) is a
rectangular array
$$
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{pmatrix}
\quad (m \text{ rows},\; n \text{ columns})
$$
in which each entry a_{ij} of the matrix is a real number. An m × n matrix has m rows
(horizontal lines) and n columns (vertical lines).

A matrix having m rows and n columns is said to be of order m × n. If m = n, the
matrix is a square matrix of order n. For a square matrix, the entries a_{11}, a_{22}, a_{33}, …, a_{nn}
form the main diagonal.
A matrix that has only one row is a row matrix, and a matrix that has only one
column is a column matrix.
1.2.3 Matrix types
1) Diagonal Matrix
A square matrix A is said to be a diagonal matrix if a_{ij} = 0 when i ≠ j. In a
diagonal matrix, all the entries except those along the main diagonal are
zero.
2) Triangular matrix
A square matrix in which all the entries above the main diagonal are zero
is called a lower triangular matrix. If all the entries below the main diagonal
are zero, it is called an upper triangular matrix.
3) Scalar Matrix
A scalar matrix is a diagonal matrix in which all the entries along the main
diagonal are equal.
4) Identity Matrix or Unit Matrix
An Identity Matrix or Unit Matrix is a scalar matrix in which entries along
the main diagonal are equal to 1.
5) Zero or Null or Void Matrix
A matrix in which all the entries are zero.
6) Equality of matrices
The matrices A = [a_{ij}]_{m×n} and B = [b_{ij}]_{p×q} are equal if m = p, n = q, and a_{ij} = b_{ij}
for every i and j.
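The matrix types above can be constructed and checked directly; the following NumPy sketch uses small illustrative entries of my own choosing.

```python
import numpy as np

n = 3
D = np.diag([4, 7, 2])                        # diagonal: zeros off the main diagonal
L = np.tril(np.arange(1, 10).reshape(3, 3))   # lower triangular: zeros above the diagonal
S = 5 * np.eye(n)                             # scalar: equal entries on the diagonal
I = np.eye(n)                                 # identity: ones on the diagonal
Z = np.zeros((n, n))                          # zero (null) matrix

# Each definition can be verified against the text:
assert np.all(D[~np.eye(n, dtype=bool)] == 0)  # off-diagonal entries of D are zero
assert np.all(np.triu(L, k=1) == 0)            # entries above the diagonal of L are zero
assert np.array_equal(S, S[0, 0] * I)          # a scalar matrix is a constant times I
```

Note that the identity matrix is a special scalar matrix, and every scalar matrix is a special diagonal matrix, so these definitions nest.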
1.3 Determinant
Many complicated expressions of electrical and mechanical systems can be
conveniently handled by expressing them in “determinant form”.
1.3.1 Definition
In algebra, the determinant is a function that associates a scalar, det(A) or |A|,
to every n × n square matrix A. The fundamental geometric meaning of a determinant
is as the scale factor for volume when A is regarded as a linear transformation.
Determinants are important both in calculus, where they enter the substitution rule for
several variables, and in multilinear algebra.
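The scale-factor interpretation can be checked numerically. In this sketch (the 2×2 matrix is an arbitrary example), the unit square is mapped by A to a parallelogram whose area equals |det(A)|; in two dimensions that area is the absolute cross product of the columns of A.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# det(A) is the factor by which A scales area (volume in higher dimensions).
d = np.linalg.det(A)            # 3*2 - 1*1 = 5

# The unit square (area 1) maps to the parallelogram spanned by the columns
# of A; its area is the absolute value of their 2D cross product.
u, v = A[:, 0], A[:, 1]
area = abs(u[0] * v[1] - u[1] * v[0])

print(round(d), round(area))    # both equal 5
```

So a region of area S is mapped by A to a region of area |det(A)|·S, which is exactly the "scale factor for volume" described above.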
