9.2. Special Matrices#
using LinearAlgebra
So far, we have used general-purpose 2D arrays that can represent any arbitrary matrix. However, many applications involve matrices with special structures (e.g., symmetric, diagonal, triangular).
Julia provides a rich set of specialized matrix types to handle these cases. Using them has two major benefits:
Performance: Operations can be orders of magnitude faster and use significantly less memory.
Correctness: Enforcing a structure (like symmetry) can help ensure your algorithms are correct.
| Type | Description |
|---|---|
| Symmetric | Symmetric matrix |
| Hermitian | Hermitian matrix |
| UpperTriangular | Upper triangular matrix |
| UnitUpperTriangular | Upper triangular matrix with unit diagonal |
| LowerTriangular | Lower triangular matrix |
| UnitLowerTriangular | Lower triangular matrix with unit diagonal |
| Tridiagonal | Tridiagonal matrix |
| SymTridiagonal | Symmetric tridiagonal matrix |
| Bidiagonal | Upper/lower bidiagonal matrix |
| Diagonal | Diagonal matrix |
| UniformScaling | A uniform scaling operator, λI |
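As a quick sketch of the memory benefit, the example below compares a Diagonal matrix with its dense equivalent (the variable names are illustrative):

```julia
using LinearAlgebra

d = collect(1.0:1000.0)   # the diagonal entries
D = Diagonal(d)           # stores only the 1000 diagonal entries
A = Matrix(D)             # dense 1000×1000 version of the same matrix

# The specialized type stores n numbers instead of n^2.
println(length(D.diag))   # 1000 entries stored
println(length(A))        # 1000000 entries stored

# Operations dispatch to O(n) algorithms; inverting a Diagonal
# just inverts each diagonal entry and stays a Diagonal.
D_inv = inv(D)
```

The same space savings apply to the other structured types: a Tridiagonal stores three vectors, a Bidiagonal two, and so on.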
For example, if you know that your matrix is both symmetric and tridiagonal, you can use the SymTridiagonal type. The example below shows how to construct the 1D discrete Laplacian, a matrix that appears frequently when solving differential equations.
T = SymTridiagonal(2ones(5), -ones(4))
5×5 SymTridiagonal{Float64, Vector{Float64}}:
2.0 -1.0 ⋅ ⋅ ⋅
-1.0 2.0 -1.0 ⋅ ⋅
⋅ -1.0 2.0 -1.0 ⋅
⋅ ⋅ -1.0 2.0 -1.0
⋅ ⋅ ⋅ -1.0 2.0
The standard matrix operations will work just as before on these specialized types, but Julia will dispatch to highly efficient, specialized algorithms.
T * randn(5) # Matrix-vector multiplication
T^3 # Matrix cube
5×5 Matrix{Float64}:
14.0 -14.0 6.0 -1.0 0.0
-14.0 20.0 -15.0 6.0 -1.0
6.0 -15.0 20.0 -15.0 6.0
-1.0 6.0 -15.0 20.0 -14.0
0.0 -1.0 6.0 -14.0 14.0
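Linear solves benefit in the same way. A minimal sketch (re-creating T so the snippet is self-contained):

```julia
using LinearAlgebra

T = SymTridiagonal(2ones(5), -ones(4))
b = ones(5)

# `\` dispatches to a fast O(n) tridiagonal factorization for this type,
# instead of the O(n^3) dense LU it would use on a Matrix{Float64}.
x = T \ b
norm(T * x - b)   # residual, ≈ 0
```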
9.2.1. Example: Dramatic Improvement in Scaling Performance#
The real power of these specialized types becomes obvious when working with large matrices. Let’s consider a 1,000,000 × 1,000,000 tridiagonal matrix. If we were to store this as a standard Matrix{Float64}, it would require 1,000,000 × 1,000,000 × 8 bytes of memory, which is 8 terabytes! That’s far more RAM than any standard computer has.
By using SymTridiagonal, we store only the diagonal and off-diagonal entries (just under 2n numbers, about 16 megabytes), making the problem tractable. Furthermore, Julia’s eigvals function uses specialized, highly efficient algorithms for these matrix types.
n = 1_000_000
T_large = SymTridiagonal(2 * ones(n), -ones(n - 1))
# This computation is only feasible because of the special matrix type.
# It uses a fast algorithm and minimal memory to find the 5 smallest eigenvalues.
@time lambda = eigvals(T_large, 1:5);
println("The 5 smallest eigenvalues are:")
display(lambda)
1.692150 seconds (982.77 k allocations: 234.523 MiB, 0.40% gc time, 9.49% compilation time)
The 5 smallest eigenvalues are:
5-element Vector{Float64}:
9.869691735229404e-12
3.9478444686844015e-11
8.882637160787986e-11
1.579134638370591e-10
2.4673972355866816e-10
As you can see, the calculation is incredibly fast and memory-efficient. This task would be completely impossible with a standard dense matrix, demonstrating the critical importance of using the right data structure for your problem.
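These computed values can also be sanity-checked: the 1D discrete Laplacian has known closed-form eigenvalues λ_k = 4 sin²(kπ/(2(n+1))). A small-n sketch of the comparison:

```julia
using LinearAlgebra

n = 100
T = SymTridiagonal(2ones(n), -ones(n - 1))
computed = eigvals(T, 1:5)                            # 5 smallest eigenvalues
analytic = [4sin(k * π / (2(n + 1)))^2 for k in 1:5]  # closed-form values
maximum(abs.(computed .- analytic))                   # agreement near machine precision
```

Note that for k ≪ n, λ_k ≈ (kπ/(n+1))², which explains the 1, 4, 9, 16, 25 pattern visible in the five eigenvalues printed above.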
9.2.2. The Identity Matrix#
The identity matrix \(I\) is so common that it has a special representation, I, which is a UniformScaling operator. This object acts like an identity matrix of any required size without ever allocating memory for it.
# Julia infers that I should be a 5x5 identity matrix to match the size of T.
T + 2I
5×5 SymTridiagonal{Float64, Vector{Float64}}:
4.0 -1.0 ⋅ ⋅ ⋅
-1.0 4.0 -1.0 ⋅ ⋅
⋅ -1.0 4.0 -1.0 ⋅
⋅ ⋅ -1.0 4.0 -1.0
⋅ ⋅ ⋅ -1.0 4.0
If you need to create an explicit, dense identity matrix, you can do so, but this is often unnecessary.
# This creates a standard 4x4 Matrix{Float64}.
I4 = Matrix{Float64}(I, 4, 4)
4×4 Matrix{Float64}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
In most cases, you can just use the I operator directly in your expressions, and Julia will handle the dimensions and calculations efficiently.
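One common use is forming a shifted system such as A − μI without ever materializing an identity matrix. A minimal sketch (the matrix and shift are illustrative):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
μ = 0.5

# I automatically matches the 2×2 size of A, so the shifted matrix is
# formed without allocating an explicit identity matrix.
x = (A - μ*I) \ b
norm((A - μ*I) * x - b)   # residual, ≈ 0
```

This pattern appears throughout numerical linear algebra, for example in shift-and-invert eigenvalue iterations and in regularized least-squares problems.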