# Timing for banded matrices

We’ll use a large banded matrix to observe the speedup possible in LU factorization. We’ll also need to load a (standard) package for sparse matrices.

```julia
using FundamentalsNumericalComputation
```


If we use an ordinary “dense” matrix, then there’s no way to exploit a banded structure such as tridiagonality.

```julia
n = 10000
A = diagm(0=>1:n, 1=>n-1:-1:1, -1=>ones(n-1))
```

```
10000×10000 Array{Float64,2}:
1.0  9999.0     0.0     0.0     0.0  …     0.0     0.0     0.0      0.0
1.0     2.0  9998.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     1.0     3.0  9997.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     1.0     4.0  9996.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     1.0     5.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     1.0  …     0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0  …     0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
⋮                                    ⋱
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0  …     0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0  …     4.0     0.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0     9997.0     3.0     0.0      0.0
0.0     0.0     0.0     0.0     0.0        1.0  9998.0     2.0      0.0
0.0     0.0     0.0     0.0     0.0        0.0     1.0  9999.0      1.0
0.0     0.0     0.0     0.0     0.0        0.0     0.0     1.0  10000.0
```
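The storage cost of the dense format is already substantial: $n^2$ entries at 8 bytes per `Float64`. A quick back-of-envelope check of that arithmetic:

```julia
n = 10000
# A dense n×n Float64 matrix stores n^2 entries at 8 bytes each.
bytes = n^2 * 8
println(bytes / 2^20, " MiB")    # about 762.9 MiB
```

This matches the roughly 763 MiB allocation reported by `@time` below, while the banded matrix has only about $3n$ nonzero entries.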

```julia
@time lu(A);
```

```
  6.691195 seconds (108 allocations: 763.022 MiB, 2.07% gc time)
```
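For tridiagonal matrices specifically, there is also a middle ground: the standard `LinearAlgebra` library provides a dedicated `Tridiagonal` type that stores only the three diagonals and can be factorized in $O(n)$ time. A sketch, building the same matrix as above:

```julia
using LinearAlgebra

n = 10000
# Tridiagonal(sub, diag, super) stores just the three diagonals: O(n) memory.
T = Tridiagonal(ones(n-1), collect(1.0:n), collect(n-1.0:-1:1.0))
@time lu(T);
```

Structured types like `Tridiagonal` are often even faster than a general sparse format when the matrix fits the pattern exactly, since no symbolic analysis of the sparsity is needed.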


If instead we construct a proper “sparse” matrix, though, the speedup can be dramatic.

```julia
A = spdiagm(0=>1:n, 1=>n-1:-1:1, -1=>ones(n-1))
```

```
10000×10000 SparseMatrixCSC{Float64,Int64} with 29998 stored entries:
[1    ,     1]  =  1.0
[2    ,     1]  =  1.0
[1    ,     2]  =  9999.0
[2    ,     2]  =  2.0
[3    ,     2]  =  1.0
[2    ,     3]  =  9998.0
[3    ,     3]  =  3.0
[4    ,     3]  =  1.0
[3    ,     4]  =  9997.0
[4    ,     4]  =  4.0
[5    ,     4]  =  1.0
[4    ,     5]  =  9996.0
⋮
[9996 ,  9996]  =  9996.0
[9997 ,  9996]  =  1.0
[9996 ,  9997]  =  4.0
[9997 ,  9997]  =  9997.0
[9998 ,  9997]  =  1.0
[9997 ,  9998]  =  3.0
[9998 ,  9998]  =  9998.0
[9999 ,  9998]  =  1.0
[9998 ,  9999]  =  2.0
[9999 ,  9999]  =  9999.0
[10000,  9999]  =  1.0
[9999 , 10000]  =  1.0
[10000, 10000]  =  10000.0
```

```julia
@time lu(A);
```

```
  0.097763 seconds (274.35 k allocations: 23.452 MiB)
```
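The point of computing the factorization is to reuse it for solving linear systems. As a quick correctness check, a sketch with a manufactured right-hand side whose exact solution is the all-ones vector:

```julia
using LinearAlgebra, SparseArrays

n = 10000
A = spdiagm(0=>1:n, 1=>n-1:-1:1, -1=>ones(n-1))
b = A * ones(n)          # manufactured right-hand side; exact solution is all ones
F = lu(A)                # sparse LU, computed once
x = F \ b                # backslash on the factorization reuses the factors
@show norm(x - ones(n), Inf)
```

The residual should be near machine precision, confirming that the fast sparse factorization loses nothing in accuracy here.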