In this notebook, we use Kronecker products to construct a 2d finite-difference approximation of the Laplacian operator \(-\nabla^2\) with Dirichlet (zero) boundary conditions, via the standard 5-point stencil (centered differences in \(x\) and \(y\)).
The Kronecker products build up the matrix acting on "multidimensional" data from the matrices expressing the 1d operations on a 1d finite-difference grid. We start with the first-derivative matrix \(D\) from class.
# construct the (M+1)xM matrix D, not including the 1/dx factor
diff1(M) = [ [1.0 zeros(1,M-1)]; diagm(ones(M-1),1) - eye(M) ]
diff1(5)
Multidimensional finite-difference matrices quickly get very large, so we need to exploit the fact that they are sparse (mostly zero), by storing only the nonzero entries and using special algorithms that exploit the sparsity. A sparse matrix can be constructed in Julia with the sparse function:
# sparse version (the lazy way):
sdiff1(M) = sparse(diff1(M))
sdiff1(5)
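(As an aside, we could also build the sparse matrix directly instead of converting a dense one. The following is a sketch of my own, not from class, and assumes spdiagm accepts explicit \((m,n)\) dimensions as in the Julia of this era:)
# direct sparse construction (a sketch, not from class): +1 on the main
# diagonal and -1 on the subdiagonal of an (M+1)xM matrix
sdiff1_direct(M) = spdiagm((ones(M), -ones(M)), (0, -1), M+1, M)
sdiff1_direct(5) == sdiff1(5)   # should agree with the lazy version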
Now we'll construct the matrix approximation of \(-\nabla^2\) in 2d for an \(L_x \times L_y\) box with Dirichlet boundary conditions. The unknowns are discretized into \(N_x \times N_y\) points and are assumed to be written as a column vector in "column-major" order (a stack of \(N_y\) vectors of length \(N_x\)), as described in the lecture notes.
The function kron computes the Kronecker product, and the function speye(N) forms the \(N \times N\) identity matrix \(I_N\) in sparse-matrix format. As in the notes, this gives us \(A = I_{N_y} \otimes A_x + A_y \otimes I_{N_x}\).
# make the discrete -Laplacian in 2d, with Dirichlet boundaries
function Laplacian(Nx, Ny, Lx, Ly)
    dx = Lx / (Nx+1)   # grid spacing in x (Nx interior points)
    dy = Ly / (Ny+1)   # grid spacing in y (Ny interior points)
    Dx = sdiff1(Nx) / dx   # 1d difference matrices
    Dy = sdiff1(Ny) / dy
    Ax = Dx' * Dx   # 1d approximations of -d²/dx² and -d²/dy²
    Ay = Dy' * Dy
    return kron(speye(Ny), Ax) + kron(Ay, speye(Nx))
end
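Before using it, here is a quick sanity check (my addition, not from the notes) of the Kronecker identities underlying this construction: \((I \otimes A_x)\operatorname{vec}(U) = \operatorname{vec}(A_x U)\) and \((A_y \otimes I)\operatorname{vec}(U) = \operatorname{vec}(U A_y^T)\), verified on small random test matrices:
# sanity check (not from class): kron(I,Ax)*vec(U) == vec(Ax*U)
# and kron(Ay,I)*vec(U) == vec(U*Ay'), for small random test matrices
Axt = rand(4,4); Ayt = rand(3,3); Ut = rand(4,3)
norm(kron(eye(3),Axt)*vec(Ut) - vec(Axt*Ut)), norm(kron(Ayt,eye(4))*vec(Ut) - vec(Ut*Ayt'))
Both norms should be on the order of machine precision.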
The spy command lets us plot the nonzero pattern of a matrix. (Currently, spy does not work on Julia's sparse-matrix data structure, so we use the full function to convert it back to an ordinary "dense" matrix first.)
using PyPlot
spy(full(Laplacian(10,10,1,1)))
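Since \(A_x = D_x^T D_x\) and \(A_y = D_y^T D_y\), the matrix \(A\) should also be symmetric positive-definite. A quick numerical check of my own (not from class), using the issym of this Julia era:
# check (my addition): A is symmetric, with strictly positive eigenvalues
A10 = Laplacian(10,10,1,1)
issym(full(A10)), all(eig(full(A10))[1] .> 0)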
As a first test case, we'll consider a rectangular box and find its eigenvalues and eigenfunctions. From class, we know that \(-\nabla^2\) in this domain has separable eigenfunctions \(\sin(n_x \pi x / L_x) \sin(n_y \pi y / L_y)\) with eigenvalues \((n_x \pi / L_x)^2 + (n_y \pi / L_y)^2\). We'll use \(L_x = 1\) and \(L_y = 2\).
Lx = 1
Ly = 2
Nx = 50
Ny = 100
x = linspace(0,Lx,Nx+2)[2:end-1]    # interior x points (column vector)
y = linspace(0,Ly,Ny+2)[2:end-1]'   # interior y points (row vector)
A = Laplacian(Nx,Ny,Lx,Ly)
The eigs function exploits the sparsity of A to find only a few of the eigenvalues and eigenvectors. Passing which="SM" and nev=6 means that we are requesting the 6 smallest-magnitude eigenvalues. For large sparse matrices, this is far more efficient than computing all of the eigenvalues and eigenvectors with eig. (eigs also returns some statistics nconv, niter, nmult, resid about the eigensolver that you can safely ignore.)
λ, U, nconv, niter, nmult, resid = eigs(A, which="SM", nev=6)
Let's check that \(\lambda_1\), the smallest-magnitude eigenvalue of the finite-difference approximation, is close to the analytical value \((\pi/L_x)^2 + (\pi/L_y)^2\):
λ[1], (pi/Lx)^2 + (pi/Ly)^2
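As a further check (not from class), we can compare all six computed eigenvalues against the six smallest analytical values \((n_x\pi/L_x)^2 + (n_y\pi/L_y)^2\); searching \(n_x, n_y \le 4\) is enough to capture them for these box dimensions:
# compare all 6 computed eigenvalues to the analytical ones (my addition)
λexact = sort(vec([ (nx*pi/Lx)^2 + (ny*pi/Ly)^2 for nx=1:4, ny=1:4 ]))[1:6]
[sort(λ) λexact]   # first column: finite-difference; second: analytical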
The expression U[:,2] gives the second column of U, i.e. the eigenvector corresponding to the second-smallest-magnitude \(\lambda\). Since it is in column-major order, we can convert it back to a 2d \(N_x \times N_y\) matrix with the reshape command, and plot it with imshow from PyPlot:
imshow(reshape(U[:,2], Nx,Ny), extent=[0,Ly,0,Lx], cmap="RdBu")
colorbar()
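As an extra sanity check (my addition, not part of the original notes), U[:,2] should match the analytical \((n_x, n_y) = (1,2)\) mode \(\sin(\pi x/L_x)\sin(2\pi y/L_y)\) up to sign and normalization, so the normalized inner product below should be close to \(\pm 1\):
# compare U[:,2] with the analytical (1,2) eigenfunction (my addition);
# x is a column and y a row, so .* broadcasts into an Nx-by-Ny matrix
uexact = sin(pi*x/Lx) .* sin(2*pi*y/Ly)
dot(U[:,2], vec(uexact)) / (norm(U[:,2]) * norm(vec(uexact)))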
Next, we'll consider the case from class of \(-\nabla^2\) in a radius-\(1\) cylinder centered on the origin.
To construct the finite-difference matrix, we'll first construct the finite-difference matrix \(A\) for the box \([-1,1] \times [-1,1]\). Setting the unknowns to zero outside the cylinder corresponds to deleting the corresponding columns of \(A\), and ignoring \(-\nabla^2\) outside the cylinder corresponds to deleting the corresponding rows of \(A\). We use the find function to get an array i of the indices of all points inside the cylinder (radius \(r < 1\)), and then make the submatrix Ai of those rows and columns via A[i,i]. Neat, huh?
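In case the indexing trick is unfamiliar, here is a toy illustration of my own (not from class): find returns linear (column-major) indices of a mask, which we can use for both the rows and columns of the corresponding operator matrix:
# toy example (my addition): keep the rows/columns of an operator
# corresponding to the `true` entries of a mask on the grid
mask = [true false true; false true false; true false true]
j = find(mask)            # linear (column-major) indices of `true` entries
B = reshape(1:81, 9,9)    # stands in for a 9x9 operator on the 3x3 grid
B[j,j]                    # 5x5 submatrix: rows/columns inside the mask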
Lx = 1
Ly = 1
Nx = 100
Ny = 100
x = linspace(-Lx,Lx,Nx+2)[2:end-1] # a column vector
y = linspace(-Ly,Ly,Ny+2)[2:end-1]' # a row vector
r = sqrt(x.^2 .+ y.^2) # use broadcasting (.+) to make Nx x Ny matrix of radii
i = find(r .< 1) # and get indices of points inside the cylinder
A = Laplacian(Nx,Ny,2Lx,2Ly)
Ai = A[i,i] # to make a submatrix for the Laplacian inside the cylinder
λi, Ui, nconv, niter, nmult, resid = eigs(Ai, which="SM");
To plot the resulting eigenfunctions, we must first convert them back into a rectangular grid of points. We make a 2d array u of zeros (since the functions are zero outside the cylinder), and set u[i] = Ui[:,2] to set the points inside the cylinder to the second column of Ui, i.e. to the eigenvector for the second-smallest \(\lambda\). We then plot it with imshow as before, with a superimposed contour plot to show the outline of the cylinder:
u = zeros(Nx,Ny)
u[i] = Ui[:,2]
imshow(u, extent=[-Ly,Ly,-Lx,Lx], cmap="RdBu")
contour(y,x, (r.<1) * 1.0)
Notice that, as could be checked from the analytical solution, the second-smallest \(\lambda\) corresponds to an eigenfunction \(J_1(k_{11} r) \cos(\theta + \varphi)\) for some angle \(\varphi\). (Note that this eigenvalue has multiplicity 2, and eigs returns a "random" superposition of the \(\sin\) and \(\cos\) solutions.)
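We can verify this degeneracy numerically (a quick check of my own): λi[2] and λi[3] should agree to within the discretization error:
λi[2], λi[3]   # nearly degenerate pair, split only by the finite grid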
Let's check that the eigenvalues are indeed squares of roots of Bessel functions, from class. For example, the smallest \(\lambda\) should be the square of the first root of \(J_0\). Let's plot \(J_0(x)\), find its first root with Newton's method, and compare the square of that root to the computed smallest \(\lambda\):
using PyCall
@pyimport scipy.optimize as so
plot(0:0.01:10, besselj(0,0:0.01:10))
grid()
so.newton(x -> besselj(0,x), 2.0)^2, λi[1]
The eigenvalues match pretty well, to the accuracy of the finite-difference approximation!
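By the same token (an extra check of my own, not from class), the degenerate pair λi[2] ≈ λi[3] should match the square of the first root of \(J_1\) (≈ 3.8317):
# the second eigenvalue should be the square of the first root of J1
so.newton(x -> besselj(1,x), 4.0)^2, λi[2]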