
Lanczos method in High-Performance Computing


Agenda

  • Introduction, linear algebra recap.
  • The Power method
  • Krylov subspace
  • Lanczos method

Introduction

Numerical analysis in high-performance computing:

States: vectors in a Hilbert space

Measurements: linear operators on the Hilbert space

For example, in Quantum Chromodynamics (QCD) the action is

$S_{\mathrm{QCD}} = \int d^4x\, \bar{\psi}(x)\, D\, \psi(x) + \beta\, \mathrm{tr}\, F_{\mu\nu}(x) F_{\mu\nu}(x)$

Fermion states are represented by complex vectors

$\Psi(x, y, z, t, a, \alpha)$

$a$ represents color, $\alpha$ spin
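As a concrete data layout (a minimal sketch; the lattice extents, the use of NumPy, and the variable names are illustrative assumptions, not taken from the slides), such a state maps naturally onto a complex array with one index per site coordinate plus color and spin:

```python
import numpy as np

# Illustrative lattice extents; production runs use much larger volumes.
Lx, Ly, Lz, Lt = 8, 8, 8, 16
Nc, Ns = 3, 4  # a: color index (3 colors), alpha: spin index (4 components)

# Fermion state Psi(x, y, z, t, a, alpha) stored as one complex array.
rng = np.random.default_rng(42)
psi = rng.standard_normal((Lx, Ly, Lz, Lt, Nc, Ns)) \
    + 1j * rng.standard_normal((Lx, Ly, Lz, Lt, Nc, Ns))
psi /= np.linalg.norm(psi)  # normalize so that sum |Psi|^2 = 1
```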

Observables are correlation functions: expectation values of operators


Spectrum of D

Eigenvectors

$D\psi = \lambda\psi$

where $\lambda$ is a number, the eigenvalue

Spectral density


The low-lying eigenvalues are the physically important ones


Low eigenmodes of D



How do we distinguish localized from delocalized eigenvectors?

  • Eigenvectors are normalized

$\sum_x |\psi(x)|^2 = 1$

  • What happens when we sum higher moments of the wave function?

$\sum_x |\psi(x)|^4 = \;?$

  • In the delocalized case $\psi(x)$ does not depend on $x$.

$\sum_x |\psi_0|^2 = V\,|\psi_0|^2 = 1 \;\Rightarrow\; |\psi_0|^2 = 1/V$

  • Thus for the second moment we get

$\sum_x |\psi_0|^4 = V\,|\psi_0|^4 = 1/V$
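This second moment, the inverse participation ratio (IPR), therefore separates the two cases: it stays of order $1$ for a localized mode and falls off as $1/V$ for a delocalized one. A minimal numerical check (the toy volume and toy vectors are made up for illustration):

```python
import numpy as np

V = 1000  # toy volume (number of sites)

# Delocalized mode: constant |psi(x)| over the whole volume.
psi_deloc = np.full(V, 1.0 / np.sqrt(V))
# Localized mode: all weight concentrated on 4 sites.
psi_loc = np.zeros(V)
psi_loc[:4] = 0.5  # each of the 4 sites carries |psi|^2 = 1/4

for name, psi in (("delocalized", psi_deloc), ("localized", psi_loc)):
    assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)  # normalization
    print(name, "IPR =", np.sum(np.abs(psi) ** 4))    # ~1/V vs O(1)
```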


The Power method

  • Given $x_0$ and $A$
  • Compute $x_1 = A x_0$
  • $x_2 = A x_1$
  • $x_3 = A x_2$
  • $x_4 = A x_3$
  • ...until
  • $x_k = A x_{k-1}$ approaches the dominant eigenvector (a runnable sketch follows)
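A minimal sketch of the iteration in Python/NumPy; the normalization at every step is an implementation detail the slide leaves implicit, but without it the iterates overflow or underflow:

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=10_000):
    """Iterate x_k = A x_{k-1} until x_k settles on the dominant eigenvector."""
    x = x0 / np.linalg.norm(x0)
    lam_old = 0.0
    for _ in range(max_iter):
        w = A @ x
        lam = x @ w                # Rayleigh quotient: eigenvalue estimate
        x = w / np.linalg.norm(w)  # renormalize to keep the iterate finite
        if abs(lam - lam_old) < tol * max(1.0, abs(lam)):
            break
        lam_old = lam
    return lam, x

# Example on a random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2
lam, x = power_method(A, rng.standard_normal(100))
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is why the method becomes slow when the spectrum is dense near the dominant eigenvalue.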

Lanczos method

Eigenvalues from the Krylov subspace

$\mathcal{K}_n(A, v_0) = \mathrm{span}\{v_0,\ Av_0,\ A^2v_0,\ A^3v_0,\ \ldots,\ A^nv_0\}$

Approximate the eigenmodes of $A$ using the Krylov subspace

$D$ is sparse, so applying it to a vector is cheap

$n$ is typically small compared to the matrix dimension

Storing the subspace basis is nevertheless expensive
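A naive use of the subspace, shown only to make the cost visible (a sketch for illustration, not the method actually used): build the basis explicitly, orthonormalize it, and diagonalize the projection of $A$ (Rayleigh-Ritz). All $n+1$ full-length basis vectors have to be kept in memory:

```python
import numpy as np

def krylov_ritz(A, v0, n):
    """Rayleigh-Ritz on span{v0, A v0, ..., A^n v0} (naive, for illustration)."""
    K = np.empty((len(v0), n + 1))
    K[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(n):
        K[:, j + 1] = A @ K[:, j]    # raw powers become increasingly parallel
    Q, _ = np.linalg.qr(K)           # orthonormalize the ill-conditioned basis
    T = Q.T @ A @ Q                  # projection of A onto the subspace
    theta, Y = np.linalg.eigh(T)     # Ritz values approximate eigenvalues
    return theta, Q @ Y              # Ritz vectors approximate eigenvectors
```

The Lanczos method, described next, builds the same orthonormal basis with a three-term recurrence, so the projected matrix $T$ comes out tridiagonal without an explicit QR step.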


Lanczos method

  • Let $v_1$ be an arbitrary vector with $\|v_1\| = 1$
  • $\beta_0 := 0$, $v_0 := 0$
  • for $k = 1, 2, 3, \ldots$ do
  • $w := A\,v_k$
  • $\alpha_k := (v_k \cdot w)$
  • $T_{k,k} := \alpha_k$
  • Diagonalize $T^{(k)}$ and stop if the wanted eigenvalue $e_n$ converges
  • $w := w - \beta_{k-1} v_{k-1} - \alpha_k v_k$
  • for $l = 1, 2, \ldots, k-2$ do (reorthogonalization)
  • $w := w - v_l\,(v_l \cdot w)$
  • end for
  • $\beta_k := \sqrt{(w \cdot w)}$
  • $v_{k+1} := w / \beta_k$
  • $T_{k,k+1} := \beta_k$, $T_{k+1,k} := \beta_k$
  • end for
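A compact NumPy sketch of the loop above (0-indexed, full reorthogonalization included; for simplicity convergence is tested on the lowest Ritz value rather than a general $e_n$, and no breakdown $\beta_k = 0$ is assumed):

```python
import numpy as np

def lanczos(A, v1, m, tol=1e-10):
    """Lanczos with full reorthogonalization; returns Ritz values and vectors."""
    n = len(v1)
    V = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m + 1)   # beta[0] = 0 by convention
    V[:, 0] = v1 / np.linalg.norm(v1)
    theta_old = np.inf
    for k in range(m):
        w = A @ V[:, k]
        alpha[k] = V[:, k] @ w
        w = w - alpha[k] * V[:, k] - beta[k] * V[:, k - 1]  # three-term recurrence
        for l in range(max(k - 1, 0)):           # reorthogonalize vs earlier vectors
            w -= V[:, l] * (V[:, l] @ w)
        # Diagonalize the current tridiagonal T and test the lowest Ritz value.
        T = (np.diag(alpha[:k + 1])
             + np.diag(beta[1:k + 1], 1) + np.diag(beta[1:k + 1], -1))
        theta = np.linalg.eigvalsh(T)[0]
        if abs(theta - theta_old) < tol:
            break
        theta_old = theta
        beta[k + 1] = np.linalg.norm(w)          # assumes no breakdown (beta > 0)
        V[:, k + 1] = w / beta[k + 1]
    evals, Y = np.linalg.eigh(T)
    return evals, V[:, :T.shape[0]] @ Y
```

For a non-Hermitian $D$ one would apply this to a Hermitian combination such as $D^\dagger D$; that detail is an assumption here, as the slides do not specify it.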

Thick Restarted Lanczos

  • Let $v_1$ be an arbitrary vector with $\|v_1\| = 1$
  • $k_x := 1$
  • for $l = 1, 2, 3, \ldots$ do (restart cycles)
  • for $k = k_x, k_x+1, k_x+2, \ldots, l_m - 1$ do
  • $w := A\,v_k$
  • $\alpha_k := (v_k \cdot w)$
  • $T_{k,k} := \alpha_k$
  • Diagonalize $T^{(k)}$ and stop if $e_n$ converges
  • for $j = k, k-1, \ldots, 2, 1$ do
  • $w := w - v_j\,(v_j \cdot w)$
  • end for
  • $\beta_k := \|w\|$
  • $v_{k+1} := w / \beta_k$
  • $T_{k,k+1} := \beta_k$, $T_{k+1,k} := \beta_k$
  • end for
  • Construct a new $T^{(l_s+1)}$ matrix and $v_1, \ldots, v_{l_s+1}$ for the restart
  • $k_x := l_s + 1$
  • end for

Here $l_m$ is the maximum size of the Lanczos basis and $l_s$ the number of Ritz vectors kept across a restart.
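A NumPy sketch of the restart logic (the parameters `lm` and `ls` follow the pseudocode; `nev`, the number of wanted eigenpairs, and the residual-based stopping test are assumptions of this sketch). After a restart $T$ is no longer tridiagonal: it carries an "arrowhead" of couplings $\beta\, Y_{l_m,i}$ between the kept Ritz vectors and the carried-over Lanczos vector:

```python
import numpy as np

def thick_restart_lanczos(A, v1, lm=30, ls=10, nev=4, tol=1e-8, max_restarts=100):
    """Thick-restart Lanczos sketch: keep ls Ritz vectors across each restart."""
    n = len(v1)
    V = np.zeros((n, lm + 1))
    T = np.zeros((lm, lm))
    V[:, 0] = v1 / np.linalg.norm(v1)
    kx = 0
    for _ in range(max_restarts):
        for k in range(kx, lm):
            w = A @ V[:, k]
            T[k, k] = V[:, k] @ w
            for l in range(k + 1):            # full reorthogonalization
                w -= V[:, l] * (V[:, l] @ w)
            b = np.linalg.norm(w)
            V[:, k + 1] = w / b
            if k < lm - 1:
                T[k, k + 1] = T[k + 1, k] = b
        theta, Y = np.linalg.eigh(T)          # Ritz pairs of the projection
        res = np.abs(b * Y[-1, :])            # residual norms of the Ritz pairs
        if np.all(res[:nev] < tol):
            break
        # Restart: keep the ls lowest Ritz vectors plus the last Lanczos vector.
        V[:, :ls] = V[:, :lm] @ Y[:, :ls]
        V[:, ls] = V[:, lm]
        V[:, ls + 1:] = 0.0
        T = np.zeros((lm, lm))
        T[:ls, :ls] = np.diag(theta[:ls])
        T[:ls, ls] = T[ls, :ls] = b * Y[-1, :ls]  # arrowhead couplings
        kx = ls
    return theta[:nev], V[:, :lm] @ Y[:, :nev]
```

Restarting caps the number of stored basis vectors at $l_m + 1$, which is the point of the method when each vector is a full lattice fermion field.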

Implementation details

  • Question: CPU or GPU? Which parts are the most computationally expensive?
  • Application of the operator
  • Linear algebra
  • The following kernels have to be implemented (sketched below):
  • axpby: $y := a\,x + b\,y$
  • the scalar product of two vectors
  • updating the Lanczos basis
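A NumPy sketch of the three kernels (function names are illustrative; in a GPU code these become fused device kernels or library calls, and the global reduction in the scalar product is the main communication point):

```python
import numpy as np

def axpby(a, x, b, y):
    """y := a*x + b*y, done in place so only one pass touches memory."""
    y *= b
    y += a * x
    return y

def dot(x, y):
    """Scalar product (x, y); np.vdot conjugates x, as complex fields require."""
    return np.vdot(x, y)

def basis_update(w, V):
    """Update the Lanczos basis: project out span(V), i.e. w -= V (V^H w)."""
    return w - V @ (V.conj().T @ w)
```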
