Tag Archive: Greens Function

One-Sided Greens Functions and Causality

This week we pick up where we left off in the last post and continue probing the structure of the one-sided Greens function $$K(t,\tau)$$. While the computations of the previous post can be found in most introductory textbooks, I would be remiss if I didn’t mention that both the previous post and this one were heavily influenced by two books: Martin Braun’s ‘Differential Equations and Their Applications’ and Larry C. Andrews’ ‘Elementary Partial Differential Equations with Boundary Value Problems’.

As a recap, we found that a second order inhomogeneous linear ordinary differential equation

\[ y''(t) + p(t) y'(t) + q(t) y(t) \equiv L[y] = f(t) \; , \]

($$y'(t) = \frac{d}{dt} y(t)$$) with initial conditions

\[ y(t_0) = y_0 \; \; \& \; \; y'(t_0) = y_0' \; \]

possesses the solution

\[ y(t) = A y_1(t) + B y_2(t) + y_p(t) \; ,\]

where $$y_i$$ are solutions to the homogeneous equation, $$\{A,B\}$$ are constants chosen to meet the initial conditions, and $$y_p$$ is the particular solution of the form

\[ y_p(t) = \int_{t_0}^{t} \, d\tau \, K(t,\tau) f(\tau) \; . \]

By historical convention, we call the kernel that propagates the influence of the inhomogeneous term in time (either forward or backward) a one-sided Greens function. The Wronskian provides the explicit formula

\[ K(t,\tau) = \frac{ \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1(t) & y_2(t) \end{array} \right| } { W[y_1,y_2](\tau)} = \frac{ \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1(t) & y_2(t) \end{array} \right| } { \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1'(\tau) & y_2'(\tau) \end{array} \right| } \; . \]

for the one-sided Greens function. Plugging $$t=t_0$$ into the particular solution gives

\[ y_p(t_0) = \int_{t_0}^{t_0} \, d\tau \, K(t_0,\tau) f(\tau) = 0 \]

as the initial datum for $$y_p$$ and

\[ y_p'(t_0) = K(t_0,t_0) f(t_0) + \int_{t_0}^{t_0} \, d\tau \, \left. \frac{\partial}{\partial t} K(t,\tau) \right|_{t=t_0} f(\tau) = 0 \]

for the initial datum for $$y_p'$$, since the definite integral of any integrand with the same lower and upper limits is identically zero and because

\[ K(t,t) = \frac{ \left| \begin{array}{cc} y_1(t) & y_2(t) \\ y_1(t) & y_2(t) \end{array} \right| } { W[y_1,y_2](t)} = 0 \; . \]

The initial conditions on the particular solution provide the justification that the constants $$\{A, B\}$$ can be chosen to meet the initial conditions or, in other words, the initial values are carried by the homogeneous solutions.
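It is easy to grind this determinant formula through a computer algebra system. As a concrete instance (my own example, not from either text: the operator $$D^2 + b^2$$, whose homogeneous solutions are $$y_1 = \cos bt$$ and $$y_2 = \sin bt$$), a short Maxima sketch builds $$K(t,\tau)$$ directly:

/* build K(t,tau) from the Wronskian formula for L = D^2 + b^2 */
y1 : cos(b*tau);
y2 : sin(b*tau);
num : determinant(matrix([y1, y2], [cos(b*t), sin(b*t)]));
W : determinant(matrix([y1, y2], [diff(y1,tau), diff(y2,tau)]));
trigsimp(trigreduce(num / W));  /* expect sin(b*(t-tau))/b */

The trig reduction collapses the quotient to $$\sin b(t-\tau)/b$$, which matches the corresponding entry in the table below.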

The results for the one-sided Greens function can be extended in four ways that make the practice of handling systems much more convenient.

Arbitrary Finite Dimensions

An arbitrary number of dimensions in the original differential equation is handled straightforwardly by the relation

\[ K(t,\tau) = \frac{ \left| \begin{array}{cccc} y_1(\tau) & y_2(\tau) & \cdots & y_n(\tau) \\ y_1'(\tau) & y_2'(\tau) & \cdots & y_n'(\tau) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)}(\tau) & y_2^{(n-2)}(\tau) & \cdots & y_n^{(n-2)} (\tau) \\ y_1(t) & y_2(t) & \cdots & y_n(t) \end{array} \right| } { W[y_1,y_2,\ldots,y_n](\tau)} \]

where the corresponding Wronskian is given by

\[ W[y_1,y_2,\ldots,y_n](\tau) = \left| \begin{array}{cccc} y_1(\tau) & y_2(\tau) & \cdots & y_n(\tau) \\ y_1'(\tau) & y_2'(\tau) & \cdots & y_n'(\tau) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)}(\tau) & y_2^{(n-1)}(\tau) & \cdots & y_n^{(n-1)} (\tau) \end{array} \right| \]

and

\[ y^{(n)} \equiv \frac{d^n y}{d t^n} \; .\]

The generation of one-sided Greens functions is then a fairly mechanical process once the homogeneous solutions are known, and since we are guaranteed that solutions of initial value problems exist and are unique, the corresponding one-sided Greens functions also exist and are unique. The following is a tabulated set of $$K(t,\tau)$$s adapted from Andrews' book.

One-sided Greens Functions – adapted from Elementary Partial Diff. Eqs. by L. C. Andrews
Operator \[ K(t,\tau) \]
\[ D + b\] \[ e^{-b(t-\tau)} \]
\[ D^n, \; n = 2, 3, 4, \ldots \] \[ \frac{(t-\tau)^{n-1}}{(n-1)!} \]
\[ D^2 + b^2 \] \[ \frac{1}{b} \sin b(t-\tau) \]
\[ D^2 - b^2 \] \[ \frac{1}{b} \sinh b(t-\tau) \]
\[ (D-a)(D-b), \; a \neq b \] \[ \frac{1}{a-b} \left[ e^{a(t-\tau)} - e^{b(t-\tau)} \right] \]
\[ (D-a)^n, \; n = 2, 3, 4, \ldots \] \[ \frac{(t-\tau)^{n-1}}{(n-1)!} e^{a(t-\tau)} \]
\[ D^2 - 2 a D + a^2 + b^2 \] \[ \frac{1}{b} e^{a(t-\tau)} \sin b (t-\tau) \]
\[ D^2 - 2 a D + a^2 - b^2 \] \[ \frac{1}{b} e^{a(t-\tau)} \sinh b (t-\tau) \]
\[ t^2 D^2 + t D - b^2 \] \[ \frac{\tau}{2 b}\left[ \left( \frac{t}{\tau} \right)^b - \left( \frac{\tau}{t} \right)^b \right] \]
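Any entry in the table can be spot-checked by verifying the defining properties of a one-sided Greens function: $$L[K] = 0$$ as a function of $$t$$, $$K(t,t) = 0$$, and $$\left. \partial_t K(t,\tau) \right|_{\tau=t} = 1$$. Here is a minimal Maxima sketch (the variable names are mine) for the $$D^2 + b^2$$ row:

/* spot check the D^2 + b^2 entry of the table */
K : sin(b*(t - tau))/b;
ratsimp(diff(K, t, 2) + b^2*K);       /* L[K] = 0            */
ratsimp(subst(tau = t, K));           /* K(t,t) = 0          */
ratsimp(subst(tau = t, diff(K, t)));  /* slope condition -> 1 */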

Imposing Causality

The second extension is a little more subtle. Allow the inhomogeneous term $$f(t)$$ to be a delta-function so that the differential equation becomes

\[ L[y] = \delta(t-a), \; \; y(t_0) = 0, \; \; y'(t_0) = 0 \; .\]

The particular solution

\[ y = \int_{t_0}^t \, d \tau \, K(t,\tau) \delta(\tau - a) = \left\{ \begin{array}{lc} 0, & t_0 \leq t < a \\ K(t,a), & t \geq a \end{array} \right. \]

now represents how the system responds to the unit impulse delivered at time $$t=a$$ by the delta-function. The discontinuous response results from the fact that the system at $$t=a$$ receives a sharp blow that changes its evolution from the unforced evolution it was following before the impulse to a new unforced evolution with new initial conditions at $$t=a$$ that reflect the influence of the impulse.

By applying a little manipulation to the right-hand side, and allowing $$t_0$$ to recede to the infinite past, the above result transforms into

\[ K^+(t,\tau) = \left\{ \begin{array}{lc} 0, & -\infty < t < \tau \\ K(t,\tau), & \tau \leq t < \infty \end{array} \right. = \theta(t-\tau) K(t,\tau) \; ,\]

which is a familiar result from Quantum Evolution – Part 3. In this derivation, we get an alternative and more mathematically rigorous way of understanding why the Heaviside theta function (or step function, if you prefer) enforces causality. The undecorated one-sided Greens function $$K(t,\tau)$$ is a mathematical object capable of evolving the system forward or backward in time with equal facility. The one-sided retarded Greens function $$K^+(t,\tau)$$ is physically meaningful because it will not evolve the influence of an applied force to a time earlier than the force was applied.

Recasting in State Space Notation

An alternative and frequently more insightful approach to solving ordinary differential equations comes in recasting the structure into state space language, in which the differential equation(s) reduce to a set of coupled first order equations of the form

\[ \frac{d}{dt} \bar S = \bar f(\bar S; t) \]

Quantum Evolution – Part 2 presents this approach applied to the simple harmonic oscillator. The propagator (or state transition matrix or fundamental matrix) of the system contains the one-sided Greens function as the upper-right portion of its structure. It is easiest to see that result by working with a second order system with linearly-independent solutions $$y_1$$ and $$y_2$$ and initial conditions $$y(t_0) = y_0$$ and $$y'(t_0) = y'_0$$. In analogy with the previous post, the initial conditions can be solved at time $$t_0$$ to yield the expression

\[ \left[ \begin{array}{c} C_1 \\ C_2 \end{array} \right] = \frac{1}{W(t_0)} \left[ \begin{array}{cc} y_2' & -y_2 \\ -y_1' & y_1 \end{array} \right]_{t_0} \left[ \begin{array}{c} y_0 \\ y_0' \end{array} \right] \equiv M_{t_0} \left[ \begin{array}{c} y_0 \\ y_0' \end{array} \right] \; , \]

where the subscript notation $$[]_{t_0}$$ means that all of the expressions in the matrix are evaluated at time $$t_0$$. Now the arbitrary solution $$y(t)$$ to the homogeneous equation is a linear combination of the independent solutions weighted by the constants just determined

\[ \left[ \begin{array}{c} y(t) \\ y'(t) \end{array} \right] = \left[ \begin{array}{cc} y_1 & y_2 \\ y_1' & y_2' \end{array} \right]_{t} \left[ \begin{array}{c} C_1 \\ C_2 \end{array} \right] \equiv \Omega_{t} \left[ \begin{array}{c} C_1 \\ C_2 \end{array} \right] \equiv \Omega_{t} M_{t_0} \left[ \begin{array}{c} y_0 \\ y_0' \end{array} \right] \; .\]

The propagator, which is formally defined as

\[ U(t,t_0) = \frac{\partial \bar S(t)}{\partial \bar S(t_0) } \; ,\]

is easily read off to be

\[ U(t,t_0) = \Omega_{t} M_{t_0} \; , \]

which, when back-substituting the forms of $$\Omega_t$$ and $$M_{t_0}$$, gives

\[ U(t,t_0) = \frac{1}{W(t_0)} \left[ \begin{array}{cc} y_1 & y_2 \\ y_1' & y_2' \end{array} \right]_{t} \left[ \begin{array}{cc} y_2' & -y_2 \\ -y_1' & y_1 \end{array} \right]_{t_0} \; .\]

In state space notation, the inhomogeneous term takes the form $$\left[ \begin{array}{c} 0 \\ f(t) \end{array} \right]$$ and so the relevant component of the matrix multiplication is the upper-right element, which is
\[ \left\{ U(t,t_0) \right\}_{1,2} = \frac{y_1(t_0) y_2(t) - y_1(t) y_2(t_0)}{W(t_0)} \; , \]

which we recognize as the one-sided Greens function. Multiplying the whole propagator by the Heaviside function enforces causality and gives the retarded, one-sided Greens function in the $$(1,2)$$ component.
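For a concrete case (my choice: the simple harmonic oscillator of Quantum Evolution – Part 2, with $$y_1 = \cos \omega_0 t$$, $$y_2 = \sin \omega_0 t$$, and $$W(t_0) = \omega_0$$), the product $$\Omega_t M_{t_0}$$ can be multiplied out in a few lines of Maxima:

/* U(t,t0) = Omega_t . M_t0 for the simple harmonic oscillator */
Om : matrix([cos(w0*t), sin(w0*t)], [-w0*sin(w0*t), w0*cos(w0*t)]);
M0 : (1/w0)*matrix([w0*cos(w0*t0), -sin(w0*t0)], [w0*sin(w0*t0), cos(w0*t0)]);
U : Om . M0;
trigreduce(U[1,2]);  /* expect sin(w0*(t - t0))/w0 */

The $$(1,2)$$ element reduces to $$\sin(\omega_0(t-t_0))/\omega_0$$, exactly the one-sided Greens function from the table above.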

Using the Fourier Transform

While all of the machinery discussed above is straightforward to apply, it does involve a lot of steps (e.g., finding the independent solutions, forming the Wronskian, forming the one-sided Greens function, applying causality, etc.). There is often a faster way to perform all of these steps using the Fourier transform. This will be illustrated for a simple one-dimensional problem (adapted from ‘Mathematical Tools for Physics’ by James Nearing) of a mass moving in a viscous fluid subjected to a time-varying force

\[ \frac{dv}{dt} + \beta v = f(t) \; ,\]

where $$\beta$$ is a constant characterizing the fluid and $$f(t)$$ is the force per unit mass.

We assume that the velocity has a Fourier transform

\[ v(t) = \int_{-\infty}^{\infty} d \omega \, V(\omega) e^{-i\omega t} \; \]

with the corresponding transform pair

\[ V(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} dt \, v(t) e^{+i\omega t} \; .\]

Likewise, the force possesses a Fourier transform

\[ f(t) = \int_{-\infty}^{\infty} d \omega \, F(\omega) e^{-i\omega t} \; .\]

Plugging the transforms into the differential equation yields the algebraic equation

\[ -i \omega V(\omega) + \beta V(\omega) = F(\omega) \; ,\]

which is easily solved for $$V(\omega)$$ and which, when substituted back in, gives the expression for the particular solution

\[ v_p(t) = i \int_{-\infty}^{\infty} d \omega \frac{F(\omega)}{\omega + i \beta} e^{-i\omega t} \; .\]

Eliminating $$F(\omega)$$ by using its transform pair, we find that

\[ v_p(t) = \frac{i}{2 \pi} \int_{-\infty}^{\infty} d\tau K(t,\tau) f(\tau) \]

with the kernel

\[ K(t,\tau) = \int_{-\infty}^{\infty} d \omega \frac{e^{-i \omega (t-\tau)}}{\omega + i \beta} \; .\]

This is exactly the form of a one-sided Greens function. Even more pleasing is the fact that when complex contour integration is used to solve the integral, we discover that causality is already built in and that what we have obtained is actually a retarded, one-sided Greens function

\[ K^+(t,\tau) = \left\{ \begin{array}{lc} 0, & t < \tau \\ -2 \pi i e^{-\beta(t-\tau)}, & t > \tau \end{array} \right. \; . \]

Causality results since the pole of the denominator lies in the lower half of the complex plane. The semi-circular contour used with Jordan's lemma must close in the upper half-plane when $$t < \tau$$, in which case no poles are enclosed and the integral vanishes. When $$t > \tau$$, the semi-circle must close in the lower half-plane, where it surrounds the pole at $$\omega = - i \beta$$, giving a non-zero residue.
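Since the pole at $$\omega = -i\beta$$ is simple, its residue is just the numerator evaluated at the pole, a step simple enough to confirm in Maxima (a sketch in my own notation):

/* residue of exp(-%i*w*(t-tau))/(w + %i*beta) at its simple pole */
res : ratsimp(subst(w = -%i*beta, exp(-%i*w*(t - tau))));
/* res -> exp(-beta*(t-tau)); the clockwise contour in the lower
   half-plane then gives K+ = -2*%pi*%i*res for t > tau */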

The final form of the particular solution is

\[ v_p(t) = \int_{-\infty}^t d \tau e^{-\beta (t-\tau)} f(\tau) \]

which is the same result we would have received from using the one-sided Greens function for the operator $$D + \beta$$ shown in the table above.

Quantum Evolution – Part 8

In the last two posts, we’ve discussed the path integral and how quantum evolution can be thought of as having contributions from every possible path in space-time such that the sum of their contributions exactly defines the quantum evolution operator $$U$$. In addition, we found that potentials in one dimension of the form $$V = a + b x + c x^2 + d \dot x + e x \dot x$$ kindly cooperate with the evaluation of the path integral. While potentials of these types do lend themselves to problems of both practical and theoretical importance, they exclude one very important class of problems – namely time-dependent potentials. Much of our modern economy is built upon time-dependent electric and magnetic fields, including the imaging sciences of photography and motion pictures, medical and magnetic resonance imaging, microwave ovens, modern electronics, and many more. In this post, I’ll be discussing the general structure for calculating how a quantum state evolves under a time-varying force. The main ingredients in the procedure are the introduction of a new picture, similar to the Schrodinger and Heisenberg pictures, and the perturbative expansion in this picture of the quantum evolution operator.

We start by assuming that the Hamiltonian can be written as
\[ H = H_0 + V(t) \; ,\]

where $$H_0$$ represents the Hamiltonian for some model problem that we can solve exactly. Usually $$H_0$$ represents the free-particle case.

Obviously, the aim is to solve the old and familiar state evolution equation
\[ i \hbar \frac{d}{dt} \left| \psi(t) \right> = H \left| \psi(t) \right> \]
to get the evolution operator that connects the state at the initial time $$t_0$$ with the state at time $$t$$
\[ \left| \psi(t) \right> = U(t,t_0) \left| \psi(t_0) \right> \; .\]
Since we haven't nailed down any of the attributes of our model Hamiltonian other than that it be exactly solvable, I can assume $$H_0 \neq H_0(t)$$. With this assumption, the evolution operator corresponding to $$H_0$$ then
becomes
\[ U_0(t,t_0) = e^{-i H_0(t - t_0)/\hbar} \; , \]
and its inverse is given by the Hermitian conjugate
\[U_0^{-1}(t,t_0) = e^{i H_0(t - t_0)/\hbar} \; .\]

The next step is to introduce a new state $$\left| \lambda (t) \right>$$ defined through the relation
\[ \left| \psi(t) \right> = U_0(t,t_0) \left| \lambda(t) \right> \; .\]
An obvious consequence of the above relation is the boundary condition
\[ \left| \psi(t_0) \right> = \left| \lambda(t_0) \right> \]
when $$t = t_0$$. This relation will be useful later.

By introducing this state, we’ve effectively introduced a new picture in which the state kets are defined with respect to a frame that ‘rotates’ in step with the evolution caused by $$H_0$$. This picture is called the Interaction or Dirac picture.

The evolution of this state obeys
\[ \frac{d}{dt} \left| \lambda(t) \right> = \frac{i}{\hbar} H_0 e^{i H_0(t - t_0)/\hbar} \left| \psi(t) \right> + e^{i H_0(t - t_0)/\hbar} \frac{d}{dt} \left| \psi(t) \right> \; , \]
which, when substituting the right-hand side of the time evolution of $$\left| \psi(t) \right>$$, simplifies to
\[ \frac{d}{dt} \left| \lambda(t) \right> = \frac{1}{i\hbar} e^{i H_0(t - t_0)/\hbar} \left[H - H_0\right] \left| \psi(t) \right> \; .\]
The difference between the total and model Hamiltonians is just the time-varying potential and
\[ i \hbar \frac{d}{dt} \left| \lambda(t) \right> = e^{i H_0(t - t_0)/\hbar} V(t) e^{-i H_0(t - t_0)/\hbar} \left| \lambda(t) \right> \equiv V_I(t) \left| \lambda(t) \right> \; , \]
where $$V_I(t) = U_0(t_0,t) V(t) U_0(t,t_0)$$. The ‘I’ subscript indicates that the potential is now specified in the interaction picture. The time evolution of the state $$\left| \lambda(t) \right>$$ leads immediately to the equation of motion
\[ i \hbar \frac{d}{dt} U_I(t,t_0) = V_I(t) U_I(t,t_0) \]
for the evolution operator $$U_I$$ in the interaction picture. The fact that $$U_I$$ evolves only under the action of $$V_I$$ justifies the name ‘interaction picture’.
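To see the sandwiching at work, consider a hypothetical two-level system (my own example, with levels $$E_1$$ and $$E_2$$, a static off-diagonal coupling $$v$$, and $$t_0 = 0$$); a Maxima sketch shows what the forward and backward propagation does to the potential:

/* V_I = U0(t0,t) V U0(t,t0) for a two-level system, t0 = 0 */
U0(t) := matrix([exp(-%i*E1*t/hbar), 0], [0, exp(-%i*E2*t/hbar)]);
V : matrix([0, v], [v, 0]);
VI : ratsimp(U0(-t) . V . U0(t));
/* off-diagonal entries oscillate as v*exp(%i*(E1 - E2)*t/hbar)
   and its conjugate */

The off-diagonal entries of $$V_I$$ oscillate at the Bohr frequency $$(E_2 - E_1)/\hbar$$, which is the sense in which the interaction-picture frame 'rotates' in step with $$H_0$$.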

What are we to make of the forward and backward propagation in this definition of $$V_I$$? A meaningful interpretation can be made by mining $$U_I$$'s equation of motion as follows.

The formal solution of the equation of motion is
\[ U_I(t,t_0) = Id - \frac{i}{\hbar} \int_{t_0}^t V_I(t') U_I(t',t_0) dt' \]
but the time dependence of $$V_I$$ means that the iterated solution
\[ U_I(t,t_0) = Id + \sum_{n=1}^{\infty} \left( \frac{-i}{\hbar} \right)^n \int_{t_0}^{t} dt_1 \, \int_{t_0}^{t_1} dt_2 \, \cdots \\ \int_{t_0}^{t_{n-1}} dt_n \, V_I(t_1) V_I(t_2) \cdots V_I(t_n) \]
from case 3 in Part 1 is the only one available.
To understand what’s happening physically, let’s keep terms in this solution only up to $$n=1$$. Doing so yields
\[ U_I(t,t_0) = Id -\frac{i}{\hbar} \int_{t_0}^t \, dt_1 V_I(t_1) \]
or, expanding $$V_I$$ by its definition,
\[ U_I(t,t_0) = Id - \frac{i}{\hbar} \int_{t_0}^t \, dt_1 U_0(t_0,t_1) V(t_1) U_0(t_1,t_0) \; . \]

From the relationships between $$\left| \psi \right>$$ and $$\left| \lambda \right>$$ we have
\[ \left| \psi(t) \right> = U_0(t,t_0) \left| \lambda(t) \right> = U_0(t,t_0) U_I(t,t_0) \left| \lambda(t_0)\right> \\ = U_0(t,t_0) U_I(t,t_0) \left| \psi(t_0) \right> \]
from which we conclude

\[ U(t,t_0) = U_0(t,t_0) U_I(t,t_0) \; .\]

Pre-multiplying by the model Hamiltonian’s evolution operator $$U_0$$ gives
\[ U(t,t_0) = U_0(t,t_0) - \frac{i}{\hbar} \int_{t_0}^t \, dt_1 \left( U_0(t,t_0) U_0(t_0,t_1) V(t_1) U_0(t_1,t_0) \right) \; , \]
which simplifies using the composition property of the evolution operators as
\[ U(t,t_0) = U_0(t,t_0) - \frac{i}{\hbar} \int_{t_0}^t \, dt_1 U_0(t,t_1) V(t_1) U_0(t_1,t_0) \; .\]
This first-order form for the full evolution operator suggests that its action on a state can be thought of as being composed of two parts. The first part corresponds to the evolution of the state under the action of the model Hamiltonian over the entire time span from $$t_0$$ to $$t$$. The second part corresponds to the evolution of the state by $$U_0$$ from $$t_0$$ to $$t_1$$, at which point the state's motion is perturbed by $$V(t_1)$$, and then the state merrily goes on its way under $$U_0$$ from $$t_1$$ to $$t$$. In order to get the correct answer to first order, all intermediate times at which this perturbative interaction can occur must be included. A visual way of representing this description is given by the following figure

[figure: first_order_evolution]

where the thick double line represents the full evolution operator $$U(t,t_0)$$, the thin single line represents the evolution operator $$U_0$$ and the circles represent the interaction with the potential $$V(t)$$ that can happen at any intermediate time. This interpretation can be carried out to any order in the expansion, with two interaction events (two circles) for $$n=2$$, three interaction events (three circles) for $$n=3$$, and so on.

The formal solution of $$U_I$$ can also be manipulated in the same fashion by pre-multiplying by $$U_0$$ to get
\[ U(t,t_0) = U_0(t,t_0) \\ - \frac{i}{\hbar} \int_{t_0}^t \, dt' U_0(t,t_0) U_0(t_0,t') V(t') \; \; U_0(t',t_0) U_I(t',t_0) \]
which simplifies to
\[ U(t,t_0) = U_0(t,t_0) - \frac{i}{\hbar} \int_{t_0}^t \, dt' U_0(t,t') V(t') U(t',t_0) \; . \]
Projecting this equation onto the position basis using $$\left< \vec r \right|$$, $$\left| \vec r_0 \right>$$ and the closure relation $$\int d^3r' \left| \vec r' \right>\left< \vec r'\right|$$ for all intermediate positions gives a relationship for the forward-time propagator (Greens function) of
\[ K^+(\vec r, t; \vec r_0, t_0) = K^+_0(\vec r, t; \vec r_0, t_0) - \frac{i}{\hbar} \int_{t_0}^{t} \, dt' \int \, d^3r' \\ K^+_0(\vec r, t; \vec r', t') V(\vec r', t') \; \; K^+(\vec r', t'; \vec r_0, t_0) \; \]
(compare, e.g., with equation (36.18) in Schiff). This type of analysis leads to the famous Feynman diagrams.

Quantum Evolution – Part 7

In the last post, I presented a plausibility argument for the Feynman path integral. Central to this argument is the identification of a relation between the transition amplitude $$\left< \vec r, t \right. \left| \vec r_0, t_0 \right>$$ and the classical action given by
\[ \left< \vec r, t \right. \left| \vec r_0, t_0\right> \sim N e^{\frac{i S}{\hbar}} \;. \]
However, because of the composition property of the quantum propagator, we were forced into evaluating the action not just for the classical path but also for all possible paths that obey causality (i.e., are time ordered).

In this post I will evaluate the closed form for the free particle propagator and will show how to get the same result from the path integral. Along the way, it will also be noted that the same result obtains when only the classical path and action are used. This strange property holds for a variety of systems more complex than the free particle as is proven in Shankar’s book. My presentation here follows his discussion in Chapter 8.

To truly appreciate the pros and cons of working with the path integral, let’s start by first deriving the quantum propagator for the free particle using a momentum space representation. To keep the computations clearer, I will work only in one dimension. The Hamiltonian for a free particle is given in the momentum representation by
\[ H = \frac{p^2}{2m} \]
and in the position representation by
\[ H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \; .\]
Since the Hamiltonian is purely a function of momentum and is an algebraic function in the momentum representation, it is easier to work with than if it were expressed in the position representation.

Because the momentum operator $$\hat P$$ commutes with the Hamiltonian, the momentum eigenkets form a natural diagonal basis for the Hamiltonian, with the energy given by
\[ E(p) = p^2/2m \; . \]
This basis is clearly doubly degenerate: for each given value of the energy, a right-going momentum $$p_R = +\sqrt{2mE}$$ and a left-going momentum $$p_L = -\sqrt{2mE}$$ have the same energy.

The Schrodinger equation in the momentum representation is
\[ \frac{p^2}{2m} \psi(p,t) = i \hbar \partial_t \psi(p,t) \; , \]
which has easily-obtained solutions
\[ \psi(p,t) = e^{-\frac{i}{\hbar} \frac{p^2}{2m} (t-t_0)} \; .\]
The quantum evolution operator can now be expressed in the momentum basis as
\[ U(t_f,t_0) = \int_{-\infty}^{\infty} dp \, \left| p \right> \left< p \right| e^{-\frac{i}{\hbar} \frac{p^2}{2m} (t_f-t_0)} \; . \]

By sandwiching the evolution operator between two position eigenkets

\[ K^+(x_f,t_f;x_0,t_0) = \left< x_f \right| U(t_f,t_0) \left| x_0 \right> \theta(t_f-t_0) \; , \]
we arrive at the expression for the forward-time propagator
\[ K^+(x_f,t_f;x_0,t_0) = \int_{-\infty}^{\infty} dp \, \left< x_f \right. \left| p \right> \left< p \right. \left| x_0 \right> e^{-\frac{i}{\hbar} \frac{p^2}{2m} (t_f-t_0)} \theta(t_f-t_0)\; .\]
In the remaining computations, it will be understood that $$t_f > t_0$$, and so I will drop the explicit reference to the Heaviside function. This equation can be evaluated by using

\[ \left< p | x \right> = \frac{1}{\sqrt{2 \pi \hbar}} e^{-\frac{ipx}{\hbar} } \]

to give
\[ K^+(x_f,t_f;x_0,t_0) = \frac{1}{2 \pi \hbar} \int_{-\infty}^{\infty} dp \, e^{\frac{i}{\hbar}p(x_f-x_0)} e^{-\frac{i}{\hbar} \frac{p^2}{2m} (t_f-t_0)}\; .\]

The integral for the propagator can be conveniently written as
\[ K^+(x_f,t_f;x_0,t_0) = \frac{1}{2 \pi \hbar} \int_{-\infty}^{\infty} dp \, e^{-a p^2 + b p} \; , \]
where
\[ a = \frac{i (t_f - t_0)}{2 m \hbar} \]

and

\[ b = \frac{i(x_f - x_0)}{\hbar} \; .\]

Using the standard Gaussian integral
\[ \int_{-\infty}^{\infty} dx \, e^{-ax^2+bx} = e^{b^2/4a} \sqrt{\frac{\pi}{a}} \; ,\]
we arrive at the exact answer for the free-particle, forward-time quantum propagator
\[ K^+(x_f,t_f;x_0,t_0) = \sqrt{ \frac{m}{2\pi i\hbar(t_f-t_0)} } e^{\frac{i m}{2\hbar}\frac{ (x_f-x_0)^2}{(t_f-t_0)}} \; .\]
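It is a useful sanity check (a sketch of my own, with $$x$$ standing for $$x_f - x_0$$ and $$t$$ for $$t_f - t_0$$) to confirm in Maxima that this propagator satisfies the free-particle Schrodinger equation:

/* the free propagator obeys i*hbar dK/dt = -(hbar^2/2m) d2K/dx2 */
K : sqrt(m/(2*%pi*%i*hbar*t)) * exp(%i*m*x^2/(2*hbar*t));
ratsimp(%i*hbar*diff(K, t) + hbar^2/(2*m)*diff(K, x, 2));  /* -> 0 */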

Now we turn to performing the same computation using the path integral approach. The first step is to express the classical action
as a function of the path. The Lagrangian only consists of the kinetic energy and so
\[ S = \int_{t_0}^{t_f} L[x(t)] dt = \int_{t_0}^{t_f} \frac{1}{2} m {\dot x}^2 \, dt \; .\]
The basic idea of the path integral is to look at the quantum evolution across many small time steps so that each step can be handled more easily. In keeping with this idea, the action integral can be approximated as a sum by the expression
\[ S = \sum_{i=0}^{N-1} \frac{m}{2} \left( \frac{x_{i+1} - x_{i}}{\epsilon} \right)^2 \epsilon \; ,\]
where $$\epsilon$$ is the time step. The forward-time propagator is now written as
\[ K^+(x_f,t_f;x_0,t_0) = \lim_{N\rightarrow \infty, \epsilon \rightarrow 0} Q \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dx_2 \cdots \\ \exp \left[ \frac{i m}{2 \hbar} \sum_{i=0}^{N-1} \frac{(x_{i+1} - x_{i})^2}{\epsilon} \right] \; ,\]
where $$Q$$ is a normalization that will have to be determined at the end. The form of the action gives us hope that these integrals can be evaluated, since the term $$x_{i+1}-x_i$$ connects the positions on only two time slices. For notational convenience we’ll define an intermediate set of integrals
\[ I = \lim_{N\rightarrow\infty} Q \int_{-\infty}^{\infty} dx_1 \cdots dx_{N-1} \\ \exp \left[ i q \left( (x_N-x_{N-1})^2 + \cdots + (x_2 - x_1)^2 + (x_1 - x_0)^2 \right) \right] \; \]
with $$q = \frac{m}{2 \hbar \epsilon}$$.

To start, let’s work on the $$x_1$$ integral. Since it only involves $$x_0$$ and $$x_2$$ this amounts to solving
\[ I_1 = \int_{-\infty}^{\infty} dx_1 \exp \left\{ i q \left[ 2 x_1^2 - 2(x_2 + x_0) x_1 + (x_2^2 + x_0^2) \right] \right\} \; ,\]
which can be done using essentially the same Gaussian integral as above
\[ \int_{-\infty}^{\infty} dx \, e^{-ax^2 + bx + c} = e^{b^2/4a + c} \sqrt{\frac{\pi}{a}} \; .\]
This results in
\[ I_1 = \frac{1}{\sqrt{2}} \left( \frac{i\pi}{q} \right)^{1/2} e^{i q \frac{(x_2-x_0)^2}{2}} \; .\]
Now the next integral to solve is
\[ I_2 = \int_{-\infty}^{\infty} dx_2 \exp \left\{ i q \left[ (x_3-x_2)^2 + (x_2-x_0)^2/2 \right] \right\} \; .\]
Rather than go through this in detail, I wrote some Maxima code to carry these integrals out

calc_int(integrand,ivar) := block([f,a,b,c],
                                  /* write the integrand as -a*ivar^2 + b*ivar + c */
                                  f : integrand,
                                  a : coeff(expand(f),ivar^2),
                                  b : coeff(expand(f),ivar),
                                  c : ratsimp(f-a*ivar^2-b*ivar),
                                  a : -1*a,
                                  /* Gaussian result: sqrt(%pi/a)*exp(b^2/(4*a) + c) */
                                  sqrt(%pi/a)*exp(factor(ratsimp(b^2/(4*a) + c))));
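For instance, a hypothetical call on the $$I_1$$ integrand above reproduces the result already quoted:

/* the I_1 integral, done with calc_int */
I1 : calc_int(%i*q*(2*x1^2 - 2*(x2 + x0)*x1 + x2^2 + x0^2), x1);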

The results up through $$I_4$$ are
\[ I_2 = \frac{1}{\sqrt{3}} \left( \frac{i\pi}{q} \right)^{2/2} e^{i q \frac{(x_3-x_0)^2}{3}} \; , \]
\[ I_3 = \frac{1}{\sqrt{4}} \left( \frac{i\pi}{q} \right)^{3/2} e^{i q \frac{(x_4-x_0)^2}{4}} \; ,\]
and
\[ I_4 = \frac{1}{\sqrt{5}} \left( \frac{i\pi}{q} \right)^{4/2} e^{i q \frac{(x_5-x_0)^2}{5}} \; \]
yielding the result for $$N$$
\[ I = Q \frac{1}{\sqrt{N}} \left( \frac{i\pi}{q} \right)^{\frac{N-1}{2}} e^{i q \frac{(x_N-x_0)^2}{N}} \; .\]
With all the factors in $$q$$ now fully restored, we get
\[ I = \frac{Q}{\sqrt{N}} \left( \frac{2 \pi i \hbar \epsilon}{m} \right)^{\frac{N-1}{2}} e^{\frac{i m (x_N-x_0)^2}{2 \hbar N \epsilon}} \; .\]
Setting
\[ Q = \left( \frac{m}{2 \pi i \hbar \epsilon} \right)^{\frac{N}{2}} \]
gives
\[ I = \left( \frac{m}{2 \pi i \hbar N \epsilon} \right)^{1/2} e^{\frac{i m (x_N-x_0)^2}{2 \hbar N \epsilon}} \; .\]
Taking the limit as $$N \rightarrow \infty$$, $$\epsilon \rightarrow 0$$, and $$N \epsilon = (t_f – t_0)$$ yields
\[ K^+(x_f,t_f;x_0,t_0) = \sqrt{ \frac{m}{2\pi\hbar i (t_f-t_0)} } e^{i m (x_f-x_0)^2/2\hbar (t_f-t_0)} \; ,\]
which is the exact answer that was obtained above.

While this was a lot more work than the momentum-representation path, it is interesting to note that Shankar proves that
any potential with the form
\[ V = a + bx + c x^2 + d \dot x + e x \dot x \]
yields immediately the forward-time propagator
\[K^+(x_f,t_f;x_0,t_0) = e^{i S_{cl}/\hbar} Q(t) \]
where $$S_{cl}$$ is the classical action and $$Q(t)$$ is a function solely of time that is not determined by the classical action alone. Shankar shows, in the case of a free particle, that
\[ S_{cl} = \int_{t_0}^{t_f} L[x(t)] dt = \frac{m v_{av}^2}{2} (t_f-t_0) = \frac{m}{2} \frac{(x_f-x_0)^2}{t_f-t_0} \]
yielding
\[ K^+(x_f,t_f;x_0,t_0) = Q(t) \exp \left[ \frac{i m (x_f - x_0)^2}{2 \hbar (t_f - t_0)} \right] \; , \]
where $$Q(t)$$ can be determined from the requirement that $$K^+(x_f,t_f;x_0,t_0) = \delta(x_f - x_0)$$ when $$t_f = t_0$$. The applicable identity is
\[ \delta(x_f - x_0) = \lim_{\sigma \rightarrow 0} \frac{1}{\sqrt{\pi \sigma^2}} \exp \left[ -\frac{(x_f - x_0)^2}{\sigma^2} \right] \]
from which we immediately get
\[ Q(t) = \sqrt{ \frac{m}{2\pi \hbar i (t_f-t_0)}} \; .\]
These results mean that a host of practical problems that have intractable propagators in any given representation (due to the need to find the eigenfunctions and then plug them into a power series representing the exponential) can now be calculated with relative ease.

Quantum Evolution – Part 4

This post takes a small detour from the main thread of the previous posts to make a quick exploration of the classical applications of the Greens function.

In the previous posts, the basic principles of quantum evolution have resulted in the development of the propagator and corresponding Greens function as a prelude to moving into the Feynman spacetime picture and its applications to quantum scattering and quantum field theory. Despite all of the bra-ket notation and the presence of $$\hbar$$, there has actually been very little presented that was peculiarly quantum mechanical, except for the interpretation of the quantum propagator as a probability transition amplitude. Most of the machinery developed is applicable to linear systems regardless of their origins.

Here we are going to use that machinery to explore how knowledge of the propagator allows for the solution of an inhomogeneous linear differential equation. While an inhomogeneity doesn't commonly show up in quantum mechanics, performing this study will be helpful in several ways. First, it is always illuminating to compare applications of the same mathematical techniques in quantum and classical settings. Doing so helps to sharpen the distinctions between the two, but also helps to point out the common areas where insight into one domain may be more easily obtained than in the other. Second, the term Greens function is used widely in many different but related contexts, so having some knowledge highlighting the basic applications is useful in being able to work through the existing literature.

Let's start with a generic linear, homogeneous differential equation

\[ \frac{d}{dt} \left| \psi(t) \right> = H \left| \psi(t) \right> \; ,\]

where $$\left| \psi(t) \right>$$ is simply a state of some sort in either a finite- or infinite-dimensional system, and $$H$$ is some linear operator. Let the solutions of this equation, by the methods discussed in the last three posts, be denoted by $$\left| \phi(t) \right>_h$$ where the $$h$$ subscript means ‘homogeneous’.

Now suppose the actual differential equation that we want to solve involves an inhomogeneous term $$\left|u(t)\right>$$ that is not related to the state itself.

\[ \left( \frac{d}{dt} - H \right) \left| \psi(t) \right> = \left| u(t) \right> \; .\]

Such a term can be regarded as an outside driving force. How, then, do we solve this equation?

Recall that the homogeneous solution at some earlier time $$\left| \phi(t_0) \right>_h$$ evolves into a later time according to

\[ \left| \phi(t) \right>_h = \Phi(t,t_0) \left| \phi(t_0) \right>_h \; , \]

where the linear operator $$\Phi(t,t_0)$$ is called the propagator. Now the general solution of the inhomogeneous equation can be written in terms of these objects as

\[ \left| \psi(t) \right> = \left| \phi(t) \right>_h + \int_{t_0}^t dt' \Phi(t,t') \left| u(t') \right> \; .\]

To demonstrate that this is true, apply the operator

\[ L \equiv \frac{d}{dt} - H(t) \]

to both sides. (Note that any time dependence for the operator $$H(t)$$ has been explicitly restored for reasons that will become obvious below.) Since $$\left| \phi(t)\right>_h$$ is a homogeneous solution,

\[ L \left| \phi(t) \right>_h = 0 \]

and we are left with

\[ L \left| \psi(t) \right> = L \int_{t_0}^t dt' \Phi(t,t') \left| u(t') \right> \; .\]

Now expand the operator on the right-hand side, bring the operator $$H(t)$$ into the integral over $$t'$$, and use the Leibniz rule to resolve the action of the time derivative on the limits of integration. Doing this gives

\[ L \left| \psi(t) \right> = \Phi(t,t) \left| u(t) \right> + \int_{t_0}^t dt' \frac{d}{dt} \Phi(t,t') \left| u(t') \right> \\ - \int_{t_0}^t dt' H(t) \Phi(t,t') \left| u(t') \right> \; .\]

Now recognize that $$\Phi(t,t) = Id$$ and that

\[ \frac{d}{dt} \Phi(t,t') = H(t) \Phi(t,t') \]

since $$\Phi(t,t')$$ is the propagator for the homogeneous equation. Substituting these relations back in simplifies the equation to

\[ L \left| \psi(t) \right> = \left| u(t) \right> + \int_{t_0}^t dt' H(t) \Phi(t,t') \left| u(t') \right> \\ - \int_{t_0}^t dt' H(t) \Phi(t,t') \left| u(t') \right> \; .\]

The last two terms cancel and, at last, we arrive at

\[ \left( \frac{d}{dt} - H \right) \left| \psi(t) \right> = \left| u(t) \right> \; , \]

which completes the proof.

It is instructive to actually carry this process out for a driven simple harmonic oscillator. In this case, the usual second-order form is given by

\[ \frac{d^2}{dt^2} x(t) + \omega_0^2 x(t) = F(t) \]

and the corresponding state-space form is

\[ \frac{d}{dt} \left[ \begin{array}{c} x \\ v \end{array} \right] = \left[ \begin{array}{cc} 0 & 1 \\ -\omega_0^2 & 0 \end{array} \right] \left[ \begin{array}{c} x \\ v \end{array} \right] + \left[ \begin{array}{c} 0 \\ F(t) \end{array}\right] \; ,\]

from which we identify

\[ H = \left[ \begin{array}{cc} 0 & 1 \\ -\omega_0^2 & 0 \end{array} \right] \; \]

and

\[ \left| u(t) \right> = \left[ \begin{array}{c} 0 \\ F(t) \end{array} \right] \; .\]

The propagator is given by

\[ \Phi(t,t') = \left[ \begin{array}{cc} \cos(\omega_0 (t-t')) & \frac{1}{\omega_0} \sin(\omega_0 (t-t')) \\ -\omega_0 \sin(\omega_0 (t-t')) & \cos(\omega_0 (t-t')) \end{array} \right] \; , \]

and the driving integral becomes

\[ \int_0^t dt' \left[ \begin{array}{c} \frac{1}{\omega_0} \sin\left( \omega_0 (t-t') \right) F(t') \\ \cos\left( \omega_0 (t-t') \right) F(t') \end{array} \right] \; ,\]

where $$t_0$$ has been set to zero for convenience.

The general solution for the position of the oscillator can then be read off from the first component as

\[ x(t) = x_h(t) + \int_0^t dt' \frac{1}{\omega_0} \sin\left( \omega_0 (t-t') \right) F(t') \; . \]

This is essentially the form of the general solution, and it is the same as the one that results from the Greens function approach discussed in many classical mechanics texts (e.g., page 140 of Classical Dynamics of Particles and Systems, Second Edition, Marion). The only difference between the treatment here and a more careful treatment is the inclusion of a Heaviside function to enforce causality. Since this was discussed in detail in the last post and will also be covered in future posts, that detail was suppressed here for clarity.
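As a concrete check of the driving integral (with an assumed constant force $$F(t) = F_0$$ switched on at $$t = 0$$), Maxima recovers the expected result in one line:

/* step response of the driven oscillator, F(t) = F0 for t > 0 */
xp : integrate(sin(w0*(t - tp))*F0/w0, tp, 0, t);
ratsimp(xp);  /* expect F0*(1 - cos(w0*t))/w0^2 */

which is the familiar step response of an undamped oscillator.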

Quantum Evolution – Part 3

In the last post, the key equation for the quantum state propagation was derived to be

\[ \psi(\vec r_2, t_2) = \int d^3r_1 K(\vec r_2, t_2; \vec r_1, t_1) \psi(\vec r_1, t_1) \]

subject to the boundary condition on the propagator that

\[ \lim_{t_2 \rightarrow t_1} K(\vec r_2, t_2; \vec r_1, t_1) = \left< \vec r_2 \right| U(t_1,t_1) \left| \vec r_1 \right> = \left< \vec r_2 \right. \left| \vec r_1 \right> = \delta(\vec r_2 - \vec r_1) \; . \]

A comparison was also made to the classical mechanics system of the simple harmonic oscillator and an analogy between the propagator and the state transition matrix was demonstrated, where the integral over position in the quantum case served the same function as the sum over state variables in the classical mechanics case (i.e., $$\int d^3r_1$$ corresponds to $$\sum_i$$).

The propagator and the state transition equations also share the common trait that, being deterministic, states at later times can be back-propagated to earlier times as easily as can be done for the reverse. While mathematically sound, this property doesn’t reflect reality, and we would like to be able to restrict our equations such that only future states can be determined from earlier ones. In other words, we want to enforce causality.

This condition can be met with a trivial modification to the propagator equation. By multiplying each side by the unit step function

\[ \theta(t_2 - t_1) = \left\{ \begin{array}{ll} 0 & t_2 < t_1 \\ 1 & t_2 \geq t_1 \end{array} \right. \]

the quantum state propagation equation becomes

\[ \psi(\vec r_2,t_2) \theta(t_2 - t_1) = \int d^3r_1 K^+(\vec r_2, t_2; \vec r_1, t_1) \psi(\vec r_1, t_1) \; ,\]

where the object

\[K^+(2,1) \equiv K^+(\vec r_2, t_2; \vec r_1, t_1) = K(\vec r_2, t_2; \vec r_1, t_1) \theta(t_2 - t_1)\]

is called the retarded propagator, which is subject to an analogous boundary condition

\[ \lim_{t_2 \rightarrow t_1} K^+(\vec r_2, t_2; \vec r_1, t_1) = \theta(t_2 - t_1) \delta(\vec r_2 - \vec r_1) \; .\]

With this identification, it is fairly easy to prove, although perhaps not so easy to see, that $$K^+(2,1)$$ is a Greens function.

The proof starts by first defining the linear, differential operator
\[ L \equiv -\frac{\hbar^2}{2m} \nabla_{\vec r_2}^2 + V(\vec r_2) - i \hbar \frac{\partial}{\partial t_2} \; .\]

Schrodinger’s equation is then written compactly as
\[ L \psi(\vec r_2, t_2) = 0 \; . \]

Since the quantum propagator obeys the same differential equation as the wave function itself, then

\[ L K(\vec r_2, t_2; \vec r_1, t_1) = 0 \; ,\]

as well.

The final piece is to find out what happens when $$L$$ is applied to $$K^+$$. Before working it out, note that the step function carries no position dependence, so the only piece of $$L$$ that acts non-trivially on it is the time derivative:
\[ -i \hbar \frac{\partial}{\partial t_2} \theta (t_2 - t_1) = -i \hbar \delta(t_2 - t_1) \; .\]

Now it is easy to apply $$L$$ to $$K^+(2,1)$$: the spatial derivatives and the potential pass straight through $$\theta$$ and act only on $$K$$, while the time derivative obeys the product rule

\[ L K^+(2,1) = L \left[ \theta(t_2 - t_1) K(2,1) \right] \\ = -i\hbar \left[ \frac{\partial}{\partial t_2} \theta(t_2 - t_1) \right] K(2,1) + \theta(t_2 - t_1) \left[ L K(2,1) \right] \; .\]

The first term on the right-hand side is $$-i \hbar K(2,1) \delta(t_2 - t_1)$$ and the last term is identically zero. Substituting these results back in yields

\[ L K^+(2,1) = -i \hbar K(2,1) \delta(t_2 - t_1) \; .\]

For $$K^+(2,1)$$ to be a Greens function for the operator $$L$$, the right-hand side should be a product of delta-functions, but the above equation still has a $$K(2,1)$$ term, which seems to spoil the proof. However, appearances can be deceiving, and using the boundary condition on $$K(2,1)$$ we can conclude that

\[ K(\vec r_2, t_2; \vec r_1, t_1) \delta(t_2 - t_1)  \\ = K(\vec r_2, t_1; \vec r_1, t_1) \delta(t_2 - t_1) = \delta(\vec r_2 - \vec r_1) \delta(t_2 - t_1) \; .\]

Substituting this relation back in yields


\[ \left( -\frac{\hbar^2}{2m} \nabla_{\vec r_2}^2 + V(\vec r_2) - i \hbar \frac{\partial}{\partial t_2} \right) K^+(\vec r_2, t_2; \vec r_1, t_1 ) \\ = - i \hbar \delta(\vec r_2 - \vec r_1 ) \delta(t_2 - t_1) \; ,\]

which completes the proof.

At this point, the reader is no doubt asking, “who cares?”. To answer that question, recall that the only purpose for a Greens function is to allow for the inclusion of an inhomogeneous term in the differential equation. Generally, the Schrodinger equation doesn’t have physically realistic scenarios where a driving force can be placed on the right-hand side. That said, it is very common to break the $$L$$ operator up and move the piece containing the potential $$V(\vec r_2) \psi(\vec r_2,t_2)$$ to the right-hand side. This creates an effective driving term, and the Greens function that is used is associated with the reduced operator.

To make this more concrete, and to whet the appetite for future posts, consider the Schrodinger equation written in the following suggestive form

\[ \left( i \hbar \frac{\partial}{\partial t} - H_0 \right) \left| \psi(t) \right> = V \left| \psi(t) \right> \; ,\]

where $$V$$ is the potential and $$H_0$$ is some Hamiltonian whose spectrum is exactly known; usually it is the free particle Hamiltonian given by

\[ H_0 = - \frac{\hbar^2}{2 m} \nabla^2 \;. \]

The strategy is then to find a Greens function for the left-hand side such that if $$L_0 \equiv i \hbar \partial_t - H_0$$ then the solution of the full Schrodinger equation can be written symbolically as

\[ \left| \psi(t) \right> = L_0^{-1} V \left| \psi(t) \right> + \left| \phi(t) \right> \; , \]

where $$\left| \phi(t) \right>$$ is a solution to $$L_0 \left| \phi(t) \right> = 0$$, since applying $$L_0$$ to both sides yields

\[ L_0 \left| \psi(t) \right> = L_0 \left[ L_0^{-1} V \left| \psi(t) \right> + \left| \phi(t) \right> \right] \\ = L_0 L_0^{-1} V \left| \psi(t) \right> + L_0 \left| \phi(t) \right> = V \left| \psi(t) \right> \; .\]

This type of symbolic manipulation, with the appropriate interpretation of the operator $$L_0^{-1}$$, results in the Lippmann-Schwinger equation used in scattering theory.

Quantum Evolution – Part 2

Given the general relationships for quantum time evolution in Part 1 of these posts, it is natural to ask about how to express these relationships in a basis that is more suited for computation and physical understanding. That can be done by taking the general relationship for time development

\[ \left| \psi (t_2) \right> = U(t_2, t_1) \left| \psi (t_1) \right> \]

and then projecting this relation into the position basis $$\left| \vec r \right>$$ with the definition that the traditional Schrodinger wave function is given by

\[ \left< \vec r | \psi (t) \right> = \psi(\vec r, t) \; .\]

The rest of the computation proceeds by a strategic placement of the closure relation for the identity operator, $$Id$$,

\[ Id = \int d^3 r_1 \left| \vec r_1 \right>\left< \vec r_1 \right| \]

in the position basis, between $$U(t_2,t_1)$$ and $$\left| \psi(t_1) \right>$$ when $$U(t_2,t_1) \left| \psi(t_1) \right>$$ is substituted for $$\left| \psi(t_2) \right>$$

\[ \left< \vec r_2 | \psi(t_2) \right> = \left< \vec r_2 \right| U(t_2,t_1) \left| \psi(t_1) \right> = \\ \int d^3r_1 \left<\vec r_2 \right| U(t_2,t_1) \left| \vec r_1 \right> \left< \vec r_1 \right| \left. \psi(t_1) \right> \; .\]

Recognizing the form of the Schrodinger wave function in both the left- and right-hand sides, the equation becomes

\[ \psi(\vec r_2, t_2) = \int d^3r_1 \left<\vec r_2 \right| U(t_2,t_1) \left| \vec r_1 \right> \psi(\vec r_1, t_1) \; .\]

If the matrix element of the evolution operator between $$\vec r_2$$ and $$\vec r_1$$ is defined as

\[ \left<\vec r_2 \right| U(t_2,t_1) \left| \vec r_1 \right> \equiv K(\vec r_2, t_2; \vec r_1, t_1) \; , \]

then the structure of the equation is now


\[ \psi(\vec r_2, t_2) = \int d^3r_1 K(\vec r_2, t_2; \vec r_1, t_1) \psi(\vec r_1, t_1) \; .\]

What meaning can be attached to this equation, which, for convenience, will be referred to as the boxed equation? Well it turns out that the usual textbooks on Quantum Mechanics are not particularly illuminating on this front. For example, Cohen-Tannoudji et al, usually very good in their pedagogy, have a presentation in Complement $$J_{III}$$ that jumps immediately from the boxed equation to the idea that $$K(\vec r_2, t_2; \vec r_1, t_1)$$ is a Greens function. While this idea is extremely important, it would be worthwhile to slow down the development and discuss the interpretation of the boxed equation both mathematically and physically.

Let’s start with the mathematical aspects. The easiest way to understand the meaning of the boxed equation is to start with a familiar example from classical mechanics – the simple harmonic oscillator.

The differential equation for the position, $$x(t)$$, of the simple harmonic oscillator is given by

\[ \frac{d^2}{dt^2} x(t) + \omega^2_0 x(t) = 0 \; ,\]

where $$\omega^2_0 = k/m$$ and where $$k$$ and $$m$$ are the spring constant and mass of the oscillator. The general solution of this equation is the well-known form

\[ x(t) = x_0 \cos(\omega_0 (t-t_0)) + \frac{v_0}{\omega_0} \sin(\omega_0 (t-t_0)) \, \]

with $$x_0$$ and $$v_0$$ being the initial position and velocity at $$t_0$$, respectively. To translate this system into a more ‘quantum’ form, the second-order differential equation needs to be translated into state-space form, where the state, $$\bar S$$, captures the dynamical variables (here the position and velocity)

\[ \bar S = \left[ \begin{array}{c} x \\ v \end{array} \right] \; ,\]

(the time dependence is understood) and the corresponding differential equation is written in the form

\[ \frac{d}{dt} {\bar S} = {\bar f}\left( \bar S,t\right) \; .\]

For the simple harmonic oscillator, the state-space form is explicitly

\[ \frac{d}{dt} \left[ \begin{array}{c} x \\ v\end{array} \right] = \left[ \begin{array}{cc} 0 & 1 \\ -\omega^2_0 & 0 \end{array} \right] \left[ \begin{array}{c} x \\ v\end{array} \right] \; , \]

with solutions of the form

\[ \left[ \begin{array}{c} x \\ v\end{array} \right] = \left[ \begin{array}{cc} \cos(\omega_0 (t-t_0)) & \frac{1}{\omega_0} \sin(\omega_0 (t-t_0)) \\ -\omega_0 \sin(\omega_0 (t-t_0)) & \cos(\omega_0 (t-t_0)) \end{array} \right] \left[ \begin{array}{c} x_0 \\ v_0 \end{array} \right] \\ \equiv M(t-t_0)\left[ \begin{array}{c} x_0 \\ v_0 \end{array} \right] \; .\]

The matrix $$M(t-t_0)$$ plays the role of the evolution operator (also known as the state transition matrix by engineers and the fundamental matrix by mathematicians), moving solutions forward or backward in time as needed because the theory is deterministic.
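As a quick verification (a minimal Maxima sketch of my own, with $$s = t - t_0$$), the matrix $$M$$ really does satisfy the state-space equation of motion:

/* verify dM/ds = A . M for the simple harmonic oscillator */
A : matrix([0, 1], [-w0^2, 0]);
M : matrix([cos(w0*s), sin(w0*s)/w0], [-w0*sin(w0*s), cos(w0*s)]);
ratsimp(diff(M, s) - A . M);  /* expect the 2x2 zero matrix */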

If the dynamical variables are denoted collectively by $$q_i(t)$$ where the index $$i=1, 2$$ labels the variable in place of the explicit names $$x(t)$$ and $$v(t)$$, then the state-space evolution equation can be written compactly as

\[ q_i(t) = \sum_{j} M_{ij}(t-t_0) q^{0}_j \;, \]

where $$q^{0}$$ is the collection of initial conditions for each variable (i.e. $$q^{0}_1 = x_0$$, $$q^{0}_2 = v_0$$). As written, this compact form can be generalized to an arbitrary number of dynamic variables by allowing the indices $$i$$ and $$j$$ to increase their range appropriately.

The final step is then to imagine that the number of dynamic variables goes to infinity in such a way that there is a degree-of-freedom associated with each point in space. This is the typical model used in generalizing a discrete dynamical system, such as a long chain of coupled oscillators, to a continuum system that describes waves on a string. In this case, the indices $$i$$ and $$j$$ are replaced by labels indicating the position ($$x$$ and $$x'$$), the sum is replaced by an integral, and we have

\[ q(x,t) = \int dx' M(t-t_0;x,x') q(t_0;x') \; ,\]

which except for the obvious minor differences in notation is the same form as the boxed equation.

Thus we arrive at the mathematical meaning of the boxed equation. The kernel $$K(\vec r_2, t_2; \vec r_1, t_1)$$ takes all of the dynamical values of the system at a given time $$t_1$$ and evolves them up to time $$t_2$$. The time $$t_1$$ is arbitrary since the evolution is deterministic, so that any particular configuration can be regarded as the initial conditions for the ones that follow. Each point in space is considered a dynamical degree-of-freedom, and all points at the earlier time contribute to the motion through the matrix multiplication involved in doing the integral. That is why the boxed equation involves an integration over space.

The final step is to physically interpret what the kernel means. From its definition as the matrix element of the evolution operator between $$\vec r_2$$ and $$\vec r_1$$, the kernel is the probability amplitude that a particle moves from $$\vec r_1$$ to $$\vec r_2$$ during the time span $$[t_1,t_2]$$. In other words, the conditional probability density that a particle is found at $$\vec r_2$$ at time $$t_2$$, given that it started at position $$\vec r_1$$ at time $$t_1$$, is
\[ Prob(\vec r_2,t_2 | \vec r_1, t_1 ) = \left| K(\vec r_2,t_2; \vec r_1, t_1) \right|^2 \; . \]

Next week, I'll show how a slight modification of the kernel can be interpreted as a Greens function.

Quantum Evolution – Part 1

This post will be the beginning of my extended effort to organize material on the time evolution operator, quantum propagators, and Greens functions. The aim is to put into a self-consistent and self-contained set of posts the background necessary to gnaw away at a recurring confusion I have had over these items from their presentations in the literature as to the names, definitions, and uses of these objects, in particular the use of the Schrodinger, Heisenberg, and Interaction pictures.

Once this organization is done, I hope to use these methods as a springboard for research into methods of applying quantum mechanical techniques to classical dynamical systems, in particular the use of the Picard iteration (aka the Dyson expansion) for time-varying Hamiltonians.

The references that I will be using are:

[1] Quantum Mechanics – Volume 1, Claude Cohen-Tannoudji, Bernard Diu, and Frank Laloe
[2] Quantum Mechanics, Leonard Isaac Schiff
[3] Principles of Quantum Mechanics, R. Shankar
[4] Modern Quantum Mechanics, J.J. Sakurai

Starting simply, in this post I will be reviewing the definition and properties of the evolution operator.

Adapting the material in [1] (p. 236, 308-311), the Schrodinger equation in a representation-free form is:

\[ i \hbar \frac{d}{dt} \left| \psi(t) \right> = H(t) \left| \psi(t)\right>\]

From the structure of the Schrodinger equation, the evolution of the state $$\left|\psi(t)\right>$$ is entirely deterministic, being subject to the standard, well-known theorems about the existence and uniqueness of the solution.  For the skeptic who is concerned that $$\left|\psi(t)\right>$$ can be infinite-dimensional, I don't have much in the way of justification except to say three things. First, the Schrodinger equation in finite dimensions (e.g. two state systems) maps directly onto the usual cases of coupled linear systems dealt with in introductory classes on differential equations. Second, it is common practice for infinite-dimensional systems (i.e., PDEs) to be discretized for numerical analysis, so the resulting structure is again a finite-dimensional linear system, although of arbitrary size. That is to say, the practitioner can refine the mesh arbitrarily until either his patience or his computer gives out. It isn't clear that such a process necessarily converges, but the fact that there isn't a hue and cry of warnings in the community suggests that convergence isn't a problem. Finally, for those cases where the system is truly infinite-dimensional, with no approximations allowed, there are theorems about the Cauchy problem that govern how to propagate forward in time from initial data and that establish the resulting solutions are deterministic. How to match up an evolution operator formalism to these types of problems (e.g., heat conduction) may be the subject of a future post. One last note: I am unaware of a single physical system involving time evolution that can't be manipulated (especially for numerical work) into the form $$\frac{d}{dt} \bar S = \bar f(\bar S; t)$$, where $$\bar S$$ is the abstract state and $$\bar f$$ is a vector field that is a function of the state and time. The Schrodinger equation is then an example where $$\bar f(\bar S;t)$$ is a linear operation.

From the theory of linear systems, the state at some initial time $$\left|\psi(t_0)\right>$$ is related to the state at the time $$t$$ by

\[ \left|\psi(t)\right> = U(t,t_0) \left|\psi(t_0)\right>\]

To determine what equation $$U(t,t_0)$$ obeys, simply substitute the above expression into the Schrodinger equation to yield
\[i \hbar \frac{d}{dt}\left[ U(t,t_0) \left|\psi(t_0)\right> \right] = H \; U(t,t_0) \left|\psi(t_0)\right> \; ,\]

and since $$\left|\psi(t_0)\right>$$ is arbitrary, the equation of motion or time development equation for the evolution operator is
\[i \hbar \frac{d}{dt} U(t,t_0) = H \; U(t,t_0) \; .\]

The required boundary condition is
\[ U(t_0,t_0) = Id \; ,\]
where $$Id$$ is the identity operator of the correct size to be consistent with the state dimension. That is to say, $$Id$$ is a finite-dimensional identity matrix with dimension equal to that of the state $$\left| \psi(t) \right>$$, or it is infinite dimensional.

Some obvious properties can be deduced without an explicit expression for $$U(t,t_0)$$ by regarding $$t_0$$ as a variable. Assume that $$t_0$$ takes on a particular value $$t'$$; then the evolution operator can relate the state at that time to some later time $$t''$$ as
\[ \left| \psi(t'')\right> = U(t'',t') \left| \psi(t')\right> \; .\]
Now let $$t_0$$ take on the value $$t''$$ and connect the state at this time to some other time $$t$$ by
\[ \left| \psi(t)\right> = U(t,t'') \left| \psi(t'')\right> \; .\]
By composing these two expressions, the state at $$t'$$ can be related to the state at $$t$$ with a stop off at the intermediate time $$t''$$, resulting in the general composition relation
\[ U(t,t') = U(t,t'') U(t'',t') \; .\]
Using the same type of arguments the inverse of the evolution operator can be seen to be
\[U^{-1}(t,t_0) = U(t_0,t) \; \]
which can also be expressed as
\[ U(t,t_0) U(t_0,t) = U(t,t) = Id \; .\]

The formal solution of the equation of motion of the evolution operator is
\[ U(t,t_0) = Id - \frac{i}{\hbar} \int_{t_0}^{t} dt' H(t') U(t',t_0) \]
which can be verified using the Leibniz rule for differentiation under the integral sign.

The Leibniz rule says that if the integral $$I(t)$$ is defined as
\[ I(t) = \int_{a(t)}^{b(t)} dx f(t,x) \]
then its derivative with respect to $$t$$ is
\[ \frac{d}{dt} I(t) = \int_{a(t)}^{b(t)} dx \frac{\partial}{\partial t} f(t,x) + f(t,b(t)) \frac{d}{dt}b(t) - f(t,a(t)) \frac{d}{dt}a(t) \; . \]
Applying this to the formal solution for the evolution operator gives
\[ \frac{d}{dt} U(t,t_0) = -\frac{i}{\hbar} \left[ \int_{t_0}^{t} dt' \frac{\partial}{\partial t} \left( H(t') U(t',t_0) \right) + H(t) U(t,t_0) \frac{dt}{dt} \right] \\ \\ = -\frac{i}{\hbar} H(t) U(t,t_0) \; ,\]

since the integrand in the first term has no explicit $$t$$ dependence, which is just the equation of motion again.

There are three cases to be examined (based on the material in [4] pages 72-3).

1. The Hamiltonian is not time dependent, $$H \neq H(t)$$. In this case, the evolution operator has an immediate closed form solution given by
\[ U(t,t_0) = e^{-\frac{i H (t-t_0)}{\hbar} } \; .\]

2. The Hamiltonian is time dependent but it commutes with itself at different times, $$H = H(t)$$ and $$\left[ H(t),H(t’) \right] = 0$$. This case also possesses an immediate closed solution but with a slight modification
\[ U(t,t_0) = e^{-\frac{i}{\hbar}\int_{t_0}^{t} dt' H(t')} \; . \]

3. The Hamiltonian is time dependent and it does not commute with itself at different times, $$H = H(t)$$ and $$\left[H(t),H(t’)\right] \neq 0$$. In this case, the solution that exists is written in the self-iterated form
\[ U(t,t_0) = Id + \\ \sum_{n=1}^{\infty} \left(-\frac{i}{\hbar}\right)^n \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \cdots \int_{t_0}^{t_{n-1}} dt_n H(t_1) H(t_2) \cdots H(t_n) \; .\]

The structure of the iterated integrals in case 3 is formally identical to the Picard iteration, a technique that is used in a variety of disciplines to construct solutions to initial value problems, at least over a limited time span. I am not aware of any formal proof that the convergence in case 3 is guaranteed in the most general setting when $$H(t)$$ and $$U(t,t_0)$$ are infinite dimensional, but the iterated solution is used in quantum scattering and so the method is worth studying.
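As a toy illustration of the iteration (my own construction, using a scalar 'Hamiltonian' $$H(t) = w t$$ that commutes with itself at different times, so the exact answer is the case 2 result $$e^{-i w t^2/2\hbar}$$), a couple of Picard sweeps in Maxima reproduce the exponential series term by term:

/* one Picard sweep: U -> Id - (i/hbar) Int_0^t dt1 H(t1) U(t1) */
H(t) := w*t;
picard(U) := 1 - (%i/hbar)*integrate(subst(t = t1, H(t)*U), t1, 0, t);
U1 : picard(1);           /* 1 - %i*w*t^2/(2*hbar) */
U2 : expand(picard(U1));  /* adds the (-%i*w*t^2/(2*hbar))^2/2! term */

Each sweep tacks on the next term of the exponential series, which is exactly the structure of the iterated integrals above.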

Next week, I’ll be exploring the behavior of a related object called the quantum propagator.