
One-Sided Green's Functions and Causality

This week we pick up where we left off in the last post and continue probing the structure of the one-sided Green's function $$K(t,\tau)$$. While the computations of the previous post can be found in most introductory textbooks, I would be remiss if I didn't mention that both the previous post and this one were heavily influenced by two books: Martin Braun's 'Differential Equations and Their Applications' and Larry C. Andrews' 'Elementary Partial Differential Equations with Boundary Value Problems'.

As a recap, we found that a second order inhomogeneous linear ordinary differential equation

\[ y''(t) + p(t) y'(t) + q(t) y(t) \equiv L[y] = f(t) \; , \]

($$y'(t) = \frac{d}{dt} y(t)$$) with boundary conditions

\[ y(t_0) = y_0 \; \; \& \; \; y'(t_0) = y_0' \; \]

possesses the solution

\[ y(t) = A y_1(t) + B y_2(t) + y_p(t) \; ,\]

where $$y_i$$ are solutions to the homogeneous equation, $$\{A,B\}$$ are constants chosen to meet the initial conditions, and $$y_p$$ is the particular solution of the form

\[ y_p(t) = \int_{t_0}^{t} \, d\tau \, K(t,\tau) f(\tau) \; . \]

By historical convention, we call the kernel that propagates the influence of the inhomogeneous term in time (either forward or backward) a one-sided Green's function. The Wronskian provides the explicit formula

\[ K(t,\tau) = \frac{ \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1(t) & y_2(t) \end{array} \right| } { W[y_1,y_2](\tau)} = \frac{ \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1(t) & y_2(t) \end{array} \right| } { \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1'(\tau) & y_2'(\tau) \end{array} \right| } \; . \]

for the one-sided Green's function. Plugging $$t=t_0$$ into the particular solution gives

\[ y_p(t_0) = \int_{t_0}^{t_0} \, d\tau \, K(t_0,\tau) f(\tau) = 0 \]

as the initial datum for $$y_p$$ and

\[ y_p'(t_0) = K(t_0,t_0) f(t_0) + \int_{t_0}^{t_0} \left. \frac{\partial}{\partial t} K(t,\tau) \right|_{t=t_0} f(\tau) \, d\tau = 0 \]

for the initial datum for $$y_p'$$, since the definite integral of any integrand with the same lower and upper limits is identically zero and because

\[ K(t,t) = \frac{ \left| \begin{array}{cc} y_1(t) & y_2(t) \\ y_1(t) & y_2(t) \end{array} \right| } { W[y_1,y_2](t)} = 0 \; . \]

The initial conditions on the particular solution justify choosing the constants $$\{A, B\}$$ to meet the initial conditions or, in other words, the initial values are carried entirely by the homogeneous solutions.
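Since the construction is completely mechanical, it is easy to check by machine. Here is a minimal SymPy sketch (my own illustration, not from either book; the choice of $$y'' + y = 0$$ and all variable names are mine) that builds $$K(t,\tau)$$ from the determinant formula and confirms $$K(t,t) = 0$$:

```python
import sympy as sp

t, tau = sp.symbols('t tau', real=True)

# Independent homogeneous solutions of y'' + y = 0 (an illustrative choice).
y1, y2 = sp.cos(t), sp.sin(t)

# Wronskian W[y1, y2], evaluated at the source time tau.
W = sp.Matrix([[y1, y2],
               [sp.diff(y1, t), sp.diff(y2, t)]]).det().subs(t, tau)

# Numerator determinant: solutions at tau on top, solutions at t below.
num = sp.Matrix([[y1.subs(t, tau), y2.subs(t, tau)],
                 [y1, y2]]).det()

K = sp.simplify(num / W)
print(K)                             # sin(t - tau)
print(sp.simplify(K.subs(tau, t)))   # 0, confirming K(t, t) = 0
```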

The results for the one-sided Green's function can be extended in four ways that make the practice of handling systems much more convenient.

Arbitrary Finite Dimensions

An arbitrary number of dimensions in the original differential equation is handled straightforwardly by the relation

\[ K(t,\tau) = \frac{ \left| \begin{array}{cccc} y_1(\tau) & y_2(\tau) & \cdots & y_n(\tau) \\ y_1'(\tau) & y_2'(\tau) & \cdots & y_n'(\tau) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)}(\tau) & y_2^{(n-2)}(\tau) & \cdots & y_n^{(n-2)} (\tau) \\ y_1(t) & y_2(t) & \cdots & y_n(t) \end{array} \right| } { W[y_1,y_2,\ldots,y_n](\tau)} \; , \]

where the corresponding Wronskian is given by

\[ W[y_1,y_2,\ldots,y_n](\tau) = \left| \begin{array}{cccc} y_1(\tau) & y_2(\tau) & \cdots & y_n(\tau) \\ y_1'(\tau) & y_2'(\tau) & \cdots & y_n'(\tau) \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)}(\tau) & y_2^{(n-1)}(\tau) & \cdots & y_n^{(n-1)} (\tau) \end{array} \right| \]

and

\[ y^{(n)} \equiv \frac{d^n y}{d t^n} \; .\]

The generation of one-sided Green's functions is then a fairly mechanical process once the homogeneous solutions are known. Since we are guaranteed that the solutions of initial value problems exist and are unique, the corresponding one-sided Green's functions also exist and are unique. The following is a tabulated set of $$K(t,\tau)$$s adapted from Andrews' book (a quick symbolic spot-check of one entry appears just after the table).

One-sided Green's Functions – adapted from Elementary Partial Diff. Eqs. by L. C. Andrews
Operator \[ K(t,\tau) \]
\[ D + b \] \[ e^{-b(t-\tau)} \]
\[ D^n, \; n = 2, 3, 4, \ldots \] \[ \frac{(t-\tau)^{n-1}}{(n-1)!} \]
\[ D^2 + b^2 \] \[ \frac{1}{b} \sin b(t-\tau) \]
\[ D^2 - b^2 \] \[ \frac{1}{b} \sinh b(t-\tau) \]
\[ (D-a)(D-b), \; a \neq b \] \[ \frac{1}{a-b} \left[ e^{a(t-\tau)} - e^{b(t-\tau)} \right] \]
\[ (D-a)^n, \; n = 2, 3, 4, \ldots \] \[ \frac{(t-\tau)^{n-1}}{(n-1)!} e^{a(t-\tau)} \]
\[ D^2 - 2 a D + a^2 + b^2 \] \[ \frac{1}{b} e^{a(t-\tau)} \sin b (t-\tau) \]
\[ D^2 - 2 a D + a^2 - b^2 \] \[ \frac{1}{b} e^{a(t-\tau)} \sinh b (t-\tau) \]
\[ t^2 D^2 + t D - b^2 \] \[ \frac{\tau}{2 b}\left[ \left( \frac{t}{\tau} \right)^b - \left( \frac{\tau}{t} \right)^b \right] \]
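As a spot-check of the table (my own addition, done with SymPy), the entry for $$(D-a)(D-b)$$ should satisfy the homogeneous equation in $$t$$ along with the initial data $$K(\tau,\tau) = 0$$ and $$\partial_t K|_{t=\tau} = 1$$:

```python
import sympy as sp

t, tau, a, b = sp.symbols('t tau a b')

# Table entry for (D - a)(D - b) = D^2 - (a + b) D + a b, with a != b.
K = (sp.exp(a*(t - tau)) - sp.exp(b*(t - tau))) / (a - b)

print(sp.simplify(sp.diff(K, t, 2) - (a + b)*sp.diff(K, t) + a*b*K))  # 0
print(sp.simplify(K.subs(t, tau)))                                    # 0
print(sp.simplify(sp.diff(K, t).subs(t, tau)))                        # 1
```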

Imposing Causality

The second extension is a little more subtle. Allow the inhomogeneous term $$f(t)$$ to be a delta-function so that the differential equation becomes

\[ L[y] = \delta(t-a), \; \; y(t_0) = 0, \; \; y'(t_0) = 0 \; .\]

The particular solution

\[ y = \int_{t_0}^t \, d \tau \, K(t,\tau) \delta(\tau - a) = \left\{ \begin{array}{lc} 0, & t_0 \leq t < a \\ K(t,a), & t \geq a \end{array} \right. \]

now represents how the system responds to the unit impulse delivered at time $$t=a$$ by the delta-function. The discontinuous response results from the fact that the system at $$t=a$$ receives a sharp blow that changes its evolution from the unforced evolution it was following before the impulse to a new unforced evolution with new initial conditions at $$t=a$$ that reflect the influence of the impulse.

By applying a little manipulation to the right-hand side, and allowing $$t_0$$ to recede to the infinite past, the above result transforms into

\[ K^+(t,\tau) = \left\{ \begin{array}{lc} 0, & t_0 \leq t < \tau \\ K(t,\tau), & \tau \leq t < \infty \end{array} \right. = \theta(t-\tau) K(t,\tau) \; ,\]

which is a familiar result from Quantum Evolution – Part 3. In this derivation, we get an alternative and more mathematically rigorous way of understanding why the Heaviside theta function (or step function, if you prefer) enforces causality. The undecorated one-sided Green's function $$K(t,\tau)$$ is a mathematical object capable of evolving the system forward or backward in time with equal facility. The one-sided retarded Green's function $$K^+(t,\tau)$$ is physically meaningful because it will not evolve the influence of an applied force to a time earlier than the force was applied.
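To see the retarded kernel at work numerically, here is a sketch of my own (assuming NumPy and SciPy are available; the Gaussian pulse is an arbitrary illustrative forcing) comparing the Green's-function quadrature for $$y'' + y = f(t)$$ against a direct integration with quiescent initial data:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Forcing pulse centered near t = 3 (an arbitrary illustrative choice).
f = lambda t: np.exp(-(t - 3.0)**2)
t0 = 0.0

# Direct integration of y'' + y = f(t) with y(t0) = y'(t0) = 0.
sol = solve_ivp(lambda t, s: [s[1], -s[0] + f(t)], (t0, 10.0), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

# Retarded-kernel quadrature: y(t) = integral of sin(t - tau) f(tau), tau in [t0, t].
ts = np.linspace(t0, 10.0, 11)
yk = [quad(lambda tau, t=t: np.sin(t - tau) * f(tau), t0, t)[0] for t in ts]

print(np.max(np.abs(yk - sol.sol(ts)[0])))   # tiny: the two answers agree
```

Note that the quadrature never samples the force at times later than $$t$$, which is exactly the causality statement made above.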

Recasting in State Space Notation

An alternative and frequently more insightful approach to solving ordinary differential equations comes from recasting the structure into state space language, in which the differential equation(s) reduce to a set of coupled first order equations of the form

\[ \frac{d}{dt} \bar S = \bar f(\bar S; t) \; . \]

Quantum Evolution – Part 2 presents this approach applied to the simple harmonic oscillator. The propagator (or state transition matrix or fundamental matrix) of the system contains the one-sided Green's function as the upper-right portion of its structure. It is easiest to see that result by working with a second order system with linearly-independent solutions $$y_1$$ and $$y_2$$ and initial conditions $$y(t_0) = y_0$$ and $$y'(t_0) = y'_0$$. In analogy with the previous post, the initial conditions can be solved for the constants at time $$t_0$$ to yield the expression

\[ \left[ \begin{array}{c} C_1 \\ C_2 \end{array} \right] = \frac{1}{W(t_0)} \left[ \begin{array}{cc} y_2' & -y_2 \\ -y_1' & y_1 \end{array} \right]_{t_0} \left[ \begin{array}{c} y_0 \\ y_0' \end{array} \right] \equiv M_{t_0} \left[ \begin{array}{c} y_0 \\ y_0' \end{array} \right] \; , \]

where the subscript notation $$[]_{t_0}$$ means that all of the expressions in the matrix are evaluated at time $$t_0$$. Now the arbitrary solution $$y(t)$$ to the homogeneous equation is a linear combination of the independent solutions weighted by the constants just determined

\[ \left[ \begin{array}{c} y(t) \\ y'(t) \end{array} \right] = \left[ \begin{array}{cc} y_1 & y_2 \\ y_1' & y_2' \end{array} \right]_{t} \left[ \begin{array}{c} C_1 \\ C_2 \end{array} \right] \equiv \Omega_{t} \left[ \begin{array}{c} C_1 \\ C_2 \end{array} \right] \equiv \Omega_{t} M_{t_0} \left[ \begin{array}{c} y_0 \\ y_0' \end{array} \right] \; .\]

The propagator, which is formally defined as

\[ U(t,t_0) = \frac{\partial \bar S(t)}{\partial \bar S(t_0) } \; ,\]

is easily read off to be

\[ U(t,t_0) = \Omega_{t} M_{t_0} \; , \]

which, when back-substituting the forms of $$\Omega_t$$ and $$M_{t_0}$$, gives

\[ U(t,t_0) = \frac{1}{W(t_0)} \left[ \begin{array}{cc} y_1 & y_2 \\ y_1' & y_2' \end{array} \right]_{t} \left[ \begin{array}{cc} y_2' & -y_2 \\ -y_1' & y_1 \end{array} \right]_{t_0} \; .\]

In state space notation, the inhomogeneous term takes the form $$\left[ \begin{array}{c} 0 \\ f(t) \end{array} \right]$$ and so the relevant component of the matrix multiplication is the upper-right element, which is
\[ \left\{ U(t,t_0) \right\}_{1,2} = \frac{y_1(t_0) y_2(t) – y_1(t) y_2(t_0)}{W(t_0)} \; , \]

which we recognize as the one-sided Green's function. Multiplying the whole propagator by the Heaviside function enforces causality and gives the retarded, one-sided Green's function in the $$(1,2)$$ component.
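The factorization $$U(t,t_0) = \Omega_t M_{t_0}$$ is easy to verify symbolically. The following SymPy sketch (my own; the harmonic oscillator is again an illustrative choice) assembles the propagator and reads the one-sided Green's function out of the $$(1,2)$$ slot:

```python
import sympy as sp

t, t0 = sp.symbols('t t0')
y1, y2 = sp.cos(t), sp.sin(t)          # independent solutions of y'' + y = 0

Omega = sp.Matrix([[y1, y2],
                   [sp.diff(y1, t), sp.diff(y2, t)]])
W0 = Omega.det().subs(t, t0)           # Wronskian at t0 (here it is 1)
M = (sp.Matrix([[sp.diff(y2, t), -y2],
                [-sp.diff(y1, t), y1]]) / W0).subs(t, t0)

U = (Omega * M).applyfunc(sp.trigsimp)
print(U)        # Matrix([[cos(t - t0), sin(t - t0)], [-sin(t - t0), cos(t - t0)]])
print(U[0, 1])  # sin(t - t0): the one-sided Green's function K(t, t0)
```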

Using the Fourier Transform

While all of the machinery discussed above is straightforward to apply, it does involve a lot of steps (e.g., finding the independent solutions, forming the Wronskian, forming the one-sided Greens function, applying causality, etc.). There is often a faster way to perform all of these steps using the Fourier transform. This will be illustrated for a simple one-dimensional problem (adapted from ‘Mathematical Tools for Physics’ by James Nearing) of a mass moving in a viscous fluid subjected to a time-varying force

\[ \frac{dv}{dt} + \beta v = f(t) \; ,\]

where $$\beta$$ is a constant characterizing the fluid and $$f(t)$$ is the force per unit mass.

We assume that the velocity has a Fourier transform

\[ v(t) = \int_{-\infty}^{\infty} d \omega \, V(\omega) e^{-i\omega t} \; \]

with the corresponding transform pair

\[ V(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} dt \, v(t) e^{+i\omega t} \; .\]

Likewise, the force possesses a Fourier transform

\[ f(t) = \int_{-\infty}^{\infty} d \omega \, F(\omega) e^{-i\omega t} \; .\]

Plugging the transforms into the differential equation yields the algebraic equation

\[ -i \omega V(\omega) + \beta V(\omega) = F(\omega) \; ,\]

which is easily solved for $$V(\omega)$$ and which, when substituted back in, gives the expression for the particular solution

\[ v_p(t) = i \int_{-\infty}^{\infty} d \omega \frac{F(\omega)}{\omega + i \beta} e^{-i\omega t} \; .\]

Eliminating $$F(\omega)$$ by using its transform pair, we find that

\[ v_p(t) = \frac{i}{2 \pi} \int_{-\infty}^{\infty} d\tau K(t,\tau) f(\tau) \]

with the kernel

\[ K(t,\tau) = \int_{-\infty}^{\infty} d \omega \frac{e^{-i \omega (t-\tau)}}{\omega + i \beta} \; .\]

This is exactly the form of a one-sided Green's function. Even more pleasing is the fact that when complex contour integration is used to evaluate the integral, we discover that causality is already built in and that what we have obtained is actually a retarded one-sided Green's function

\[ K^+(t,\tau) = \left\{ \begin{array}{lc} 0, & t < \tau \\ -2 \pi i e^{-\beta(t-\tau)}, & t > \tau \end{array} \right. \; . \]

Causality results since the pole of the denominator lies in the lower half of the complex plane. The usual semi-circular contour used with Jordan's lemma must close in the upper half-plane when $$t < \tau$$, in which case no poles are enclosed and there is no residue. When $$t > \tau$$, the semi-circle must close in the lower half-plane, where it surrounds the pole at $$\omega = -i \beta$$ and gives a non-zero residue.

The final form of the particular solution is

\[ v_p(t) = \int_{-\infty}^t d \tau e^{-\beta (t-\tau)} f(\tau) \]

which is the same result we would have obtained using the one-sided Green's function for the operator $$D + \beta$$ shown in the table above.
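A quick numerical check (my own sketch, assuming NumPy/SciPy; the force profile is an arbitrary illustration) confirms that the kernel produced by the contour integral really does solve the drag equation:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

beta = 0.5
f = lambda t: np.sin(t) * np.exp(-0.1 * t**2)   # arbitrary smooth force per mass

# Start far in the past so v(t0) = 0 mimics quiescent initial data.
t0 = -30.0
sol = solve_ivp(lambda t, v: [-beta * v[0] + f(t)], (t0, 10.0), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

# v_p(t) = integral of exp(-beta (t - tau)) f(tau), tau from the far past to t.
ts = np.linspace(0.0, 10.0, 6)
vp = [quad(lambda tau, t=t: np.exp(-beta * (t - tau)) * f(tau), t0, t)[0]
      for t in ts]
print(np.max(np.abs(vp - sol.sol(ts)[0])))      # tiny: the kernel checks out
```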

The Wonderful Wronskian

Well, the long haul through quantum evolution is over for now, but there are a few dangling pieces of mathematical machinery that are worth examining. These pieces apply to all linear, time evolution (i.e. initial value) problems. This week I will be exploring the very useful Wronskian.

To start the exploration, we'll consider the general form of a linear second-order ordinary differential equation with non-constant coefficients written in standard form. The operator $$L$$, defined as

\[ L[y](t) \equiv \frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t)y \; , \]
provides a convenient way to express the various equations, homogeneous and inhomogeneous, that arise, without getting bogged down in notational minutiae.

Let $$y_1(t)$$ and $$y_2(t)$$ be two solutions of the homogeneous equation $$L[y] = 0$$. Named after Józef Hoene-Wroński, the Wronskian,

\[ W[y_1,y_2](t) = y_1(t) y'_2(t) - y'_1(t) y_2(t) \; ,\]
is a function of the two solutions and their derivatives. In most contexts, we can suppress the argument $$[y_1,y_2]$$ and simply write the Wronskian as $$W(t)$$. This simplification keeps the notation uncluttered and helps to isolate the important features.

The first amazing property of the Wronskian is that it provides a sure-fire test of whether the two solutions are independent. Finding independent solutions of the homogeneous equation amounts to solving the problem completely, since an arbitrary solution can always be decomposed as a linear combination of the independent solutions multiplied by the appropriate constants so that the solution satisfies the initial conditions.

The Wronskian indicates that the solutions are independent if $$W[y_1,y_2](t) \neq 0$$ for all times $$t$$. The proof follows fairly easily from applying linear algebra to find the constants that meet the initial conditions. If $$y_1(t)$$ and $$y_2(t)$$ are independent solutions, then $$y(t) = c_1 y_1(t) + c_2 y_2(t)$$ is the most general solution that can be constructed with the initial conditions $$y(t_0)= y_0$$ and $$y'(t_0) = y'_0$$, where the prime denotes differentiation with respect to $$t$$. Plugging $$t_0$$ into the general form yields the system of equations

\[ \left[ \begin{array}{c} c_1 y_1(t_0) + c_2 y_2(t_0) \\ c_1 y'_1(t_0) + c_2 y'_2(t_0) \end{array} \right] = \left[ \begin{array}{c} y_0 \\ y'_0 \end{array} \right] \]


which can be written in the more suggestive form

\[ \left[ \begin{array}{cc} y_1(t_0) & y_2(t_0) \\ y'_1(t_0) & y'_2(t_0) \end{array} \right] \left[ \begin{array}{c} c_1 \\ c_2 \end{array} \right] = \left[ \begin{array}{c} y_0 \\ y'_0 \end{array} \right] \; . \]


This equation has a unique solution for arbitrary initial data only when the determinant of the matrix on the left-hand side is non-zero. Since the determinant of this matrix is the Wronskian evaluated at $$t_0$$, this completes the proof.
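In code, the test is one line per pair of candidate solutions. A minimal SymPy illustration (the example functions are my own, not from the post):

```python
import sympy as sp

t = sp.symbols('t')

def wronskian(y1, y2):
    return sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)

print(wronskian(sp.cos(t), sp.sin(t)))     # 1 -> independent
print(wronskian(sp.exp(t), 3*sp.exp(t)))   # 0 -> dependent
```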

The reader might reasonably worry that, since the Wronskian depends on time, it must be evaluated at every time to ensure that it doesn't vanish, and that performing this check severely limits its usefulness. Thankfully, this is not a concern: knowing the Wronskian at one time ensures that it is known at all times. The Wronskian's equation of motion gives its time evolution, and solving that equation starts with the observation


\[ \frac{d}{dt} W(t) = y'_1(t) y'_2(t) + y_1(t) y^{\prime \prime}_2(t) - y^{\prime \prime}_1(t) y_2(t) - y'_1(t) y'_2(t) \\ = y_1(t) y^{\prime \prime}_2(t) - y^{\prime \prime}_1(t) y_2(t) \; . \]

Now since each of the $$y_i$$ satisfies $$L[y](t) = 0$$, their second derivatives can be eliminated in favor of $$y^{\prime \prime}_i = - p(t) y'_i - q(t) y_i$$. Substituting these relations in yields


\[ \frac{d}{dt} W(t) = y_1(t) \left( -p(t) y'_2 - q(t) y_2 \right) - \left( -p(t) y'_1 - q(t) y_1 \right) y_2 \\ = -p(t) \left( y_1(t) y'_2(t) - y'_1(t) y_2(t) \right) \; . \]

Recognizing the presence of the Wronskian on the right-hand side, we find the particularly elegant equation for its evolution


\[ \frac{d}{dt} W(t) = -p(t) W(t) \]

which has the solution

\[ W(t) = W_0 \exp\left[ -\int_{t_0}^t p(t') dt' \right] \; , \]


where $$W_0 \equiv W[y_1,y_2](t_0)$$. The mathematical community typically calls this result Abel's formula. So if the Wronskian has a non-zero value at $$t_0$$, it must have a non-zero value over the entire time span on which the operator $$L$$ is well-defined (i.e., where $$p(t)$$ and $$q(t)$$ are well-defined).
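Abel's formula can be checked symbolically. Here is a SymPy sketch (my own example: the Euler equation in standard form, $$y'' + y'/t - b^2 y/t^2 = 0$$, whose solutions are $$t^b$$ and $$t^{-b}$$, so that $$p(t) = 1/t$$):

```python
import sympy as sp

t, t0, tau = sp.symbols('t t0 tau', positive=True)
b = sp.symbols('b', positive=True)

y1, y2 = t**b, t**(-b)
W = sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)
print(W)   # -2*b/t

# Abel's formula with p(t) = 1/t: W(t) = W(t0) * exp(-integral of p).
abel = W.subs(t, t0) * sp.exp(-sp.integrate(1/tau, (tau, t0, t)))
print(sp.simplify(W - abel))   # 0: the direct and Abel forms agree
```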

The Wronskian possesses another remarkable property. Given that we’ve found a solution to the equation $$L[y] = 0$$, the Wronskian can construct another, independent solution for us. It is rarely needed as there are easier ways to find these solutions (e.g. roots of the characteristic equation, Frobenius’s series solution, lookup tables, and the like) but it is a straightforward method that is guaranteed to work.

The construction starts with the observation that the Wronskian depends solely on the function $$p(t)$$ and not on the solutions to $$L[y] = 0$$. So once one solution is known, we can derive the differential equation satisfied by the second solution by dividing the definition of the Wronskian by $$y_1$$. We find the equation to be

\[ y'_2 - \frac{y'_1}{y_1} y_2 = y'_2 - \left( \frac{d}{dt} \ln(y_1) \right) y_2 = \frac{W}{y_1} \; .\]

This is just a first-order inhomogeneous differential equation that can be solved using the integrating factor
\[ \mu(t) = \frac{1}{y_1} \; . \]

As a reminder, an integrating factor $$\mu(t)$$ is a specially chosen function that multiplies
\[ \frac{dy}{dt} + a(t) y = b(t) \]
and transforms the left-hand side of the differential equation into a total time derivative
\[ \frac{d}{dt} \left( \mu(t) y \right)= \mu(t) b(t) \]
provided that
\[ \frac{d \mu(t)}{d t} = a(t) \mu(t) \]
or, once integrated,
\[ \mu(t) = \exp \left( \int a(t) dt \right) \; .\]
The solution to the original first-order equation is then
\[ y = \frac{1}{\mu(t)} \int \mu(t) b(t) \, dt \; . \]

Applying this to the equation for $$y_2$$ gives
\[ y_2(t) = y_1(t) \left( \int \frac{W(t)}{y_1(t)^2} dt \right) \;. \]
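As a concrete illustration (my own, in SymPy), start from the known solution $$y_1 = e^{at}$$ of $$y'' - 2ay' + a^2 y = 0$$; the construction should return the familiar second solution $$t e^{at}$$:

```python
import sympy as sp

t, a = sp.symbols('t a')
y1 = sp.exp(a*t)

# Abel: p(t) = -2a, so W(t) = W0 * exp(2 a t); take W0 = 1 at t0 = 0.
W = sp.exp(2*a*t)

y2 = sp.simplify(y1 * sp.integrate(W / y1**2, t))
print(y2)   # t*exp(a*t)

# Confirm that the constructed y2 solves y'' - 2a y' + a^2 y = 0.
print(sp.simplify(sp.diff(y2, t, 2) - 2*a*sp.diff(y2, t) + a**2 * y2))  # 0
```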

The usefulness of the Wronskian doesn’t stop there. It also provides a solution to the inhomogeneous equation


\[ \frac{d^2y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = g(t) \; ,\]

through the variation of parameters approach. In analogy with the homogeneous case, define the function

\[ \phi(t) = u_1(t) y_1(t) + u_2(t) y_2(t) \; , \]

where the $$u_i(t)$$ play the role of time-varying versions of the constants $$c_i$$, subject to $$\phi(t_0) = 0$$ and $$\phi'(t_0) = 0$$, which ensures that the homogeneous solution carries the initial conditions.

Now compute the first derivative of $$\phi(t)$$ to get

\[ \frac{d \phi}{dt} = [u'_1 y_1 + u'_2 y_2] + [u_1 y'_1 + u_2 y'_2] \; . \]

We can limit the time derivatives of the $$u_i$$ to first order if we impose the condition, called the condition of osculation, that

\[ u'_1 y_1 + u'_2 y_2 = 0 \; , \]


since then $$\phi^{\prime \prime}$$ produces, at worst, terms proportional to $$u'_i$$ and no second derivatives of the $$u_i$$ appear. The condition of osculation simplifies the second derivative to

\[ \frac{d^2 \phi}{dt^2} = u'_1 y'_1 + u_1 y^{\prime \prime}_1 + u'_2 y'_2 + u_2 y^{\prime \prime}_2 \; .\]


Again the second derivatives of the $$y_i$$ can be eliminated by isolating them in $$L[y_i] = 0$$ and then substituting the results back into the equation for $$\phi^{\prime \prime}$$. Doing so, we arrive at


\[ \frac{d^2 \phi}{dt^2} = u'_1 y'_1 + u_1 \left( -p(t) y'_1 - q(t) y_1 \right) + u'_2 y'_2 + u_2 \left( -p(t) y'_2 - q(t) y_2 \right) \; , \]

which simplifies to

\[ \frac{d^2 \phi}{dt^2} = u'_1 y'_1 + u'_2 y'_2 - p(t) \phi'(t) - q(t) \phi(t) \]

(still subject to the condition of osculation). Now we can evaluate $$L[\phi]$$ to find

\[ L[\phi] = u'_1 y'_1 + u'_2 y'_2 \; .\]

Demanding that $$L[\phi] = g(t)$$, this relation and the condition of osculation must be solved together to yield the unknown $$u_i$$. Recasting these relations into the matrix equation
\[ \left[ \begin{array}{cc} y_1 & y_2 \\ y'_1 & y'_2 \end{array} \right] \left[ \begin{array}{c} u'_1 \\ u'_2 \end{array} \right] = \left[ \begin{array}{c} 0 \\ g(t) \end{array} \right] \]

allows for a transparent solution via linear algebra. The solution presents itself immediately as

\[ \left[ \begin{array}{c} u'_1 \\ u'_2 \end{array} \right] = \frac{1}{W(t)} \left[ \begin{array}{cc} y'_2 & -y_2 \\ -y'_1 & y_1 \end{array} \right] \left[ \begin{array}{c} 0 \\ g(t) \end{array} \right] = \frac{1}{W(t)}\left[ \begin{array}{c} -y_2(t) g(t) \\ y_1(t) g(t) \end{array} \right] \; . \]

Since these equations are first order, a simple integration yields

\[ \left[ \begin{array}{c} u_1(t) \\ u_2(t) \end{array} \right] = \int_{t_0}^t d\tau \, \frac{1}{W(\tau)} \left[ \begin{array}{c} -y_2(\tau) \\ y_1(\tau) \end{array} \right] g(\tau) \; .\]

The full solution is written as
\[ \phi(t) = \int_{t_0}^t d\tau \frac{1}{W(\tau)} \left( -y_1(t) y_2(\tau) + y_2(t) y_1(\tau) \right) g(\tau) \; ,\]

which condenses nicely into

\[ \phi(t) = \int_{t_0}^t d\tau \, K(t,\tau) g(\tau) \; , \]

with

\[ K(t,\tau) = \frac{ -y_1(t) y_2(\tau) + y_1(\tau) y_2(t) }{W(\tau)} = \frac{ \left| \begin{array}{cc} y_1(\tau) & y_2(\tau) \\ y_1(t) & y_2(t) \end{array} \right|}{W(\tau)} \; .\]
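An end-to-end check (my own SymPy example) of the variation-of-parameters kernel: for $$y'' + y = g(t)$$ with $$g(t) = t$$ and $$t_0 = 0$$, the solutions $$y_1 = \cos t$$ and $$y_2 = \sin t$$ give $$W = 1$$ and $$K(t,\tau) = \sin(t-\tau)$$, and the integral should produce $$\phi(t) = t - \sin t$$, which satisfies the equation and the quiescent initial conditions:

```python
import sympy as sp

t, tau = sp.symbols('t tau')

K = sp.sin(t - tau)          # the kernel above for y1 = cos, y2 = sin, W = 1
g = tau                      # illustrative forcing g(t) = t

phi = sp.integrate(K * g, (tau, 0, t))
print(sp.simplify(phi))                                    # t - sin(t)
print(sp.simplify(sp.diff(phi, t, 2) + phi))               # t, as required
print(sp.simplify(phi.subs(t, 0)),
      sp.simplify(sp.diff(phi, t).subs(t, 0)))             # 0 0
```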

The Wronskian is not limited to second-order equations and extensions to higher dimensions are relatively easy. For example, the equation


\[ y^{\prime \prime \prime} + a_2(t) y^{\prime \prime} + a_1(t) y' + a_0(t) y = f(t) \]

has a Wronskian defined by

\[ W(t) = \left| \begin{array}{ccc} y_1 & y_2 & y_3 \\ y'_1 & y'_2 & y'_3 \\ y_1^{\prime \prime} & y_2^{\prime \prime} & y_3^{\prime \prime} \end{array} \right| \]

with the corresponding kernel for solving the inhomogeneous equation

\[ K(t,\tau) = \frac{\left| \begin{array}{ccc} y_1(\tau) & y_2(\tau) & y_3(\tau) \\ y'_1(\tau) & y'_2(\tau) & y'_3(\tau) \\ y_1(t) & y_2(t) & y_3(t) \end{array} \right|}{W(\tau)} \]

and with a corresponding equation of motion

\[ \frac{d}{dt} W(t) = -a_2(t) W(t) \; .\]

The steps to confirm these results follow in analogy with what was presented above. In other words, solving the homogeneous equation for the initial conditions gives the form of the Wronskian as a determinant, and the variation of parameters method gives the kernel. The verification of Abel's formula follows from the recognition that, when computing the derivative of a determinant, the product rule produces three separate terms (one for each row) and only the term with a derivative acting on the last row survives. Substitution using the original equation then leads to the Wronskian's evolution depending only on the coefficient multiplying the second-highest derivative (i.e., order $$n-1$$). Generalizations to even higher dimensional systems are done the same way.
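The third-order formulas can be spot-checked the same way. A SymPy sketch (my own illustration) using $$y''' = f$$, whose independent homogeneous solutions are $$1$$, $$t$$, and $$t^2$$, recovers the $$D^n$$ table entry $$K(t,\tau) = (t-\tau)^2/2!$$:

```python
import sympy as sp

t, tau = sp.symbols('t tau')
ys = [sp.Integer(1), t, t**2]    # independent solutions of y''' = 0

# 3x3 Wronskian: rows are the 0th, 1st, and 2nd derivatives.
W = sp.Matrix([[sp.diff(y, t, k) for y in ys] for k in range(3)]).det()

# Kernel numerator: two derivative rows at tau, solution row at t.
num = sp.Matrix([[y.subs(t, tau) for y in ys],
                 [sp.diff(y, t).subs(t, tau) for y in ys],
                 ys]).det()

print(sp.simplify(num / W.subs(t, tau)))   # (t - tau)**2/2
```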

The expression $$K(t,\tau)$$ is called a one-sided Green's function and a study of it will be the subject of next week's entry.