Monthly Archive: September 2015

Laplace Transforms – Part 4: Convolution

One of the most peculiar things about the Laplace Transform in particular, and integral transforms in general, is the very nonintuitive behavior that results when performing either the forward or inverse transform on the product of two functions. Up to this point, only very special cases have been covered – cases where the results of the product could be determined easily from the definition of the transform itself (e.g. $${\mathcal L}[t f(t)]$$) or cases where the product of two transforms $$G(s)F(s)$$ resulted in a trivial inverse due to some property of the Laplace Transform that allowed for clever manipulation.

Unfortunately, cleverness has a limit. So this week’s column will be looking at the general theory of the transform/inverse-transform of the product of two functions. This leads directly to the convolution integrals.

The first thing to look at is the case where the product of a set of Laplace Transforms fails to inspire cleverness on our part. This situation can easily arise when solving a differential equation such as

\[ x''(t) + \omega^2 x(t) = f(t), \; \; x(0) = 0, \; x'(0) = 0 \; ,\]

which is the driven simple harmonic oscillator with homogeneous initial conditions.

Application of the Laplace transform leads to

\[ X(s) = \frac{F(s)}{s^2 + \omega^2} \]

as the solution that we wish to find by the inverse Laplace Transform, where $$F(s) = {\mathcal L}[f(t)]$$.

For certain forms of $$F(s)$$ (usually simple ones) the inverse Laplace Transform can be done using the properties already established. But clearly, an arbitrary form requires a little something more.

The little bit more comes in the form of the convolution $$f*g$$, defined as

\[ (f*g)(t) = \int_{0}^{t} d\tau \, f(t-\tau) g(\tau) \; .\]
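The definition is easy to check numerically. Below is a minimal sketch; the test pair $$f(t) = e^{-t}$$, $$g(t) = e^{-2t}$$ is my own choice, picked because its convolution has the simple closed form $$e^{-t} - e^{-2t}$$:

```python
import math

def convolve(f, g, t, n=2000):
    """Approximate (f*g)(t) = integral_0^t f(t - tau) g(tau) dtau by the trapezoid rule."""
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        total += f(t - tau) * g(tau)
    return total * h

f = lambda t: math.exp(-t)
g = lambda t: math.exp(-2.0 * t)

t = 1.5
numeric = convolve(f, g, t)
exact = math.exp(-t) - math.exp(-2.0 * t)  # closed form of (f*g)(t) for this pair
print(numeric, exact)
```

With a few thousand trapezoid panels the two values agree to several decimal places.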

The convolution has exactly the same form as the kernel integration that arises in the study of the Green’s function and has been dealt with in detail in other entries of this blog.

The reason the convolution is useful can be seen if we examine its Laplace Transform

\[ {\mathcal L}[(f*g)(t)] = \int_0^{\infty} dt \, (f*g)(t) e^{-st} = \int_0^{\infty} dt \, \int_0^t d\tau \, f(t-\tau) g(\tau) e^{-st} \; .\]

The inner integral can have the same limits of integration as the outer one if we introduce the Heaviside step function

\[ H(x) = \left\{ \begin{array}{l} 0 \; \; x<0 \\ 1 \; \; x \geq 0 \end{array} \right. \; \]

to get

\[ {\mathcal L}[(f*g)(t)] = \int_0^{\infty} dt \int_0^{\infty} d\tau \, f(t-\tau) g(\tau) H(t-\tau) e^{-st} \; . \]

Next switch the order of integration

\[ {\mathcal L}[(f*g)(t)] = \int_0^{\infty} d\tau \, g(\tau) \int_0^{\infty} dt \, f(t-\tau) H(t-\tau) e^{-st} \; \]

and then substitute $$\lambda = t - \tau$$ in the inner integral (realizing that $$\tau$$ is regarded as a parameter) to get

\[ {\mathcal L}[(f*g)(t)] = \int_0^{\infty} d\tau \, g(\tau) \int_0^{\infty} d\lambda \, f(\lambda) H(\lambda) e^{-s(\lambda+\tau)} \; .\]

Separate the exponential into two pieces, the first solely involving $$\tau$$ and the second solely involving $$\lambda$$, and bring the $$\tau$$-dependent factor out of the inner integral to get

\[ {\mathcal L}[(f*g)(t)] = \int_0^{\infty} d\tau \, g(\tau) e^{-s\tau} \int_0^{\infty} d\lambda \, f(\lambda) H(\lambda) e^{-s\lambda} \; .\]

The final two steps are, first, to realize that the $$H(\lambda)$$ term in the inner integral is always $$1$$ and therefore can be dropped and, second, to recognize that each integral is a Laplace Transform. Finally, we arrive at

\[ {\mathcal L}[(f*g)(t)] = G(s) F(s) \; .\]

At first glance, it may seem that there is a problem with the above relation. Clearly the order of the products on the right-hand side doesn’t matter: $$G(s)F(s) = F(s)G(s)$$. In contrast, the left-hand side doesn’t appear to support commutativity of $$f$$ and $$g$$. However, appearances are deceiving, and it is a short proof to show that all is well.
Start with the first form of the equation

\[ (f*g)(t) = \int_{0}^{t} d \tau \, f(t-\tau) g(\tau) \]

and make the substitution $$t-\tau = u$$ (which implies $$d\tau = -du$$) to transform the right-hand side to

\[ (f*g)(t) = \int_{t}^{0} (-du) \, f(u) g(t-u) = \int_{0}^{t} du \, g(t-u) f(u) = (g*f)(t) \; .\]

Returning to the original differential equation we started to solve, we find that

\[ x(t) = \int_0^t d\tau \, \frac{\sin(\omega(t-\tau))}{\omega} f(\tau) \; , \]

which is exactly the expected result based on the Green’s function method.

Some other interesting properties of the convolution operator are that it is distributive

\[ f*(g+h) = f*g + f*h \]

and that it is associative

\[ (f*g)*h = f*(g*h) \; .\]
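The convolution theorem itself can be spot-checked numerically. The sketch below (the test functions $$f(t) = e^{-t}$$ and $$g(t) = e^{-2t}$$, with $$F(s) = 1/(s+1)$$ and $$G(s) = 1/(s+2)$$, are my own choices) compares a brute-force Laplace integral of the convolution against the product $$F(s)G(s)$$:

```python
import math

def laplace(f, s, T=40.0, n=4000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt, truncated at t = T (trapezoid rule)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# For f(t) = e^{-t} and g(t) = e^{-2t}, the convolution is (f*g)(t) = e^{-t} - e^{-2t}
conv = lambda t: math.exp(-t) - math.exp(-2.0 * t)

s = 0.7
lhs = laplace(conv, s)                       # L[(f*g)(t)] computed directly
rhs = (1.0 / (s + 1.0)) * (1.0 / (s + 2.0))  # F(s) G(s)
print(lhs, rhs)
```

The truncation at $$T = 40$$ is harmless here because the integrand decays exponentially; for slowly decaying signals the cutoff would have to be chosen more carefully.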

Both of these are easy to prove as well. For the first relation

\[ f*(g+h) = \int_0^t d\tau \, f(t-\tau) \left[ g(\tau) + h(\tau) \right] \\ = \int_0^t d\tau \, f(t-\tau) g(\tau) + \int_0^t d\tau \, f(t-\tau) h(\tau) = f*g + f*h \; .\]

For the second relation, things are a bit harder, and we’ll take it on faith for now.

Having disposed of the forward transform theory in fine order, it is natural to ask about the inverse transform. Here the work is much simpler. There is an analogous expression that states

\[ f(t) g(t) = {\mathcal L}^{-1}[ F(s) * G(s) ] \; , \]

which is not particularly useful but follows from simply changing the identity of the symbols in the analysis above.

Finally, one might ask about the Laplace Transform of the product of two arbitrary (but transformable) functions $$f(t) g(t)$$. Here the theory is not so attractive. The transform is given abstractly as

\[ {\mathcal L}[f(t) g(t)] = \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \, dp G(s-p) F(p) \\ = \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} \, dp F(s-p) G(p) \; ,\]

where the real constant $$c$$ is the largest of the abscissae of convergence of $$f(t)$$ and $$g(t)$$. This form is really dreadful, and it demonstrates that there is no free lunch anywhere. The transform approach may make certain computations easier, but other things become more complex. There are always trade-offs.

To see how this miserable state of affairs comes about, start with the basic definition of the Laplace Transform of the product

\[ {\mathcal L}[f(t) g(t)] = \int_0^\infty dt \, f(t) g(t) e^{-st} \; .\]

Now assume that both $$f(t)$$ and $$g(t)$$ individually have Laplace transforms so that the inverses

\[ f(t) = \frac{1}{2 \pi i} \int_{c_f - i \infty}^{c_f + i \infty} ds \, F(s) e^{st}, \;\; \mbox{for} \; t > 0 \]

and

\[ g(t) = \frac{1}{2 \pi i} \int_{c_g - i \infty}^{c_g + i \infty} ds \, G(s) e^{st}, \;\; \mbox{for} \; t > 0 \]

exist and that $$c_f$$ and $$c_g$$ are the abscissae of convergence associated with the functional forms of $$f(t)$$ and $$g(t)$$, respectively.

First substitute the inverse expression for $$f(t)$$ into the basic definition to get

\[ {\mathcal L}[f(t) g(t)] = \int_0^\infty dt \, \left\{ \frac{1}{2 \pi i} \int_{c_f - i \infty}^{c_f + i \infty} dp \, F(p) e^{pt} \right\} g(t) e^{-st} \; .\]

Since the integrals are assumed to be convergent, trading the order of integration is allowed, giving

\[ {\mathcal L}[f(t) g(t)] = \frac{1}{2 \pi i} \int_{c_f - i \infty}^{c_f + i \infty} dp \, F(p) \int_0^\infty dt \, e^{-(s-p)t} g(t) \; .\]

The inner integral can be simplified to

\[ \int_0^\infty dt \, e^{-(s-p)t} g(t) = G(s-p) \; \]

and so we arrive at

\[ {\mathcal L}[f(t) g(t)] = \frac{1}{2 \pi i} \int_{c_f - i \infty}^{c_f + i \infty} dp \, F(p) G(s-p) \; .\]

In a similar fashion (switching the order in which $$f(t)$$ and $$g(t)$$ are handled), we also get the analogous expression with the roles of $$F$$ and $$G$$ reversed.

Next week, I’ll be looking at solving systems of equations using the Laplace Transform, which will be a prelude to a brief dip into modern control theory.

Laplace Transforms – Part 3: Transient and Steady-State Behavior

There are two theorems that are of particular interest not so much for their general applicability but for their use as safety checks on the solutions obtained with the Laplace Transform: the Initial Value Theorem and the Final Value Theorem. Both of these theorems produce a particular value (either initial or final) of the underlying signal $$f(t)$$ from a limiting process on the signal’s Laplace Transform $$F(s)$$.

A moment’s reflection helps to understand why such interrogations are useful.

With respect to the Initial Value Theorem, the argument goes as follows. When searching for the solution of a differential equation using the Laplace Transform, derivatives of the unknown and sought-for signal $$f(t)$$ are replaced by algebraic quantities proportional to some power of the frequency variable $$s$$ times $$F(s)$$ or some power of $$s$$ multiplying the initial conditions $$f(0)$$, $$f'(0)$$, and so on. Once the explicit values of the initial conditions are substituted into the transformed equation, many algebraic manipulations follow, each one typically elementary but each one fraught with the possibility of mistake. When the final form of $$F(s)$$ is obtained, it is comforting to be able to tease back out that the original initial conditions are still in there, unmolested by the many steps used to obtain the final form.

With respect to the Final Value Theorem, the argument takes on a different cast entirely. Generally, even though we don’t know the explicit form of the signal $$f(t)$$ (otherwise it wouldn’t be unknown), we usually have a general notion of what to expect. The final solution may be oscillatory, or it may damp out, or it may tend to a limiting value, and so on. The Final Value Theorem provides a simple way to see what the signal $$f(t)$$ is doing a long time into the future without the bother of having to perform an inverse Laplace Transform.

Thus the Initial Value Theorem provides a way of looking at the very start of the transient behavior of the signal $$f(t)$$ while the Final Value Theorem provides a way of determining some of its steady state characteristics. Both theorems are relatively easy to prove.

To prove the Initial Value Theorem, start with the property of the Laplace Transform

\[ {\mathcal L}\left[ \frac{d}{dt} f(t) \right] = s F(s) - f(0) \; .\]

Now examine the limit of the left-hand side as $$s \rightarrow \infty$$.

\[ \lim_{s \rightarrow \infty} {\mathcal L}\left[ \frac{d}{dt} f(t) \right] = \lim_{s \rightarrow \infty} \int_{0}^{\infty} dt \, \frac{d f(t)}{dt} e^{-st} \; . \]

Agreeing that the lower limit of the integral is approached from above (i.e. $$0^+$$), the limit with respect to $$s$$ can be brought inside the integral, giving

\[ \int_{0}^{\infty} dt \, \frac{d f(t)}{dt} \lim_{s \rightarrow \infty} e^{-st} = \int_{0}^{\infty} dt \, \frac{d f(t)}{dt} \cdot 0 = 0 \; .\]

Equating this to the limit of the right-hand side of the derivative property, $$\lim_{s \rightarrow \infty} \left( s F(s) - f(0) \right) = 0$$, yields the Initial Value Theorem

\[ f(0) = \lim_{s \rightarrow \infty} s F(s) \; .\]

In a completely similar fashion, the initial value for the time derivative $$\dot f(0)$$ is obtained from the Laplace Transform identity

\[ {\mathcal L}\left[ \frac{d^2}{dt^2} f(t) \right] = s^2 F(s) - s f(0) - \dot f(0) \]

giving

\[ \dot f(0) = \lim_{s \rightarrow \infty} \left( s^2 F(s) - s f(0) \right) \]

once the appropriate limit on $$s$$ is taken.

To prove the Final Value Theorem, start with the same Laplace Transform property used for the Initial Value Theorem but take the limit as $$s \rightarrow 0$$:

\[ \lim_{s \rightarrow 0} \int_{0}^{\infty}dt \, \frac{d f(t)}{dt} e^{-st} = \lim_{s \rightarrow 0} \left( s F(s) - f(0) \right) \; . \]

Next take the limit inside the integral to transform the left-hand side as

\[ \int_{0}^{\infty}dt \, \frac{d f(t)}{dt} \lim_{s \rightarrow 0} e^{-st} = \int_{0}^{\infty}dt \, \frac{d f(t)}{dt} \cdot 1 \\ = \left. f(a) \right|_{a \rightarrow \infty} - f(0) \equiv f(\infty) - f(0) \; .\]

Combining yields

\[ f(\infty) - f(0) = \lim_{s \rightarrow 0} \left( s F(s) - f(0) \right) \]

or

\[ f(\infty) = \lim_{s \rightarrow 0} s F(s) \; .\]
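Both theorems are easy to exercise numerically on a transform known in closed form. A minimal sketch (the test signal $$f(t) = 3 + e^{-2t}$$, with $$F(s) = 3/s + 1/(s+2)$$, $$f(0) = 4$$, and $$f(\infty) = 3$$, is my own choice):

```python
def F(s):
    # Laplace transform of the test signal f(t) = 3 + e^{-2t}
    return 3.0 / s + 1.0 / (s + 2.0)

f0 = 1e6 * F(1e6)      # Initial Value Theorem: f(0) = lim_{s -> inf} s F(s)
finf = 1e-6 * F(1e-6)  # Final Value Theorem:   f(inf) = lim_{s -> 0} s F(s)
print(f0, finf)
```

Taking the limits at finite but extreme values of $$s$$ is crude, yet it reproduces $$f(0)$$ and $$f(\infty)$$ to better than a part in ten thousand.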

Having proved both of these theorems, it will prove useful to look at an example.

Suppose we have the following differential equation

\[ \ddot x (t) + 5 x(t) = e^{-7t} \; , \;\; x(0) = 11 \; , \;\; \dot x(0) = 13 \; .\]

The corresponding Laplace Transform is given by

\[ F(s) = {{11\,s^2+13\,\left(s+7\right)+77\,s+1}\over{s^3+7\,s^2+5\,s+35}} \; . \]

Finding the inverse Laplace Transform is not particularly inviting due to the complex nature of the denominator, and we don’t want to spend time trying to do this if we’ve made an algebraic error. Also, from the structure of the original equation, we expect that the final signal should be a combination of oscillatory motion (due to the motion under the homogeneous equation $$\ddot x + 5 x = 0$$) plus some motion that tends to follow the driving force, which damps to zero in the limit of large $$t$$.

So the first step is to see if the initial conditions are still contained in the Laplace Transform.

The Initial Value Theorem says that

\[ \lim_{t \rightarrow 0} f(t) = \lim_{s \rightarrow \infty} s F(s) \; .\]

Applying this to the particular form above gives

\[ \lim_{s \rightarrow \infty} s F(s) = \lim_{s \rightarrow \infty} {{11\,s^3+13 s\,\left(s+7\right)+77\,s^2+s}\over{s^3+7\,s^2+5\,s+35}} = 11 \; . \]

So far so good!

Now let’s look for the initial conditions on $$\dot x$$.

\[ \lim_{t \rightarrow 0} \dot f(t) = \lim_{s \rightarrow \infty} \left( s^2 F(s) - s f(0) \right) = 13 \; ,\]

which is most easily seen by noticing that the leading term $$11 s^4$$ in the numerator of $$s^2 F(s)$$ behaves as $$11 s$$ for large $$s$$; this is exactly cancelled by the $$s f(0) = 11 s$$ term subtracted off, and the next-order terms in the quotient supply the limiting value of $$13$$.

Finally, let’s look at the long-time or steady-state behavior (assuming that it exists). The Final Value Theorem when applied in this case yields

\[ \lim_{s \rightarrow 0} s F(s) = \lim_{s \rightarrow 0} {{11\,s^3+13 s\,\left(s+7\right)+77\,s^2+s}\over{s^3+7\,s^2+5\,s+35}} = 0 \; .\]

This answer is reasonable as the forcing function will have a tendency to damp out all motion as $$t$$ gets large.

As a check, the inverse Laplace Transformation (courtesy of wxMaxima) yields

\[x(t) = {{709\,\sin \left(\sqrt{5}\,t\right)}\over{54\,\sqrt{5}}}+{{593\,\cos \left(\sqrt{5}\,t\right)}\over{54}} + {{e^{-7\,t}}\over{54}}\]

which clearly has a limit of $$0$$ as $$t$$ gets large.
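As an independent check on the whole chain of manipulations, the original differential equation can be integrated numerically and compared against this closed form. A sketch using a classical fourth-order Runge-Kutta integrator (the step size and comparison time are arbitrary choices of mine):

```python
import math

def closed_form(t):
    # x(t) as obtained from the inverse Laplace Transform
    r5 = math.sqrt(5.0)
    return (709.0 * math.sin(r5 * t) / (54.0 * r5)
            + 593.0 * math.cos(r5 * t) / 54.0
            + math.exp(-7.0 * t) / 54.0)

def rk4(t_end, h=1.0e-3):
    """Integrate x'' + 5x = e^{-7t}, x(0) = 11, x'(0) = 13, by classical RK4."""
    def deriv(t, x, v):
        return v, math.exp(-7.0 * t) - 5.0 * x
    t, x, v = 0.0, 11.0, 13.0
    for _ in range(round(t_end / h)):
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return x

x_num = rk4(2.0)
x_exact = closed_form(2.0)
print(x_num, x_exact)
```

The two agree to many decimal places, which is strong evidence that no algebraic error crept into either the transform or its inversion.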

For reference, this analysis was performed with wxMaxima (worksheet: IVT_FVT_wxMaxima).

Next week, I’ll be looking at convolution integrals and the products of transforms.

Laplace Transforms – Part 2: Convergence

Having established some attractive properties of the Laplace Transform that make solving linear differential equations relatively easy, the next logical question to ask is, under what conditions does the Laplace transform actually exist? In answering this, I will follow closely the individual arguments found in Modern Control Engineering by Katsuhiko Ogata, but I’ll be modifying their context to aim at the larger question of the relation between the Laplace and Fourier Transforms.

The first step into understanding what functions are Laplace transformable is to look at the basic definition

\[ {\mathcal L}[f(t)] = \int_0^{\infty} dt \, e^{-st} f(t) \; .\]

The Laplace Transform will only exist when the integral on the right-hand side is convergent. Note that the integral restricts itself to the portion of $$f(t)$$ that falls in the range $$[0,\infty)$$. Since the Laplace Transform doesn’t ask or care about the behavior of $$f(t)$$ for values of $$t < 0$$, the common prescription is to define $$f(t) = 0$$ for $$t < 0$$.

It is a fair question to ask why this requirement is imposed if the Laplace Transform doesn’t ‘see’ the values of $$f(t)$$ for $$t<0$$. Later on in this column, a relationship between the Fourier and Laplace Transforms presents a plausible justification. For now, let's just say that there are matters of causality dictating that a signal cannot have existed in perpetuity and, thus, at some point in the past it had to turn on. That is what is meant by $$t=0$$.

Of course, there are deep philosophical problems with this point of view, but they don’t touch on the day-to-day workings of the physicist or controls engineer.

So, the operative question is, what type of function, so restricted, has a convergent integral? The class of functions that allow the integral to exist are said to be of exponential order. Operationally, this means that

\[ \lim_{t\rightarrow\infty} \, e^{-\gamma t} |f(t)| = 0 \]

for some positive real number $$\gamma$$ ($$s = \gamma + i \omega$$). If the limit is such that it tends to $$0$$ for $$\gamma > \sigma_c$$ and tends to $$\infty$$ for $$\gamma < \sigma_c$$, the real number $$\sigma_c$$ takes on the name the abscissa of convergence. What this is telling us is that the Laplace Transform only converges when the real part of $$s$$, $$\gamma = \Re(s)$$, is greater than $$\sigma_c$$.

As we will discuss in the example below, this means that $$\gamma$$ must be greater than the real part of the right-most pole of the transform $$F(s) = \int_0^{\infty} dt \, e^{-st} f(t)$$.

Some additional words are in order. Functions that increase faster than an exponential do not have Laplace Transforms. The most oft-cited counter-example is the function $$e^{t^2} \; \forall t > 0$$. Note that $$e^{t^2}$$ is a perfectly fine signal if it is defined with compact support, such that it is zero for times $$t>t_{max}$$.

There is an obvious concern that arises at this point. Since the purpose of the Laplace Transform is to turn differential equations into algebraic equations, there is no easy way to restrict the resulting solution to that algebraic equation to have one simple pole. For example, the differential equation $$\ddot x(t) + 3 \dot x(t) + 2 x(t) = f(t)$$ has a Laplace-Transformed left-hand-side of $$s^2 + 3s + 2 = (s+2)(s+1)$$ times $${\mathcal L}[x(t)] \equiv X(s)$$. Assuming the Laplace Transform of $$x(t)$$ and $$f(t)$$ exists, then the following algebraic equation corresponds to the original differential equation with homogeneous initial conditions:

\[ (s+2)(s+1)X(s) = F(s)\; . \]

Solving this for $$X(s)$$ sets before us the task of finding an inverse Laplace Transform for

\[ X(s) = \frac{F(s)}{(s+2)(s+1)} \; .\]

Even assuming that $$F(s)$$ has no poles of its own, the above analysis would suggest that the minimum value of $$\Re(s)$$ would be $$\Re(s)=-1+\epsilon$$, where $$\epsilon$$ is a small positive number. This condition restricts $$s$$ to a portion of the complex plane that doesn’t include the other pole. But the $$s$$ value at the other pole corresponds to physically allowed frequencies, so, how does one reconcile this?

To answer this question, I quote a passage from Ogata’s book:

we must resort to the theory of complex variables [where]…there is a theorem…that states that, if two analytic functions are equal for a finite length along any arc in a region in which both are analytic, then they are equal everywhere in the region. … although we require the real part of $$s$$ to be greater than the abscissa of convergence to make the $$\int_0^{\infty} dt \, e^{-st} f(t)$$ absolutely convergent, once the Laplace transform $$F(s)$$ is obtained, $$F(s)$$ can be considered valid throughout the entire $$s$$ plane except at the poles of $$F(s)$$.

– Katsuhiko Ogata.

So, since the theory is now well supported, let’s return to the question as to whether or not the restriction of $$f(t) = 0 \; \forall t < 0$$ is needed. To try to justify this, bring in the Fourier Transform (this argument is based on the ideas of P.C. Chau).

The basic form of the Fourier Transform is

\[ {\mathcal F}[f(t)] = \int_{-\infty}^{\infty} dt \, f(t) e^{-i \omega t} \equiv F(\omega) \]

with the inverse transform being

\[ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega \, F(\omega) e^{i\omega t} \; .\]

Now consider the Fourier Transform of the special form of

\[ f(t) = e^{-\gamma t} g(t) \; .\]

The identity $$f(t) = {\mathcal F}^{-1}\left[{\mathcal F}[f(t)] \right]$$ becomes

\[ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d \omega \, \left[ \int_{-\infty}^{\infty} d \tau \, f(\tau) e^{-i \omega \tau} \right] e^{i\omega t} \]

and

\[ e^{-\gamma t} g(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d \omega \, \int_{-\infty}^{\infty} d \tau \, e^{-\gamma \tau} g(\tau) e^{-i \omega \tau} e^{i \omega t} \; .\]

Bringing the $$e^{-\gamma t}$$ over to the right-hand side and rearranging gives

\[ g(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d \omega \, e^{(i\omega + \gamma)t} \int_{-\infty}^{\infty} d \tau \, e^{-(i\omega + \gamma) \tau} g(\tau) \; .\]

Now define the Laplace variable $$s = \gamma + i \omega$$ from which $$d\omega = ds/i$$ and substitute back in to change the outer integral to get the final form

\[ g(t) = \frac{1}{2\pi i} \int_{\gamma - i\infty}^{\gamma + i \infty} d s \, e^{s t} \int_{-\infty}^{\infty} d \tau \, e^{-s \tau} g(\tau) \; .\]

The inner integral will not converge unless $$\tau$$ is restricted so that it is always positive. So, here is a plausible, well-supported argument for always defining $$f(t)$$ in a piece-wise fashion so that it is zero in the range $$t<0$$.

A couple of final notes. The contour implied in the final integral is known as the Bromwich contour. It is rarely used in practice to compute an inverse Laplace Transform with the usual technique being a combination of a clever use of already tabulated transforms and the use of partial fraction decomposition. Another technique for obtaining the inverse Laplace Transform also exists that does not depend on either of these approaches and I’ll devote some future space to examining it.
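That said, the Bromwich integral can be evaluated numerically as a spot-check when the transform decays fast enough along the contour. A crude sketch (the test pair $$F(s) = 1/(s+1)^2 \leftrightarrow t e^{-t}$$, the contour abscissa $$\gamma$$, the truncation $$W$$, and the grid density are all my own choices):

```python
import cmath, math

def bromwich(F, t, gamma, W=1000.0, n=100_000):
    """Evaluate g(t) = (1/2 pi i) * integral of F(s) e^{st} ds along Re(s) = gamma,
    truncated at |Im(s)| = W.  Since g(t) is real, the contour is folded in half
    and only the real part of the integrand is accumulated (trapezoid rule)."""
    h = W / n
    total = 0.5 * (F(complex(gamma, 0.0)) * cmath.exp(complex(gamma, 0.0) * t)).real
    for k in range(1, n + 1):
        s = complex(gamma, k * h)
        weight = 0.5 if k == n else 1.0
        total += weight * (F(s) * cmath.exp(s * t)).real
    return total * h / math.pi

F = lambda s: 1.0 / (s + 1.0) ** 2   # transform of t e^{-t}; double pole at s = -1
t = 2.0
numeric = bromwich(F, t, gamma=0.5)  # any gamma to the right of the pole works
exact = t * math.exp(-t)
print(numeric, exact)
```

The $$1/|s|^2$$ falloff of this particular transform is what makes the truncated contour converge; a transform that decays only like $$1/|s|$$ would need far more care.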

Next week, I’ll be taking a look at some additional properties of the Laplace Transform that make convenient the separation of transient and steady-state effects.

Laplace Transforms – Part 1: Basic Definitions

As regular readers know, time evolution of classical and quantum systems is a popular topic on this blog, forming a major thread of the columns to date. But the majority of column space dealing with the initial value problem has either been devoted to time domain analysis or, less frequently, with frequency domain techniques that fall under the Fourier Transform heading. This week starts what will be a lazy meandering through the Laplace Transform.

The Laplace Transform is a favorite of engineers, specifically ones who have to work with feedback control, and it would be worth exploring simply for that reason alone. However, it is also curious that its existence is almost wholly absent from the standard training for physicists. Exactly why the latter tend to use the Fourier transform at the expense of all else can be confidently guessed at (the Fourier Transform seems to be built into Quantum Mechanics), but a deeper understanding will result simply by comparing and contrasting the two methods.

Well before that last goal can be reached, the Laplace Transform must be examined on its own merits.

The Laplace Transform is a functional that maps input functions to output functions via

\[ {\mathcal L}[f(t)] = \int_0^{\infty} dt \, e^{-st} f(t) \; .\]

It is often written in a shorthand way as

\[ {\mathcal L}[f(t)] \equiv F(s) \; , \]

since this notation makes for convenient manipulation.

The properties of the Laplace Transform that make it so useful can be summarized as:

  1. it is a linear transformation,
  2. it turns operations of differentiation and integration into algebraic operations,
  3. it incorporates the initial conditions seamlessly, and
  4. it is general, thus affording a single approach applicable to an extraordinarily wide range of situations.

One caveat is needed on point 4. The Laplace Transform can only be applied when the system obeys a set of linear evolution equations. In this regard, it is unable to tackle the range and scope of problems that the Lie Series can deal with, but its convenience for linear (or linearized) systems makes it a favorite of the controls engineer.

Let’s look at the first 3 points in more detail. For an operation $${\mathcal O}$$ to be linear, the following relationship must hold

\[ {\mathcal O}[a F + b G] = a {\mathcal O}[F] + b {\mathcal O}[G] \; ,\]

where $$a,b$$ are constants and $$F,G$$ are functions. The properties of the integral ensure that the Laplace Transform is linear as follows

\[ {\mathcal L}[a f(t) + b g(t) ] = \int_0^{\infty} dt \, e^{-st} \left( a f(t) + b g(t) \right) \\ = a \int_0^{\infty} dt \, e^{-st} f(t) + b \int_0^{\infty} dt \, e^{-st} g(t) \\ = a {\mathcal L}[f(t)] + b {\mathcal L}[g(t)] \; .\]

The Laplace Transform is a machine which takes an input function and maps it to an output function. Differentiation is also a machine that maps input functions to output functions. It’s logical to ask how these two operations interact.

To figure this out, feed the derivative of a function $$f'(t)$$ into the Laplace Transform.

\[ {\mathcal L}[f'(t)] = \int_0^{\infty} dt \, e^{-st} f'(t) \]

and integrate by parts to move the derivative off of $$f(t)$$ and onto the exponential. This gives

\[ {\mathcal L}[f'(t)] = \left. e^{-st} f(t) \right|_0^{\infty} - \int_0^{\infty} dt \, \left( \frac{d}{dt} e^{-st} \right) f(t) \; , \]

which simplifies to

\[ {\mathcal L}[f'(t)] = -f(0) + \int_0^{\infty} dt \, s e^{-st} f(t) = s F(s) - f(0) \; .\]

The same steps can be followed for higher order derivatives but a more useful way is to simply use an induction argument. Define the second derivative to be the derivative of the first derivative and apply the theorem twice.

\[{\mathcal L}[f^{\prime\prime}(t)] = {\mathcal L}[(f'(t))'] = s {\mathcal L}[f'(t)] - f'(0) = s^2 F(s) - s f(0) - f'(0) \; \]

This process can be repeated as often as desired yielding the general form

\[ {\mathcal L}\left[ \frac{d^n}{dt^n} f(t) \right] = s^n F(s) - s^{n-1} f(0) - s^{n-2} f^{(1)}(0) - s^{n-3} f^{(2)}(0) - \ldots \; ,\]

where

\[ f^{(n)}(0) = \left. \frac{d^n}{dt^n} f(t) \right|_0 \; .\]
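The derivative rule is easy to confirm numerically. A minimal sketch (the test function $$f(t) = e^{-2t}$$, for which $$F(s) = 1/(s+2)$$ and $$f(0) = 1$$, is my own choice):

```python
import math

def laplace(f, s, T=40.0, n=4000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt, truncated at t = T (trapezoid rule)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 1.0
fprime = lambda t: -2.0 * math.exp(-2.0 * t)  # f(t) = e^{-2t}, so f'(t) = -2 e^{-2t}
lhs = laplace(fprime, s)                      # L[f'(t)] computed directly
rhs = s * (1.0 / (s + 2.0)) - 1.0             # s F(s) - f(0)
print(lhs, rhs)
```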

Likewise, the Laplace Transform takes integrals of functions to algebraic relations as well. As in the derivative case, start with the most basic form of

\[{\mathcal L}\left[ \int dt' \, f(t') \right] = \int_0^{\infty} dt \; e^{-st} \int dt' \, f(t') \; . \]

An application of an integration-by-parts is next employed with

\[ dV = e^{-st} \, dt \implies V = - \frac{1}{s} e^{-st} \]

and

\[ U = \int dt' \, f(t') \implies \frac{dU}{dt} = f(t) \; , \]

yielding

\[ \int_0^{\infty} dt \, e^{-st} \int dt' \, f(t') = \left.-\frac{1}{s} e^{-st} \int dt' \, f(t') \right|_0^{\infty} - \int_0^{\infty} dt \, \left(-\frac{1}{s} \right) e^{-st} f(t) \; \]

This expression simplifies to

\[ {\mathcal L}\left[ \int dt' \, f(t') \right] = \frac{f^{(-1)}(0)}{s} + \frac{1}{s} F(s) \; ,\]

where $$f^{(-1)}(0) = \left. \int dt' \, f(t') \right|_0$$ is a convenient short-hand.

Repeated applications yields the generalization

\[ {\mathcal L}\left[ \int dt' \, \int dt^{\prime\prime} \, \ldots \int dt^{(n)} \, f(t^{(n)})\right] = \frac{f^{(-n)}(0)}{s} + \frac{f^{(-n+1)}(0)}{s^2} \\ + \ldots + \frac{F(s)}{s^n} \; .\]
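The single-integration rule can be confirmed numerically as well. A minimal sketch (my test function is $$f(t) = \cos t$$, whose antiderivative vanishing at $$0$$ is $$\sin t$$, so $$f^{(-1)}(0) = 0$$ and $$F(s) = s/(s^2+1)$$):

```python
import math

def laplace(f, s, T=40.0, n=4000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt, truncated at t = T (trapezoid rule)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 1.0
# f(t) = cos t; its antiderivative vanishing at t = 0 is sin t, so f^{(-1)}(0) = 0
lhs = laplace(math.sin, s)              # L[ integral of f ]
rhs = 0.0 / s + (s / (s**2 + 1.0)) / s  # f^{(-1)}(0)/s + F(s)/s
print(lhs, rhs)
```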

In contrast, the Laplace Transform of products and divisions results in derivatives and integrations in $$s$$-space.

First consider multiplication of $$f(t)$$ by some power of $$t$$:

\[ {\mathcal L}[t f(t)] = \int_0^{\infty} dt \, e^{-st} t f(t) \; .\]

Recognize that

\[ \frac{d}{ds} \left( e^{-st} \right) = -t e^{-st} \;.\]

Thus

\[ {\mathcal L}[t f(t)] = -\int_0^{\infty} dt \, \frac{d}{ds} \left( e^{-st} \right) f(t) = -\frac{d}{ds} \int_0^{\infty} dt \, e^{-st} f(t) = -\frac{d}{ds} F(s) \; .\]

This is easily generalized to any arbitrary power to yield

\[ {\mathcal L}[t^n f(t) ] = (-1)^n \frac{d^n}{ds^n} {\mathcal L}[f(t)] = (-1)^n \frac{d^n}{ds^n} F(s) \; .\]
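A quick numerical check of the multiplication rule (the test function $$f(t) = e^{-t}$$, for which $$-F'(s) = 1/(s+1)^2$$, is my own choice):

```python
import math

def laplace(f, s, T=40.0, n=4000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt, truncated at t = T (trapezoid rule)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 1.0
lhs = laplace(lambda t: t * math.exp(-t), s)  # L[t f(t)] with f(t) = e^{-t}
rhs = 1.0 / (s + 1.0) ** 2                    # -(d/ds) 1/(s+1) = 1/(s+1)^2
print(lhs, rhs)
```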

As the last piece, let’s look at division by a power of $$t$$, starting with the simplest case

\[ {\mathcal L}[f(t)/t] = \int_0^{\infty} dt \, e^{-st} f(t)/t \; .\]

Recognizing that

\[ \int_s^{\infty} ds' \, e^{-s' t} = e^{-st}/t \]

allows for the Laplace Transform to become

\[{\mathcal L}[f(t)/t] = \int_0^{\infty} dt \, f(t) \int_s^{\infty} ds' \, e^{-s' t} = \int_s^{\infty} ds' \, F(s') \; .\]

This is also easily generalized to arbitrary powers to yield

\[{\mathcal L}[f(t)/t^n] = \int_s^{\infty} ds' \, \int_{s'}^{\infty} ds^{\prime\prime} \ldots \int_{s^{(n-1)}}^{\infty} ds^{(n)} \, F(s^{(n)}) \;.\]
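A quick numerical check of the division rule (my test function is $$f(t) = \sin t$$, for which $$\int_s^\infty ds'/(s'^2+1) = \arctan(1/s)$$):

```python
import math

def laplace(f, s, T=40.0, n=4000):
    """Approximate F(s) = integral_0^inf f(t) e^{-st} dt, truncated at t = T (trapezoid rule)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

def sinc(t):
    # sin(t)/t with the removable singularity at t = 0 filled in
    return 1.0 if t == 0.0 else math.sin(t) / t

s = 1.0
lhs = laplace(sinc, s)    # L[sin(t)/t]
rhs = math.atan(1.0 / s)  # integral_s^inf ds'/(s'^2 + 1)
print(lhs, rhs)
```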

Next week, I’ll take a look at what functions possess a Laplace Transform and how the Laplace Transform relates to the Fourier Transform.