
Variable-step integration of chaotic systems

The University of Texas at Austin

Preamble: Run the cells below to import the necessary Python packages


import numpy as np

# Import local plotting functions and in-notebook display functions
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline

Previously, we encountered fixed-step integrators, which take steps towards the solution of an initial value problem using a fixed time step $\Delta t$. In this notebook, we will explore variable-step integrators, which take steps of varying size to achieve a desired accuracy.

Relationship to optimization

In optimization, we saw second-order methods like Newton’s method, in which the descent step scales inversely with the local curvature. In numerical integration, we will see methods whose time step scales inversely with the local error of the solution. This is a common theme in numerical methods: we want to take small steps where the problem varies rapidly, and large steps where it is smooth.

The Hadley Cell model

  • As our toy system for today’s notebook, we will use the low-order Hadley cell model, which is derived from the coupled dynamics of atmospheric temperature and water vapor

  • We’ll start by generating a reference trajectory using an existing adaptive integrator, scipy.integrate.solve_ivp

class Hadley:

    def __init__(self, a=0.2, b=4, f=9, g=1):
        self.a = a
        self.b = b
        self.f = f
        self.g = g
        self.name = self.__class__.__name__

    def rhs(self, t, X):
        x, y, z = X
        xdot = -(y ** 2) - z ** 2 - self.a * x + self.a * self.f
        ydot = x * y - self.b * x * z - y + self.g
        zdot = self.b * x * y + x * z - z
        return np.array([xdot, ydot, zdot])
    
    def __call__(self, t, X):
        return self.rhs(t, X)


from scipy.integrate import solve_ivp
ic = np.array([1.37, 0.93, 0.64])

hadley = Hadley()
sol = solve_ivp(hadley, [0, 2000], ic, method='RK45', rtol=1e-6, atol=1e-6)

plt.figure()
plt.plot(sol.y[0], sol.y[1], color='k', linewidth=0.05)


plt.figure()
sol = solve_ivp(hadley, [0, 100], ic, method='RK45', rtol=1e-6, atol=1e-6)
plt.plot(sol.y[0], sol.y[1],  color='k')

Variable-step methods

Previously, we solved the initial value problem

$$\begin{align} \dot{\mathbf{x}}(t) &= \mathbf{f}(t, \mathbf{x}(t)) \\ \mathbf{x}(t_0) &= \mathbf{x}_0 \end{align}$$

by taking steps of size $\Delta t$ in time, for example by using the Euler method,

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t \, \mathbf{f}(t_n, \mathbf{x}_n)$$

We will instead add an additional update step to the Euler method, which will allow us to take steps of varying size. This is reminiscent of our optimization module, where for certain methods we changed the learning rate adaptively in each step.

Euler’s method with a variable step size

Recall that we defined the local error as the error introduced into our integration solution by taking a single step of size $\Delta t$,

$$\begin{align} \mathbf{e}_n &= \mathbf{x}(t_n) - \mathbf{x}_n \\ &= \mathbf{x}(t_n) - \mathbf{x}_{n-1} - \Delta t \, \mathbf{f}(t_{n-1}, \mathbf{x}_{n-1}) \end{align}$$

where $\mathbf{x}(t_n)$ is the exact solution at time $t_n$.

How do we reduce the error?

In principle, we can always reduce the error of our integration by taking a smaller step size $\Delta t$. But how much should we decrease $\Delta t$? We can use the local error to decide. At each step, we compute an estimate of the local error $\mathbf{e}_n$ by comparing our current solution $\mathbf{x}_n$ to the solution we would have arrived at if we had instead taken two half steps of size $\Delta t/2$.

We call this local error estimate $\tilde{\mathbf{e}}_n$,

$$\mathbf{x}_{n-1/2} \equiv \mathbf{x}_{n-1} + \frac{\Delta t}{2} \, \mathbf{f}(t_{n-1}, \mathbf{x}_{n-1})$$

$$\tilde{\mathbf{e}}_n = \bigg(\mathbf{x}_{n-1/2} + \frac{\Delta t}{2} \, \mathbf{f}(t_{n-1/2}, \mathbf{x}_{n-1/2}) \bigg) - \bigg(\mathbf{x}_{n-1} + \Delta t \, \mathbf{f}(t_{n-1}, \mathbf{x}_{n-1})\bigg)$$
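To make this concrete, here is a one-step numerical check on the scalar test problem $\dot{x} = -x$ (chosen purely for illustration; it is not part of the Hadley model above):

f = lambda t, x: -x   # scalar test problem with exact solution x(t) = exp(-t)
x0, dt = 1.0, 0.1

x_full = x0 + dt * f(0.0, x0)                      # one full Euler step
x_half = x0 + 0.5 * dt * f(0.0, x0)                # first half step
x_two = x_half + 0.5 * dt * f(0.5 * dt, x_half)    # second half step

err_est = abs(x_two - x_full)    # local error estimate
print(x_full, x_two, err_est)    # 0.9, 0.9025, 0.0025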

We define a cutoff for $\|\tilde{\mathbf{e}}_n\|$ known as the relative tolerance, $r$. If our error is below this threshold, then we accept the step at the current value of $\Delta t$. If the error exceeds this threshold, then instead we shrink the step size $\Delta t$ in proportion to the error $\|\tilde{\mathbf{e}}_n\|$:

$$\Delta t_{n+1} = 0.9 \, \Delta t_n \, \bigg(\frac{r}{\|\tilde{\mathbf{e}}_n\|}\bigg)^{1/2}$$

The factor of $r$ in the numerator permits larger steps when the tolerance is loose, while the factor of $\|\tilde{\mathbf{e}}_n\|$ in the denominator forces smaller steps when the error is large. The factor of 0.9 is a safety margin that biases the update toward slightly smaller steps, making it less likely that the next attempt is rejected.

class BaseAdaptiveIntegrator:
    """A base class for adaptive numerical integrators"""

    def __init__(self, dt=1e-3, rtol=1e-2, max_iter=1e8, safety_factor=0.9):
        self.dt = dt
        self.name = self.__class__.__name__
        self.rtol = rtol
        self.max_iter = max_iter
        self.safety_factor = safety_factor

    def integrate(self, f, tspan, y):
        """Integrate the system using the Euler method"""
        
        dt_init = self.dt
        y_vals = [y.copy()]
        t = tspan[0]
        t_vals = [t]
        num_iter = 0
        while t < tspan[-1]:
            num_iter += 1

            ## Steppers implemented in child classes
            y_coarse, y_fine = self.update(f, t, y_vals[-1])

            ## Error estimate
            err = np.linalg.norm(y_coarse - y_fine)

            ## Accept the step if the error is small enough, and reset the
            ## step size to its initial value for the next attempt
            if err < self.rtol:
                t += self.dt
                y_vals.append(y_fine)
                t_vals.append(t)
                self.dt = dt_init

            ## Otherwise, reduce step size
            else:
                self.dt *= self.safety_factor * np.sqrt(self.rtol / err)

            if num_iter > self.max_iter:
                print("Max iterations reached")
                break


        self.t, self.y = np.array(t_vals), np.array(y_vals)
        return self
    
    def update(self, f, t, y):
        """
        Update the solution using the integration scheme implemented in the child class
        """
        raise NotImplementedError
        

class EulerTwoStep(BaseAdaptiveIntegrator):

    def update(self, f, t, y):
        """Update the solution using the explicit Euler method"""

        ## Euler step
        y_next = y + self.dt * f(t, y)

        ## Euler step with half step
        y_half = y + 0.5 * self.dt * f(t, y)
        y_next_half = y_half + 0.5 * self.dt * f(t + 0.5 * self.dt, y_half)

        return y_next, y_next_half

Compare two-step adaptive Euler to a built-in adaptive integrator

plt.figure(figsize=(14, 5))
integrator = EulerTwoStep(dt=1e-2, rtol=1e-7)
integrator.integrate(hadley, [0, 100], ic)
plt.subplot(1, 2, 1)
plt.plot(integrator.y[:, 0], integrator.y[:, 1], color='k')
plt.title("Euler Two Step")

plt.subplot(1, 2, 2)
sol = solve_ivp(hadley, [0, 100], ic, method='Radau', rtol=1e-6, atol=1e-6)
plt.plot(sol.y[0], sol.y[1],  color='k')
plt.title("Scipy")
plt.figure(figsize=(10, 6))


plt.subplot(3, 1, 1)
plt.plot(integrator.y[:, 0])
plt.xlabel("Step index")
plt.ylabel("Solution x")

plt.subplot(3, 1, 2)
plt.plot(np.diff(integrator.t))
plt.xlabel("Step index")
plt.ylabel("Step size")
plt.title("Euler Two Step")

plt.subplot(3, 1, 3)
plt.plot(np.diff(sol.t))
plt.xlabel("Step index")
plt.ylabel("Step size")
plt.title("Scipy")


plt.figure()
plt.scatter(integrator.y[:-1, 0], integrator.y[:-1, 1],  c=np.diff(integrator.t), s=1)
plt.colorbar(label="Step size")

Some things to try

  • Try varying the relative tolerance $r$ (the rtol argument) and see how the solution changes; a small sweep is sketched below the list.
  • Try varying the initial step size $\Delta t_0$ (the dt argument) and see how the solution changes.
  • What are some drawbacks of this method? How might we improve it?
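As a starting point for the first exercise, here is a minimal sketch of a tolerance sweep, assuming the EulerTwoStep class, the hadley system, and the initial condition ic defined above:

for rtol in [1e-5, 1e-6, 1e-7]:
    integ = EulerTwoStep(dt=1e-2, rtol=rtol)
    integ.integrate(hadley, [0, 100], ic)
    print(f"rtol={rtol:.0e}: {len(integ.t)} accepted steps")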

Embedded methods

The Runge-Kutta-Fehlberg method is an embedded variable-step method: it consists of a pair of Runge-Kutta formulas that share function evaluations, with one formula one order more accurate than the other. We can think of embedded methods as generalizing the idea of comparing a full step against two half steps to estimate the local error in Euler’s method.

The embedded version of Runge-Kutta 4(5) is given by

$$\begin{align} \mathbf{k}_1 &= \Delta t \, \mathbf{f}(t_n, \mathbf{x}_n) \\ \mathbf{k}_2 &= \Delta t \, \mathbf{f}\Big(t_n + \frac{\Delta t}{4}, \mathbf{x}_n + \frac{\mathbf{k}_1}{4}\Big) \\ \mathbf{k}_3 &= \Delta t \, \mathbf{f}\Big(t_n + \frac{3\Delta t}{8}, \mathbf{x}_n + \frac{3\mathbf{k}_1}{32} + \frac{9\mathbf{k}_2}{32}\Big) \\ \mathbf{k}_4 &= \Delta t \, \mathbf{f}\Big(t_n + \frac{12\Delta t}{13}, \mathbf{x}_n + \frac{1932\mathbf{k}_1}{2197} - \frac{7200\mathbf{k}_2}{2197} + \frac{7296\mathbf{k}_3}{2197}\Big) \\ \mathbf{k}_5 &= \Delta t \, \mathbf{f}\Big(t_n + \Delta t, \mathbf{x}_n + \frac{439\mathbf{k}_1}{216} - 8\mathbf{k}_2 + \frac{3680\mathbf{k}_3}{513} - \frac{845\mathbf{k}_4}{4104}\Big) \\ \mathbf{k}_6 &= \Delta t \, \mathbf{f}\Big(t_n + \frac{\Delta t}{2}, \mathbf{x}_n - \frac{8\mathbf{k}_1}{27} + 2\mathbf{k}_2 - \frac{3544\mathbf{k}_3}{2565} + \frac{1859\mathbf{k}_4}{4104} - \frac{11\mathbf{k}_5}{40}\Big) \\ \mathbf{x}_{n+1} &= \mathbf{x}_n + \frac{25\mathbf{k}_1}{216} + \frac{1408\mathbf{k}_3}{2565} + \frac{2197\mathbf{k}_4}{4104} - \frac{\mathbf{k}_5}{5} \\ \mathbf{y}_{n+1} &= \mathbf{x}_n + \frac{16\mathbf{k}_1}{135} + \frac{6656\mathbf{k}_3}{12825} + \frac{28561\mathbf{k}_4}{56430} - \frac{9\mathbf{k}_5}{50} + \frac{2\mathbf{k}_6}{55} \end{align}$$

where $\mathbf{y}_{n+1}$ is the 5th-order solution and $\mathbf{x}_{n+1}$ is the 4th-order solution. The Butcher tableau for this method is given by

$$\begin{array}{c|cccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & 0 & 0 & 0 & 0 & 0 \\ \frac{3}{8} & \frac{3}{32} & \frac{9}{32} & 0 & 0 & 0 & 0 \\ \frac{12}{13} & \frac{1932}{2197} & -\frac{7200}{2197} & \frac{7296}{2197} & 0 & 0 & 0 \\ 1 & \frac{439}{216} & -8 & \frac{3680}{513} & -\frac{845}{4104} & 0 & 0 \\ \frac{1}{2} & -\frac{8}{27} & 2 & -\frac{3544}{2565} & \frac{1859}{4104} & -\frac{11}{40} & 0 \\ \hline & \frac{25}{216} & 0 & \frac{1408}{2565} & \frac{2197}{4104} & -\frac{1}{5} & 0 \\ & \frac{16}{135} & 0 & \frac{6656}{12825} & \frac{28561}{56430} & -\frac{9}{50} & \frac{2}{55} \end{array}$$
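As a sanity check on the coefficients, we can encode the stage matrix and the node vector as arrays (the names A and c are illustrative) and verify the consistency condition $c_i = \sum_j a_{ij}$:

A = np.array([
    [0, 0, 0, 0, 0],
    [1/4, 0, 0, 0, 0],
    [3/32, 9/32, 0, 0, 0],
    [1932/2197, -7200/2197, 7296/2197, 0, 0],
    [439/216, -8, 3680/513, -845/4104, 0],
    [-8/27, 2, -3544/2565, 1859/4104, -11/40],
])
c = np.array([0, 1/4, 3/8, 12/13, 1, 1/2])
print(np.allclose(A.sum(axis=1), c))  # True: each stage time equals its row sum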

Just like the two-step Euler method, we estimate the local (step-wise) truncation error by comparing the two solutions. The error estimate is given by

$$\mathbf{e}_{n+1} = \mathbf{y}_{n+1} - \mathbf{x}_{n+1}$$

This vector is converted to a scalar, per-unit-time error measure,

$$\mathcal{E} = \frac{1}{\Delta t} \left\| \mathbf{e}_{n+1} \right\|_2$$

The error is then used to determine the next step size, $\Delta t_{n+1}$, by

$$\Delta t_{n+1} = 0.9 \, \Delta t \left( \frac{\epsilon}{\mathcal{E}} \right)^{1/4}$$

where $\epsilon$ is the desired accuracy. The step size is additionally bounded between $\Delta t_{\text{min}}$ and $\Delta t_{\text{max}}$.
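Note that the class below reuses the simpler square-root controller inherited from BaseAdaptiveIntegrator. A minimal sketch of the fourth-root controller with explicit bounds described above might look like this (dt_min and dt_max are illustrative names, not attributes of the class):

def rkf_next_step_size(dt, err, eps, dt_min=1e-8, dt_max=1.0):
    E = err / dt                             # per-unit-time error estimate
    dt_new = 0.9 * dt * (eps / E) ** 0.25    # fourth-root controller
    return min(max(dt_new, dt_min), dt_max)  # clamp to [dt_min, dt_max]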

class RungeKuttaFehlberg(BaseAdaptiveIntegrator):

    def update(self, f, t, y):
        """Update the solution using the RKF45 method"""

        k1 = f(t, y)
        k2 = f(t + 0.25 * self.dt, y + 0.25 * self.dt * k1)
        k3 = f(t + 0.375 * self.dt, y + 0.09375 * self.dt * k1 + 0.28125 * self.dt * k2)
        k4 = f(t + 12/13 * self.dt, y + 1932/2197 * self.dt * k1 - 7200/2197 * self.dt * k2 + 7296/2197 * self.dt * k3)
        k5 = f(t + self.dt, y + 439/216 * self.dt * k1 - 8 * self.dt * k2 + 3680/513 * self.dt * k3 - 845/4104 * self.dt * k4)
        k6 = f(t + 0.5 * self.dt, y - 8/27 * self.dt * k1 + 2 * self.dt * k2 - 3544/2565 * self.dt * k3 + 1859/4104 * self.dt * k4 - 11/40 * self.dt * k5)

        y_next = y + self.dt * (25/216 * k1 + 1408/2565 * k3 + 2197/4104 * k4 - 1/5 * k5)
        y_next_half = y + self.dt * (16/135 * k1 + 6656/12825 * k3 + 28561/56430 * k4 - 9/50 * k5 + 2/55 * k6)

        return y_next, y_next_half
plt.figure(figsize=(14, 5))
integrator = RungeKuttaFehlberg(dt=1e-1, rtol=1e-6)
integrator.integrate(hadley, [0, 100], ic)
plt.subplot(1, 2, 1)
plt.plot(integrator.y[:, 0], integrator.y[:, 1], color='k')
plt.title("RK Fehlberg")


plt.subplot(1, 2, 2)
sol0 = solve_ivp(hadley, [0, 100], ic, method='Radau', rtol=1e-6, atol=1e-6, max_step=1e-1)
plt.plot(sol0.y[0], sol0.y[1],  color='k')
plt.title("Scipy")
plt.figure(figsize=(10, 6))


plt.subplot(3, 1, 1)
plt.plot(integrator.y[:, 0])
plt.xlabel("Step index")
plt.ylabel("Solution x")

plt.subplot(3, 1, 2)
plt.plot(np.diff(integrator.t))
plt.xlabel("Step index")
plt.ylabel("Step size")
plt.title("RK Fehlberg")

plt.subplot(3, 1, 3)
plt.plot(np.diff(sol0.t))
plt.xlabel("Step index")
plt.ylabel("Step size")
plt.title("Scipy")


plt.figure()
plt.scatter(integrator.y[:-1, 0], integrator.y[:-1, 1],  c=np.diff(integrator.t), s=4)
plt.colorbar(label="Step size")

Dormand-Prince

  • This method is an embedded 5(4) Runge-Kutta pair with 7 stages, and it is the default method (RK45) used in scipy.integrate.solve_ivp (a quick check appears below the list)

  • In MATLAB, this method is called ode45
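As a quick check of the default, calling solve_ivp without a method argument gives the same trajectory as requesting RK45 explicitly:

sol_default = solve_ivp(hadley, [0, 10], ic)
sol_rk45 = solve_ivp(hadley, [0, 10], ic, method='RK45')
print(np.allclose(sol_default.y, sol_rk45.y))  # True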

class DormandPrince(BaseAdaptiveIntegrator):

    def update(self, f, t, y):
        """Update the solution using the DormandPrince method"""

        k1 = f(t, y)
        k2 = f(t + 1/5 * self.dt, y + 1/5 * self.dt * k1)
        k3 = f(t + 3/10 * self.dt, y + 3/40 * self.dt * k1 + 9/40 * self.dt * k2)
        k4 = f(t + 4/5 * self.dt, y + 44/45 * self.dt * k1 - 56/15 * self.dt * k2 + 32/9 * self.dt * k3)
        k5 = f(t + 8/9 * self.dt, y + 19372/6561 * self.dt * k1 - 25360/2187 * self.dt * k2 + 64448/6561 * self.dt * k3 - 212/729 * self.dt * k4)
        k6 = f(t + self.dt, y + 9017/3168 * self.dt * k1 - 355/33 * self.dt * k2 + 46732/5247 * self.dt * k3 + 49/176 * self.dt * k4 - 5103/18656 * self.dt * k5)
        k7 = f(t + self.dt, y + 35/384 * self.dt * k1 + 0 + 500/1113 * self.dt * k3 + 125/192 * self.dt * k4 - 2187/6784 * self.dt * k5 + 11/84 * self.dt * k6)

        ## y_next is the 5th-order solution used to advance the state;
        ## y_next_half is the 4th-order solution used only for the error estimate
        y_next = y + self.dt * (35/384 * k1 + 0 + 500/1113 * k3 + 125/192 * k4 - 2187/6784 * k5 + 11/84 * k6)
        y_next_half = y + self.dt * (5179/57600 * k1 + 0 + 7571/16695 * k3 + 393/640 * k4 - 92097/339200 * k5 + 187/2100 * k6 + 1/40 * k7)

        return y_next, y_next_half
plt.figure(figsize=(14, 5))
integrator = DormandPrince(dt=1e-1, rtol=1e-6)
integrator.integrate(hadley, [0, 100], ic)
plt.subplot(1, 2, 1)
plt.plot(integrator.y[:, 0], integrator.y[:, 1], color='k')
plt.title("RK Fehlberg")


plt.subplot(1, 2, 2)
sol0 = solve_ivp(hadley, [0, 100], ic, method='Radau', rtol=1e-6, atol=1e-6, max_step=1e-1)
plt.plot(sol0.y[0], sol0.y[1],  color='k')
plt.title("Scipy")

Stochastic differential equations

What if our differential equation has a stochastic component? We can use the same machinery as before, but the noise term requires special treatment. We will use the Euler-Maruyama method, which is a first-order method for stochastic differential equations. The method is given by

$$\begin{align} \mathbf{x}_{n+1} &= \mathbf{x}_n + \Delta t\, \mathbf{f}(\mathbf{x}_n) + \sqrt{\Delta t}\, g(\mathbf{x}_n)\, \xi_n \\ \xi_n &\sim \mathcal{N}(0, 1) \end{align}$$
  • Notice that this is similar to how we implemented Ornstein-Uhlenbeck dynamics earlier in the course. We are just replacing the harmonic oscillator forcing with our general right hand side.

  • We scale our noise term by $\sqrt{\Delta t}$. Recall that Brownian motion has a mean square displacement that grows linearly in time, which corresponds to the average distance from the origin scaling as $\sqrt{t}$ (see the quick check after this list).

  • Nice discussion of more sophisticated methods, as well as Ito and Stratonovich calculus, by Tim Sauer here
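Here is the quick check mentioned above, assuming pure Brownian motion (zero drift, unit noise amplitude); the variable names are illustrative:

rng = np.random.default_rng(0)
dt, n_steps, n_paths = 1e-3, 1000, 2000
increments = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
print(np.mean(paths[:, -1] ** 2))  # close to T = dt * n_steps = 1.0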

class AdaptiveEulerMaruyama(BaseAdaptiveIntegrator):

    def __init__(self, noise=1e-2, **kwargs):
        super().__init__(**kwargs)
        self.noise = noise

    def update(self, f, t, y):
        """Update the solution using the Euler-Maruyama method"""

        noise = self.noise * np.random.randn(*y.shape) * np.sqrt(self.dt)
        y_next = y + self.dt * f(t, y) + noise
        y_next_half = y + 0.5 * self.dt * f(t, y) + noise

        return y_next, y_next_half
    

integrator = AdaptiveEulerMaruyama(dt=1e-2, noise=1e-1)
integrator.integrate(hadley, [0, 100], ic)
plt.plot(integrator.y[:, 0], integrator.y[:, 1], color='k')
plt.title("Adaptive Euler-Maruyama")

Symplectic integration

  • Suppose that we want to integrate a system with an explicit conserved quantity, like a Hamiltonian or a stream function.

  • We will pick a dynamical system with a conserved invariant: the total mechanical energy



class DoublePendulum:

    def __init__(self, g=9.82, l1=1.0, l2=1.0, m1=1.0, m2=1.0):
        self.g = g
        self.l1 = l1
        self.l2 = l2
        self.m1 = m1
        self.m2 = m2

    def __call__(self, t, x):
        g = self.g
        l1 = self.l1
        l2 = self.l2
        m1 = self.m1
        m2 = self.m2

        theta1, theta2, omega1, omega2 = x

        delta_theta = theta1 - theta2

        denom = (2 * m1 + m2 - m2 * np.cos(2 * delta_theta))

        dtheta1 = omega1
        domega1 = ((-g * (2 * m1 + m2) * np.sin(theta1) - m2 * g * np.sin(theta1 - 2 * theta2) - 2 * np.sin(delta_theta) * m2 * (omega2**2 * l2 + omega1**2 * l1 * np.cos(delta_theta))) / (l1 * denom))

        dtheta2 = omega2
        domega2 = ((2 * np.sin(delta_theta) * (omega1**2 * l1 * (m1 + m2) + g * (m1 + m2) * np.cos(theta1) + omega2**2 * l2 * m2 * np.cos(delta_theta))) / (l2 * denom))

        return np.array([dtheta1, dtheta2, domega1, domega2])
    
    def convert_to_physical(self, y):
        """
        Given a vector of coordinates in the form angle1, angle2, omega1, omega2, 
        convert to physical coordinates of the tips of the pendulums
        """
        l1 = self.l1
        l2 = self.l2
        theta1, theta2 = y[:, 0], y[:, 1]
        x1 = l1 * np.sin(theta1)
        y1 = -l1 * np.cos(theta1)
        x2 = x1 + l2 * np.sin(theta2)
        y2 = y1 - l2 * np.cos(theta2)
        return np.array([x1, y1, x2, y2]).T
    
    def compute_energy(self, y):
        g = self.g
        l1 = self.l1
        l2 = self.l2
        m1 = self.m1
        m2 = self.m2
        theta1, theta2 = y[:, 0], y[:, 1]
        omega1, omega2 = y[:, 2], y[:, 3]
        
        delta_theta = theta1 - theta2

        T = 0.5 * m1 * l1**2 * omega1**2 + 0.5 * m2 * (l1**2 * omega1**2 + l2**2 * omega2**2 + 2 * l1 * l2 * omega1 * omega2 * np.cos(delta_theta))
        U = - m1 * g * l1 * np.cos(theta1) - m2 * g * (l1 * np.cos(theta1) + l2 * np.cos(theta2))
        
        E = T + U
        return E
    
class Euler:
    """The Euler integration scheme"""

    def __init__(self, dt=1e-3):
        self.dt = dt
        self.name = self.__class__.__name__

    def integrate(self, f, tspan, y):
        """Integrate the system using the symplectic Euler method"""
        
        m = len(y) // 2
        y_vals = [y.copy()]
        t = tspan[0]
        t_vals = [t]
        while t < tspan[-1]:
            
            yprev = y_vals[-1].copy()
            y_vals.append(yprev + self.dt * f(t, yprev))

            t += self.dt
            t_vals.append(t)

        self.t, self.y = np.array(t_vals), np.array(y_vals)
        return self

pendulum = DoublePendulum()

from scipy.integrate import solve_ivp

# Initial conditions
x0 = np.array([np.pi / 2.1, np.pi / 2, 0.1, 0.2])
t0 = 0.0

# Alternative: generate the trajectory with scipy's adaptive integrator instead
# integrator = solve_ivp(pendulum, [t0, 40], x0, method='RK45', max_step=1e-3)
# y = pendulum.convert_to_physical(integrator.y.T)

# Set up integrator
integrator = Euler(dt=1e-3)
integrator.integrate(pendulum, [t0, 100], x0)
# Convert to physical coordinates
y = pendulum.convert_to_physical(integrator.y)

plt.figure()
plt.plot(y[:, 2], y[:, 3], 'k', linewidth=0.1)
plt.xlabel('x')
plt.ylabel('y')

plt.figure()
plt.plot(pendulum.compute_energy(integrator.y))
plt.xlabel('time')
plt.ylabel('Total Mechanical Energy')
# For comparison, a long non-symplectic integration with scipy's RK45
integrator = solve_ivp(pendulum, [t0, 1000], x0, method='RK45', max_step=1e-3)
# Convert to physical coordinates
y = pendulum.convert_to_physical(integrator.y.T)

Symplectic methods preserve the energy

  • We can use our knowledge of the existence of a conserved quantity to design a method that respects the conservation law
  • Symplectic integrators are typically fixed-step methods; they may be implicit if advancing the state requires solving a nonlinear system (e.g., for a non-separable Hamiltonian)
  • More sophisticated methods: Verlet and Leapfrog methods (higher order), and splitting methods

Suppose that our Hamiltonian describes a harmonic oscillator,

$$\mathcal{H}(t, \mathbf{x}, \mathbf{p}) = \frac{1}{2} \mathbf{p}^T \mathbf{p} + \frac{1}{2} \mathbf{x}^T \mathbf{x}$$

We can write the equations of motion as

$$\begin{align} \frac{d \mathbf{x}}{dt} &= \mathbf{p} \\ \frac{d \mathbf{p}}{dt} &= -\mathbf{x} \end{align}$$

Suppose we perform an Euler update on the momentum first,

$$\mathbf{p}_{n+1} = \mathbf{p}_n - \Delta t \, \mathbf{x}_n$$

Given this update, we want to find an update to the position that will preserve the Hamiltonian:

$$\mathcal{H}(t_{n+1}, \mathbf{x}_{n+1}, \mathbf{p}_{n+1}) = \mathcal{H}(t_{n}, \mathbf{x}_{n}, \mathbf{p}_{n})$$

Inserting our expression, we can solve for the update

$$\frac{1}{2} \mathbf{p}_n^T \mathbf{p}_n + \frac{1}{2} \mathbf{x}_n^T \mathbf{x}_n = \frac{1}{2} \mathbf{p}_{n+1}^T \mathbf{p}_{n+1} + \frac{1}{2} \mathbf{x}_{n+1}^T \mathbf{x}_{n+1}$$

$$\mathbf{x}_{n+1}^T \mathbf{x}_{n+1} = \mathbf{p}_n^T \mathbf{p}_n + \mathbf{x}_n^T \mathbf{x}_n - (\mathbf{p}_n - \Delta t \, \mathbf{x}_n)^T (\mathbf{p}_n - \Delta t \, \mathbf{x}_n)$$

$$\mathbf{x}_{n+1}^T \mathbf{x}_{n+1} = (1 - \Delta t^2)\, \mathbf{x}_n^T \mathbf{x}_n + 2 \Delta t \, \mathbf{x}_n^T \mathbf{p}_n$$

To first order in $\Delta t$, this condition is satisfied by the update

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t \, \mathbf{p}_{n+1}$$

since, with $\mathbf{p}_{n+1} = \mathbf{p}_n - \Delta t \, \mathbf{x}_n$, we have $(\mathbf{x}_n + \Delta t \, \mathbf{p}_{n+1})^T (\mathbf{x}_n + \Delta t \, \mathbf{p}_{n+1}) = \mathbf{x}_n^T \mathbf{x}_n + 2 \Delta t \, \mathbf{x}_n^T \mathbf{p}_n + \mathcal{O}(\Delta t^2)$. The per-step energy error is therefore $\mathcal{O}(\Delta t^2)$ and, unlike the explicit Euler method, it stays bounded rather than accumulating.
Symplectic integration

Energy-conserving integration can therefore be performed by applying the following steps sequentially

$$\mathbf{p}_{n+1} = \mathbf{p}_n - \Delta t \, \mathbf{x}_n$$

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t \, \mathbf{p}_{n+1}$$
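Here is a minimal check of this pair of updates on a one-dimensional harmonic oscillator (scalar x and p are an illustrative special case of the Hamiltonian above):

x, p, dt = 1.0, 0.0, 1e-2
energies = []
for _ in range(10000):
    p = p - dt * x   # momentum update using the old position
    x = x + dt * p   # position update using the new momentum
    energies.append(0.5 * (x ** 2 + p ** 2))
print(min(energies), max(energies))  # stays bounded near the initial value 0.5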

For a general Hamiltonian, we can write the equations of motion as

$$\begin{align} \frac{d \mathbf{x}}{dt} &= \frac{\partial \mathcal{H}}{\partial \mathbf{p}} \\ \frac{d \mathbf{p}}{dt} &= -\frac{\partial \mathcal{H}}{\partial \mathbf{x}} \end{align}$$

We can write the symplectic Euler update as

$$\begin{align} \mathbf{x}_{n+1} &= \mathbf{x}_n + \Delta t \, \frac{\partial \mathcal{H}}{\partial \mathbf{p}}\bigg|_{(\mathbf{x}_n, \mathbf{p}_n)} \\ \mathbf{p}_{n+1} &= \mathbf{p}_n - \Delta t \, \frac{\partial \mathcal{H}}{\partial \mathbf{x}}\bigg|_{(\mathbf{x}_{n+1}, \mathbf{p}_n)} \end{align}$$

where the momentum update uses the freshly updated positions, exactly as implemented below.
class SymplecticEuler:

    def __init__(self, dt=1e-3):
        self.dt = dt
        self.name = self.__class__.__name__

    def integrate(self, f, tspan, y):
        """Integrate the system using the symplectic Euler method"""
        
        m = len(y) // 2
        y_vals = [y.copy()]
        t = tspan[0]
        t_vals = [t]
        while t < tspan[-1]:
            
            x0, p0 = y_vals[-1][:m], y_vals[-1][m:]
            # Compute derivatives using the given function f
            dxdt = f(t, np.hstack([x0, p0]))[:m]
            
            # First update positions using the old momenta
            x1 = x0 + self.dt * dxdt
            
            # Now compute new derivatives with the updated positions
            dpdt_new = f(t, np.hstack([x1, p0]))[m:]
            
            # Update momenta using the new derivatives
            p1 = p0 + self.dt * dpdt_new

            y_vals.append(np.hstack((x1, p1)))

            t += self.dt
            t_vals.append(t)

        self.t, self.y = np.array(t_vals), np.array(y_vals)
        return self
pendulum = DoublePendulum()

# Initial conditions
x0 = np.array([np.pi / 2.1, np.pi / 2, 0.1, 0.2])
t0 = 0.0

# Set up integrator
integrator = SymplecticEuler(dt=1e-4)
integrator.integrate(pendulum, [t0, 100], x0)

# Convert to physical coordinates
y = pendulum.convert_to_physical(integrator.y)

plt.figure()
plt.plot(y[:, 2], y[:, 3], 'k', linewidth=0.05)
plt.xlabel('x')
plt.ylabel('y')
plt.title("Position of pendulum tip")

plt.figure()
plt.plot(pendulum.compute_energy(integrator.y))
plt.xlabel('time')
plt.ylabel('Total Mechanical Energy')