Variable-step integration of chaotic systems¶
Preamble: Run the cells below to import the necessary Python packages
This notebook was created by William Gilpin. Consult the course website for all content, and the GitHub repository for raw files and runnable online code.
import numpy as np
# Import local plotting functions and in-notebook display functions
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
Previously, we encountered fixed-step integrators, which take steps towards the solution of an initial value problem using a fixed time step $\Delta t$. In this notebook, we will explore variable-step integrators, which take steps of varying size to achieve a desired accuracy.
Relationship to optimization¶
In optimization, we saw second-order methods like Newton's method, in which the descent step scales inversely with the local curvature. In numerical integration, we will see methods that adapt the step size to the local smoothness of the solution: large steps where the solution varies slowly, and small steps where it varies rapidly. This is a common theme in numerical methods: the step size should be matched to the local difficulty of the problem.
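For reference, recall the Newton update for minimizing a scalar function $f(x)$: the step is inversely proportional to the local curvature $f''(x_n)$,
$$ x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)} $$
Adaptive integrators apply the same idea in time rather than in parameter space.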
The Hadley Cell model¶
As our toy system for today's notebook, we will use the low-order Hadley cell model, which is derived from the coupled dynamics of atmospheric temperature and water vapor.
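Written out explicitly (these equations match the rhs method implemented below, with parameters $a$, $b$, $f$, and $g$), the model reads
$$ \begin{align} \dot{x} &= -y^2 - z^2 - a x + a f \\ \dot{y} &= x y - b x z - y + g \\ \dot{z} &= b x y + x z - z \end{align} $$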
We'll start by generating a reference trajectory using an existing adaptive integrator, scipy.integrate.solve_ivp.
class Hadley:
    """Low-order Hadley cell circulation model."""

    def __init__(self, a=0.2, b=4, f=9, g=1):
        self.a = a
        self.b = b
        self.f = f
        self.g = g
        self.name = self.__class__.__name__

    def rhs(self, t, X):
        """Right-hand side of the Hadley equations."""
        x, y, z = X
        xdot = -(y ** 2) - z ** 2 - self.a * x + self.a * self.f
        ydot = x * y - self.b * x * z - y + self.g
        zdot = self.b * x * y + x * z - z
        return np.array([xdot, ydot, zdot])

    def __call__(self, t, X):
        return self.rhs(t, X)
from scipy.integrate import solve_ivp
ic = np.array([1.37, 0.93, 0.64])
hadley = Hadley()
sol = solve_ivp(hadley, [0, 2000], ic, method='RK45', rtol=1e-6, atol=1e-6)
plt.figure()
plt.plot(sol.y[0], sol.y[1], color='k', linewidth=0.05)
plt.figure()
sol = solve_ivp(hadley, [0, 100], ic, method='RK45', rtol=1e-6, atol=1e-6)
plt.plot(sol.y[0], sol.y[1], color='k')
Variable-step methods¶
Previously, we solved the initial value problem
$$ \begin{align} \dot{\mathbf{x}}(t) &= \mathbf{f}(t, \mathbf{x}(t)) \\ \mathbf{x}(t_0) &= \mathbf{x}_0 \end{align} $$
by taking steps of size $\Delta t$ in time, for example by using the Euler method,
$$ \mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t \, \mathbf{f}(t_n, \mathbf{x}_n) $$
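For concreteness, here is a minimal fixed-step Euler integrator of the kind we used previously (a sketch; the function name euler_fixed and its signature are illustrative, not from a library):

def euler_fixed(f, tspan, y0, dt=1e-3):
    """Integrate dy/dt = f(t, y) with the fixed-step explicit Euler method."""
    t, y = tspan[0], np.asarray(y0, dtype=float)
    t_vals, y_vals = [t], [y.copy()]
    while t < tspan[-1]:
        y = y + dt * f(t, y)  # one Euler step of fixed size dt
        t += dt
        t_vals.append(t)
        y_vals.append(y)
    return np.array(t_vals), np.array(y_vals)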
We will now augment the Euler method with an additional update rule that adjusts the step size as the integration proceeds. This is reminiscent of our optimization module, where for certain methods we adapted the learning rate at each step.
Euler's method with a variable step size¶
Recall that we defined the local error as the error introduced into our integration solution by taking a single step of size $\Delta t$,
$$ \begin{align} \mathbf{e}_n &= \mathbf{x}(t_n) - \mathbf{x}_n \\ &= \mathbf{x}(t_n) - \mathbf{x}_{n-1} - \Delta t \, \mathbf{f}(t_{n-1}, \mathbf{x}_{n-1}) \end{align} $$
where $\mathbf{x}(t_n)$ is the exact solution at time $t_n$.
How do we reduce the error?¶
In principle, we can always reduce the error of our integration by taking a smaller step size $\Delta t$. But how much should we decrease $\Delta t$? We can use the local error to decide. At each step, we estimate the local error $\mathbf{e}_n$ by comparing our current solution $\mathbf{x}_n$ to the solution we would have arrived at if we had instead taken two half steps of size $\Delta t/2$.
We call this estimate of the local error $\tilde{\mathbf{e}}_n$,
$$ \mathbf{x}_{n - 1/2} \equiv \mathbf{x}_{n -1} + \frac{\Delta t}{2} \, \mathbf{f}(t_{n -1}, \mathbf{x}_{n-1}) $$ $$ \tilde{\mathbf{e}}_n = \bigg(\mathbf{x}_{n - 1/2}+ \frac{\Delta t}{2} \, \mathbf{f}(t_{n -1/2}, \mathbf{x}_{n-1/2}) \bigg) - \bigg(\mathbf{x}_{n -1} + \Delta t \, \mathbf{f}(t_{n -1}, \mathbf{x}_{n-1})\bigg) $$
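Why does this difference estimate the local error? For Euler's method, the local error of a single step of size $\Delta t$ scales as $C \Delta t^2$, where $C$ depends on the local curvature of the solution. Two half steps therefore incur a local error of roughly $2 C (\Delta t / 2)^2 = C \Delta t^2 / 2$, and the difference between the two estimates is itself of size $\approx C \Delta t^2 / 2$, the same order as the true local error.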
We define a cutoff on $\|\tilde{\mathbf{e}}_n\|$ known as the relative tolerance, $r$. If the error falls below this threshold, then we accept the step at the current value of $\Delta t$. If the error exceeds $r$, we instead reject the step and shrink the step size according to the ratio $r / \|\tilde{\mathbf{e}}_n\|$,
$$ \Delta t_{n+1} = \Delta t_n \, 0.9 \,\bigg(\frac{r}{\|\tilde{\mathbf{e}}_n\|}\bigg)^{1/2} $$
Because the local error of Euler's method scales as $\Delta t^2$, multiplying $\Delta t$ by $(r / \|\tilde{\mathbf{e}}_n\|)^{1/2}$ brings the predicted error of the retried step down to approximately $r$. The factor of $0.9$ is a safety factor: it makes the new step slightly smaller than this prediction suggests, reducing the chance that the retried step is rejected again.
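As a quick arithmetic check (illustrative numbers only): if $r = 10^{-7}$ and a step produces $\|\tilde{\mathbf{e}}_n\| = 4 \times 10^{-7}$, the update gives $\Delta t_{n+1} = 0.9 \, \Delta t_n \, (1/4)^{1/2} = 0.45 \, \Delta t_n$, so the step size is roughly halved before the step is retried.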
class BaseAdaptiveIntegrator:
    """A base class for adaptive numerical integrators"""

    def __init__(self, dt=1e-3, rtol=1e-2, max_iter=1e8, safety_factor=0.9):
        self.dt = dt
        self.name = self.__class__.__name__
        self.rtol = rtol
        self.max_iter = max_iter
        self.safety_factor = safety_factor

    def integrate(self, f, tspan, y):
        """Integrate the system using the stepper implemented in the child class"""
        dt_init = self.dt
        y_vals = [y.copy()]
        t = tspan[0]
        t_vals = [t]
        num_iter = 0
        while t < tspan[-1]:

            ## Coarse and fine steppers implemented in child classes
            y_coarse, y_fine = self.update(f, t, y_vals[-1])

            ## Error estimate from the difference between the two steppers
            err = np.linalg.norm(y_coarse - y_fine)

            ## Accept step if error is small enough, and reset the step size
            if err < self.rtol:
                t += self.dt
                y_vals.append(y_fine)
                t_vals.append(t)
                self.dt = dt_init
            ## Otherwise, reduce the step size and retry
            else:
                self.dt *= self.safety_factor * np.sqrt(self.rtol / err)

            num_iter += 1
            if num_iter > self.max_iter:
                print("Max iterations reached")
                break

        self.t, self.y = np.array(t_vals), np.array(y_vals)
        return self

    def update(self, f, t, y):
        """
        Update the solution using the integration scheme implemented in the
        child class. Should return a coarse estimate and a fine estimate of
        the next state.
        """
        raise NotImplementedError
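The base class contains all of the step-acceptance logic; a child class only needs to implement update, which returns a coarse and a fine estimate of the next state. Below, we implement the two-half-step (step-doubling) Euler scheme described above.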
class EulerTwoStep(BaseAdaptiveIntegrator):
    def update(self, f, t, y):
        """Update the solution using the explicit Euler method"""
        ## Coarse estimate: one full Euler step of size dt
        y_next = y + self.dt * f(t, y)
        ## Fine estimate: two consecutive Euler steps of size dt / 2
        y_half = y + 0.5 * self.dt * f(t, y)
        y_next_half = y_half + 0.5 * self.dt * f(t + 0.5 * self.dt, y_half)
        return y_next, y_next_half
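Note that when a step is accepted, the integrator continues from the fine two-half-step estimate rather than the coarse full step, since it is the more accurate of the two.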
Compare two-step adaptive Euler to a built-in adaptive integrator¶
plt.figure(figsize=(14, 5))
integrator = EulerTwoStep(dt=1e-2, rtol=1e-7)
integrator.integrate(hadley, [0, 100], ic)
plt.subplot(1, 2, 1)
plt.plot(integrator.y[:, 0], integrator.y[:, 1], color='k')
plt.title("Euler Two Step")
plt.subplot(1, 2, 2)
sol = solve_ivp(hadley, [0, 100], ic, method='Radau', rtol=1e-6, atol=1e-6)
plt.plot(sol.y[0], sol.y[1], color='k')
plt.title("Scipy")
plt.figure(figsize=(10, 6))

plt.subplot(3, 1, 1)
plt.plot(integrator.y[:, 0])
plt.xlabel("Step index")
plt.ylabel("Solution x")

plt.subplot(3, 1, 2)
plt.plot(np.diff(integrator.t))
plt.xlabel("Step index")
plt.ylabel("Step size")
plt.title("Euler Two Step")

plt.subplot(3, 1, 3)
plt.plot(np.diff(sol.t))
plt.xlabel("Step index")
plt.ylabel("Step size")
plt.title("Scipy")

plt.tight_layout()
plt.figure()
plt.scatter(integrator.y[:-1, 0], integrator.y[:-1, 1], c=np.diff(integrator.t), s=1)
plt.colorbar(label="Step size")