Fixed-step integration of chaotic systems¶
Preamble: Run the cells below to import the necessary Python packages
This notebook was created by William Gilpin. Consult the course website for all content, and the GitHub repository for raw files and runnable online code.
import numpy as np
# Import plotting functions and enable in-notebook figure display
import matplotlib.pyplot as plt
%matplotlib inline
Numerical integration¶
We want to solve problems of the form
$$\frac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y})$$
where $\mathbf{y}$ is a vector of unknowns. We can do this by discretizing time.
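Concretely, discretizing time means replacing the derivative with a finite difference over a small step $\Delta t$:
$$\frac{\mathbf{y}(t+\Delta t) - \mathbf{y}(t)}{\Delta t} \approx \mathbf{f}(t, \mathbf{y}(t)) \quad\Longrightarrow\quad \mathbf{y}(t+\Delta t) \approx \mathbf{y}(t) + \Delta t\, \mathbf{f}(t, \mathbf{y}(t)).$$
The fixed-step integrators introduced below are refinements of this basic update.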
Relationship to optimization¶
- Every optimization problem can be written as an ODE for potential flow $\dot{\mathbf{y}} = \mathbf{f}(t, \mathbf{y}) = -\nabla U(\mathbf{y}, t)$. However, not every ODE can be written as an optimization problem.
- Limit cycles, chaos, and other interesting phenomena can be found in ODEs, but not in optimization problems, because potential flows have zero curl.
- While we cannot always define a potential function for a given set of ODEs, analogues of the gradient and the Hessian are still well-defined. In an ODE context, the role of the gradient is played by the right-hand side of the ODE, $\mathbf{f} = \frac{d\mathbf{y}}{dt}$, and the role of the Hessian is played by the Jacobian matrix of the right-hand side.
$$ \mathbb{J} = \frac{\partial \dot{\mathbf{y}}}{\partial \mathbf{y}} = \frac{\partial \mathbf{f}}{\partial \mathbf{y}} $$
Writing this out, $$ \mathbb{J} = \begin{bmatrix} \frac{\partial f_1}{\partial y_1} & \frac{\partial f_1}{\partial y_2} & \cdots & \frac{\partial f_1}{\partial y_n} \\ \frac{\partial f_2}{\partial y_1} & \frac{\partial f_2}{\partial y_2} & \cdots & \frac{\partial f_2}{\partial y_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial y_1} & \frac{\partial f_n}{\partial y_2} & \cdots & \frac{\partial f_n}{\partial y_n} \end{bmatrix} $$
Just as we saw in optimization, evaluating the right-hand side is always necessary to take the next step in the solution. Knowledge of the Jacobian can improve convergence and stability, and it can also reveal mathematical properties of the ODE solution.
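When an analytic Jacobian is unavailable, it can be estimated numerically. The helper below is an illustrative sketch (not part of the original notebook) that uses centered finite differences; the function name and the step size h are arbitrary choices.
def numerical_jacobian(f, t, y, h=1e-6):
    """Estimate J[i, j] = d f_i / d y_j by centered finite differences."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        # Perturb the j-th coordinate in both directions and difference the results
        J[:, j] = (f(t, y + e) - f(t, y - e)) / (2 * h)
    return J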
The Thomas Model¶
Let's use a well-known set of coupled differential equations, the Thomas model, to demonstrate our numerical integration schemes: $$ \begin{aligned} \dot{x} &= \sin(y) - b x \\ \dot{y} &= \sin(z) - b y \\ \dot{z} &= \sin(x) - b z \end{aligned} $$ These equations describe the overdamped motion of a particle confined in a 3D harmonic potential and subject to a periodic lattice of forces.
An initial value problem¶
In numerical integration, we are usually given an initial condition $\mathbf{x}(0)$ and asked to find $\mathbf{x}(t)$ for some $t > 0$.
class ThomasModel:
def __init__(self, b=0.185):
self.b = b
# notice this special method name
def __call__(self, t, X):
return self.rhs(t, X)
def rhs(self, t, X):
x, y, z = X
xdot = np.sin(y) - self.b * x
ydot = np.sin(z) - self.b * y
zdot = np.sin(x) - self.b * z
return np.array([xdot, ydot, zdot])
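To make the earlier Jacobian discussion concrete, here is the analytic Jacobian of the Thomas right-hand side. This standalone helper is an illustrative addition rather than part of the original notebook; the default b matches the value used above.
def thomas_jacobian(t, X, b=0.185):
    """Analytic Jacobian J[i, j] = d f_i / d x_j of the Thomas model."""
    x, y, z = X
    return np.array([
        [-b,        np.cos(y), 0.0      ],
        [0.0,       -b,        np.cos(z)],
        [np.cos(x), 0.0,       -b       ],
    ])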
class LorenzModel:
def __init__(self, sigma=10, rho=28, beta=8/3):
self.sigma = sigma
self.rho = rho
self.beta = beta
def __call__(self, t, X):
return self.rhs(t, X)
def rhs(self, t, X):
"""
Lorenz dynamical model
"""
x, y, z = X
xdot = self.sigma * (y - x)
ydot = x * (self.rho - z) - y
zdot = x * y - self.beta * z
return np.array([xdot, ydot, zdot])
thomas_model = ThomasModel()
lorenz_model = LorenzModel()
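Because both classes define __call__, an instance can be evaluated like an ordinary function f(t, X). The quick check below is an illustrative addition:
# Evaluate the right-hand sides at the initial condition used later in this notebook
X0 = np.array([0.21, 0.3, 0.4])
print(thomas_model(0.0, X0))   # length-3 array [xdot, ydot, zdot]
print(lorenz_model(0.0, X0))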
# Let's cheat and integrate the ODE using scipy.integrate.solve_ivp
from scipy.integrate import solve_ivp
ic = [0.21, 0.3, 0.4]
t = np.linspace(0, 6000, 17000)
sol = solve_ivp(thomas_model, [t[0], t[-1]], ic, t_eval=t)
X = sol.y.T
plt.figure(figsize=(10, 10))
plt.plot(X[:, 0], X[:, 1], linewidth=0.1, color='k', alpha=1.0)
plt.axis('off')
About scipy.integrate.solve_ivp¶
We're cheating by using a so-called "built-in" integrator, scipy.integrate.solve_ivp. This function wraps several popular integration methods in a single API, and it defaults to an explicit Runge-Kutta method (RK45) under the hood. Some Python users might use scipy.integrate.odeint instead of scipy.integrate.solve_ivp; the latter is a more recent and more general suite of solvers.
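For reference, the main practical difference between the two interfaces is the call signature: by default odeint expects a right-hand side of the form f(y, t), while solve_ivp expects f(t, y) together with a time span. The short comparison below is an illustrative addition using the thomas_model defined above.
from scipy.integrate import odeint, solve_ivp

t_demo = np.linspace(0, 10, 100)
ic_demo = [0.21, 0.3, 0.4]

# odeint: pass tfirst=True so we can reuse the f(t, y) convention used in this notebook
X_odeint = odeint(thomas_model, ic_demo, t_demo, tfirst=True)

# solve_ivp: takes a (t0, tf) span plus optional evaluation times
X_ivp = solve_ivp(thomas_model, (t_demo[0], t_demo[-1]), ic_demo, t_eval=t_demo).y.T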
# Let's create an expensive reference trajectory using solve_ivp
from scipy.integrate import solve_ivp
ic = [0.21, 0.3, 0.4]
t = np.linspace(0, 600, 3000)
sol = solve_ivp(thomas_model, [t[0], t[-1]], ic, t_eval=t, method="Radau", max_step=0.005, rtol=1e-8, atol=1e-16)
X_reference = sol.y.T.copy()
t_reference = sol.t.copy()
plt.figure(figsize=(5, 5))
plt.plot(X_reference[:, 0], X_reference[:, 1], linewidth=1, color='k')
plt.title("Reference Trajectory")
plt.axis('off')
Explicit fixed-step integrators¶
These methods convert continuous-time dynamical systems into discrete-time maps, corresponding to snapshots of the dynamics taken every $\Delta t$ time units.
The Euler method is the simplest explicit fixed-step integrator. It is a first-order method, and it is only conditionally stable: the step size must be sufficiently small for the numerical solution to remain stable. It is also the cheapest method per step, requiring just one evaluation of the right-hand side, but it is the least accurate.
$$ \mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t\; \mathbf{f}(t_n, \mathbf{x}_n) $$
- A common class of fixed-step integrators is the family of Runge-Kutta methods. These are more computationally expensive per step than the Euler method, but they are also more accurate (a sketch of one such method appears after the Euler implementation below).
class BaseFixedStepIntegrator:
"""
A base class for fixed-step integration methods.
"""
def __init__(self, dt=1e-3):
self.dt = dt
self.name = self.__class__.__name__
def integrate(self, f, tspan, y0, t_eval=None):
"""
Integrate the ODE y' = f(t, y) from t0 to tf using an integrator's update method
"""
t0, tf = tspan
# Create an array of time points to evaluate the solution
if t_eval is None:
t_eval = np.arange(t0, tf, self.dt)
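        # Note: update() below always advances by self.dt, so a user-supplied
        # t_eval is assumed to be uniformly spaced with step self.dt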
# Create an array to store the solution (pre-allocate)
y = np.zeros((len(t_eval), len(y0)))
# Set the initial condition
y[0] = y0
# Integrate the ODE
for i in range(1, len(t_eval)):
t = t_eval[i - 1]
y[i] = self.update(f, t, y[i - 1]) # this will be defined in the subclass
self.t, self.y = t_eval, np.array(y)
return self
def update(self, f, t, y):
"""
Update the solution using the integrator's method
"""
raise NotImplementedError("This method must be implemented in a subclass")
class Euler(BaseFixedStepIntegrator):
"""
Note:
super() calls the constructor of BaseFixedStepIntegrator
kwargs collects any passed keyword arguments and passes them on to the constructor
of BaseFixedStepIntegrator
"""
def __init__(self, **kwargs):
super().__init__(**kwargs)
def update(self, f, t, y):
return y + self.dt * f(t, y)
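For comparison with the Euler update, the cell below sketches a classical fourth-order Runge-Kutta (RK4) subclass built on the same base class. It is an illustrative addition rather than part of the original notebook.
class RungeKutta4(BaseFixedStepIntegrator):
    """
    Classical fourth-order Runge-Kutta method (illustrative sketch).
    Each step uses four evaluations of the right-hand side.
    """
    def update(self, f, t, y):
        dt = self.dt
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt * k1 / 2)
        k3 = f(t + dt / 2, y + dt * k2 / 2)
        k4 = f(t + dt, y + dt * k3)
        return y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)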
integrator = Euler(dt=0.0001)
ic = np.array([0.21, 0.3, 0.4])
t = np.linspace(0, 600, 3000)
sol = integrator.integrate(thomas_model, [t[0], t[-1]], ic)
plt.figure(figsize=(10, 10))
plt.plot(X_reference[:, 0], X_reference[:, 1], linewidth=3, color='b', alpha=0.3)
plt.plot(sol.y[:, 0], sol.y[:, 1], linewidth=1, color='k')
plt.axis('off')
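As a rough, illustrative check (not part of the original notebook), we can interpolate the Euler solution onto the reference time grid and measure the pointwise distance between the two trajectories. Because the system is chaotic, this distance is expected to grow with time until the trajectories decorrelate.
# Interpolate the Euler solution onto the reference time grid, one coordinate at a time
X_euler_interp = np.column_stack(
    [np.interp(t_reference, sol.t, sol.y[:, i]) for i in range(3)]
)

# Euclidean distance between the Euler and reference trajectories at each reference time
err = np.linalg.norm(X_euler_interp - X_reference, axis=1)

plt.figure(figsize=(6, 3))
plt.semilogy(t_reference, err)
plt.xlabel("t")
plt.ylabel("Distance from reference")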