The Game of Life, Inheritance, and Cellular Automata¶
Preamble: Run the cells below to import the necessary Python packages
This notebook was created by William Gilpin. Consult the course website for all content, and the GitHub repository for raw files and runnable online code.
import numpy as np
# Import local plotting functions and in-notebook display functions
import matplotlib.pyplot as plt
%matplotlib inline
The Game of Life and Cellular Automata¶
- Cellular automata are systems where all physical laws and fields are discretized: space is a lattice, and time is discrete
- Physical laws take the form of IF-THEN statements
- Basic cellular automata are Markovian: given current field values at some set of lattice points, perform an instantaneous update based on a set of rules
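For concreteness, the update at a single site can be written directly as IF-THEN statements. The rule below is a hypothetical toy example (not one of the automata discussed in this notebook):

def update_site(current_value, neighbor_sum):
    """A toy IF-THEN update rule for a single lattice site (hypothetical example)."""
    # IF the site is occupied but has too few occupied neighbors, THEN it empties
    if current_value == 1 and neighbor_sum < 2:
        return 0
    # OTHERWISE the site keeps its current value
    return current_value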
The Game of Life¶
- Conway, 1970
- Complex dynamics emerge from simple rules
- Markovian: next state of universe depends only on current state of universe
- Local: next state at a lattice point depends only on current site and nearest neighbors
- Synchronous: we update all sites at once, rather than raster scanning
- Finite fields: A site can have values of 1 (alive) or 0 (dead)
Game of Life rules:¶
- Underpopulation: Any live cell with fewer than two live neighbours dies
- Survival: Any live cell with two or three live neighbours stays alive
- Overpopulation: Any live cell with more than three live neighbours dies
- Reproduction: Any dead cell with exactly three live neighbours becomes a live cell
These simple rules give rise to surprisingly complex behaviors; in fact, the Game of Life has been shown to support universal computation, given a suitable encoding scheme based on initial conditions. We've included some examples of fixed-point, limit-cycle, and soliton-like dynamics below.

(Animations of fixed-point, limit-cycle, and soliton-like dynamics; images from Scholarpedia.)
Implementing Cellular Automata using inheritance in Python¶
Below, we will define a "base" class for cellular automata, and then define subclasses for specific automata. The base class handles state counting and initialization, and contains a simulate method for the simulation loop (where we repeatedly call the update rule).
In the base class we also declare a next_state method, because simulate depends on it. But because the update rule is different for each automaton, we leave the implementation of next_state to be overridden by the subclasses (the base class just raises an exception).
This convention is an example of the Template Method Pattern in software development.
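As an aside, the same pattern can be written more formally with Python's built-in abc module, which prevents the base class from being instantiated at all. This is just an alternative sketch; the classes in this notebook instead raise NotImplementedError directly.

from abc import ABC, abstractmethod

class AbstractAutomaton(ABC):
    def simulate(self, n_steps):
        # The "template method": the simulation loop is fixed here, while the
        # update rule is deferred to subclasses
        for _ in range(n_steps):
            self.next_state()

    @abstractmethod
    def next_state(self):
        """Subclasses must implement the update rule."""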
# A base class allows us to impose structure on specific cases. We will use this as a
# parent class for the automata defined below
class CellularAutomaton:
    """
    A base class for cellular automata. Subclasses must implement the next_state method.

    Parameters
        n (int): The number of cells in the system
        n_states (int): The number of states in the system
        random_state (None or int): The seed for the random number generator. If None,
            the random number generator is not seeded.
        initial_state (None or array): The initial state of the system. If None, a
            random initial state is used.
    """
    def __init__(self, n, n_states, random_state=None, initial_state=None):
        self.n_states = n_states
        self.n = n
        self.random_state = random_state
        np.random.seed(random_state)

        ## The universe is a 2D array of integers
        if initial_state is None:
            self.initial_state = np.random.choice(self.n_states, size=(self.n, self.n))
        else:
            self.initial_state = initial_state
        self.state = self.initial_state
        self.history = [self.state]

    def next_state(self):
        """
        Output the next state of the entire board
        """
        raise NotImplementedError

    def simulate(self, n_steps):
        """
        Iterate the dynamics for n_steps, and return the results as an array
        """
        for i in range(n_steps):
            self.state = self.next_state()
            self.history.append(self.state)
        return self.state
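
# Calling simulate on the base class raises NotImplementedError, because the base
# class deliberately leaves next_state unimplemented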
ca = CellularAutomaton(10, 2, random_state=1)
ca.simulate(10)
# plt.imshow(ca.initial_state)
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[28], line 49
     45         return self.state
     48 ca = CellularAutomaton(10, 2, random_state=1)
---> 49 ca.simulate(10)
     50 # plt.imshow(ca.initial_state)

Cell In[28], line 43, in CellularAutomaton.simulate(self, n_steps)
     39         """
     40         Iterate the dynamics for n_steps, and return the results as an array
     41         """
     42         for i in range(n_steps):
---> 43             self.state = self.next_state()
     44             self.history.append(self.state)
     45         return self.state

Cell In[28], line 36, in CellularAutomaton.next_state(self)
     32     def next_state(self):
     33         """
     34         Output the next state of the entire board
     35         """
---> 36         raise NotImplementedError

NotImplementedError:
Just to show how inheritance works, we will start by defining two simpler automata:
- An extinction automaton that sends every input condition to the same all-dead output state
- A random automaton that assigns each site a value of 0 or 1 at random, regardless of the input state
class ExtinctionAutomaton(CellularAutomaton):
    """
    A cellular automaton that simulates the extinction of a species.
    """
    def __init__(self, n, **kwargs):
        super().__init__(n, 2, **kwargs)

    def next_state(self):
        """
        Output the next state of the entire board
        """
        next_state = np.zeros_like(self.state)
        return next_state
class RandomAutomaton(CellularAutomaton):
    """
    A cellular automaton with random updates
    """
    def __init__(self, n, random_state=None, **kwargs):
        # Forward random_state to the parent class, which stores it and seeds
        # the random number generator
        super().__init__(n, 2, random_state=random_state, **kwargs)

    def next_state(self):
        """
        Output the next state of the entire board
        """
        next_state = np.random.choice(self.n_states, size=(self.n, self.n))
        return next_state
Let's try running these automata to see how they work.
model = ExtinctionAutomaton(40, random_state=0)
model.simulate(200)
plt.figure()
plt.imshow(model.initial_state, cmap="gray")
plt.title("Initial state")
plt.figure()
plt.imshow(model.state, cmap="gray")
plt.title("Final state")
Text(0.5, 1.0, 'Final state')
model = RandomAutomaton(40, random_state=0)
model.simulate(200)
plt.figure()
plt.imshow(model.initial_state, cmap="gray")
plt.title("Initial state")
plt.figure()
plt.imshow(model.state, cmap="gray")
plt.title("Final state")
Text(0.5, 1.0, 'Final state')
We are now ready to define the Game of Life as a subclass of the base class.
Game of Life rules:¶
- Underpopulation: Any live cell with fewer than two live neighbours dies
- Survival: Any live cell with two or three live neighbours stays alive
- Overpopulation: Any live cell with more than three live neighbours dies
- Reproduction: Any dead cell with exactly three live neighbours becomes a live cell
# This child class inherits methods from the parent class
class GameOfLife(CellularAutomaton):
    """
    An implementation of Conway's Game of Life in Python

    Args:
        n (int): The number of cells in the system
        **kwargs: Additional keyword arguments passed to the base CellularAutomaton class
    """
    def __init__(self, n, **kwargs):
        # the super method calls the parent class's __init__ method and passes the
        # arguments to it.
        super().__init__(n, 2, **kwargs)

    def next_state(self):
        """
        Compute the next state of the lattice
        """
        next_state = np.zeros_like(self.state)  # preallocate the next state
        # Pad the lattice with wrapped values, so that the slices below implement
        # periodic boundary conditions
        padded = np.pad(self.state, 1, mode="wrap")
        for i in range(self.n):
            for j in range(self.n):
                # Count the number of neighbors (the 3x3 block centered on the
                # site, minus the site itself)
                n_neighbors = np.sum(padded[i:i + 3, j:j + 3]) - self.state[i, j]

                # Update the next state
                if self.state[i, j] == 1:
                    if n_neighbors in [2, 3]:
                        next_state[i, j] = 1
                    else:
                        next_state[i, j] = 0
                else:
                    if n_neighbors == 3:
                        next_state[i, j] = 1
                    else:
                        next_state[i, j] = 0
        return next_state
We can now run this automaton and see how it behaves. We will also use the %timeit utility to measure how long a single update step takes.
model = GameOfLife(40, random_state=0)
model.simulate(500)
# print timing results
print("Average time per iteration: ", end="")
%timeit model.next_state()
plt.figure()
plt.imshow(model.initial_state, cmap="gray")
plt.title("Initial state")
plt.figure()
plt.imshow(model.state, cmap="gray")
plt.title("Final state")
Average time per iteration: 3.22 ms ± 20.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Text(0.5, 1.0, 'Final state')
We can also access the time series of automaton states, which our base class stores in a list. We can use this to make animations, and to analyze the behavior of the automaton over time.
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
# Assuming frames is a numpy array with shape (num_frames, height, width)
frames = np.array(model.history).copy()
fig = plt.figure(figsize=(6, 6))
img = plt.imshow(frames[0], vmin=0, vmax=1, cmap="gray");
plt.xticks([]); plt.yticks([])
# tight margins
plt.margins(0,0)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
def update(frame):
    img.set_array(frame)

ani = FuncAnimation(fig, update, frames=frames, interval=50)
HTML(ani.to_jshtml())
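As a quick example of analyzing the stored history (rather than only animating it), we can plot the fraction of live cells at each timestep. This is a small sketch that reuses the model object defined above:

# Fraction of live cells at each timestep, computed from the stored history
live_fraction = np.array([np.mean(frame) for frame in model.history])

plt.figure()
plt.plot(live_fraction)
plt.xlabel("Timestep")
plt.ylabel("Fraction of live cells")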
How long did that take?¶
Memory Scaling¶
- To avoid 2D vs 3D, let's just say there are $N$ total sites. So our 2D universe is $\sqrt{N} \times \sqrt{N}$ in span, etc
- For synchronous update rules, each site value $s_{ij}(t+1)$ at timestep $t + 1$ depends on the lattice $s_{ij}(t)$. That means that each timestep requires a full copy of the entire universe, into which we write the next states one-by-one
- The original copy requires space $\sim N$, on top of the $\sim N$ memory needed to store the lattice itself. So if our original lattice has one million sites, we need to set aside memory for about two million site values
- So-called "Big-O" notation usually ignores prefactors, and so we would say that the asymptotic space requirements to simulate the Game of Life are $\mathcal{O}(N)$
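To make this concrete, we can measure the memory footprint of a lattice and of the extra working copy directly. A small sketch (the lattice size and the default integer dtype are arbitrary choices):

n_side = 1000  # so N = n_side**2 = 10^6 total sites
lattice = np.random.choice(2, size=(n_side, n_side))

# Memory for the current lattice, plus the copy written during a synchronous update
print("One lattice:", lattice.nbytes / 1e6, "MB")
print("Lattice plus one working copy:", 2 * lattice.nbytes / 1e6, "MB")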
Runtime scaling¶
- Let's look carefully at what our algorithm does: at each lattice site, we poll the nearest neighbors (including diagonal neighbors) and compute a sum. We then execute a conditional on that sum to write to the future state
- Since every site gets updated, we have to perform at least $N$ operations to update the entire universe
- For each site, we keep a running sum over all neighbors, plus a check of the current value: that's $9$ operations per site. For a generalized neighborhood, that's $K$ operations per site
- So the runtime looks something like $N \times K$. Since $K \ll N$, we can treat it as a prefactor and say that the runtime is $\mathcal{O}(N)$
- However, what if we had a global cellular automaton, like an N-body simulation, where we need to compute pairwise forces between all lattice sites? In this limit $K \sim N$, because we have to check every other site every time we update a single site. So a global CA has runtime $\mathcal{O}(N^2)$
- Low-level optimizations (vectorization, convolutions, etc.) decrease the runtime by reducing the prefactor, but they cannot change the Big-O scaling, which is intrinsic to the algorithm
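To check the $\mathcal{O}(N)$ scaling empirically, we can time the loop-based update rule at two lattice sizes; doubling the side length quadruples $N$, so the time per update should also roughly quadruple. This is a rough sketch using the GameOfLife class defined above, and exact timings will vary by machine:

from timeit import timeit

for n_side in [20, 40]:
    model_timing = GameOfLife(n_side, random_state=0)
    # Average the cost of a single synchronous update over 10 calls
    t = timeit(model_timing.next_state, number=10)
    print(n_side ** 2, "sites:", t / 10, "seconds per update")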
Can we improve our implementation?¶
A key idea of cellular automata is locality: the update of each site depends only on the state of its neighbors. The Game of Life uses a Moore neighborhood, in which the first nearest neighbors (including diagonals) are considered. The Game of Life update rule can therefore be seen as first implementing convolution of the grid state $f$ with a kernel $g$ of the form
$$ g \equiv \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{bmatrix} $$
This discrete convolution essentially takes the sum of the neighbors, and uses this sum plus the state of the center cell in order to decide what to do. This makes the Game of Life an example of an outer totalistic cellular automaton. Recall that a two-dimensional discrete convolution has the form
$$ (f * g)(x, y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f(x-m, y-n) g(m, n) $$
However, when the kernel has compact support (as in our nearest-neighbor update rule), this double summation simplifies,
$$ (f * g)(x, y) = \sum_{m=-K}^{K} \sum_{n=-K}^{K} f(x-m, y-n) g(m, n) $$
For our Game of Life example, $K=1$. The periodic boundary conditions can be implemented by padding each dimension of $f$ with values from the other side of the array, and then cropping the convolution result to the original size of $f$. We can estimate the number of multiplications required to perform a discrete convolution by counting the number of elements in the kernel, $M$, and the number of elements in the grid, $N$, for an overall time complexity of $\mathcal{O}(M\,N)$.
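As a quick check of the padding argument, the short sketch below verifies that wrap-padding the lattice by one cell and keeping only the "valid" part of the convolution gives the same neighbor counts as asking convolve2d for periodic boundaries directly:

from scipy.signal import convolve2d

f = np.random.choice(2, size=(8, 8))  # a small test lattice
g = np.ones((3, 3))
g[1, 1] = 0  # the neighbor-counting kernel

# Pad each edge with values wrapped from the opposite side, then crop via mode="valid"
f_padded = np.pad(f, 1, mode="wrap")
counts_from_padding = convolve2d(f_padded, g, mode="valid")

# Convolution with periodic boundary conditions handled by convolve2d itself
counts_periodic = convolve2d(f, g, mode="same", boundary="wrap")

print(np.allclose(counts_from_padding, counts_periodic))  # expect True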
counting_kernel = np.ones((3, 3))
counting_kernel[1, 1] = 0
plt.figure(figsize=(3, 3))
plt.imshow(counting_kernel, cmap="gray")
plt.xticks([]); plt.yticks([])
([], [])
# import requests
# import numpy as np
# import io
# url = "https://raw.githubusercontent.com/williamgilpin/cphy/main/resources/hank.npz"
# response = requests.get(url)
# response.raise_for_status() # Ensure we notice bad responses
# image = np.load(io.BytesIO(response.content), allow_pickle=True)
# ## Convert to binary (0 and 1)
# image_bw = image > 0.5
# plt.figure()
# plt.imshow(image_bw, cmap="gray")
# from scipy.signal import convolve2d
# image_counts = convolve2d(image_bw, counting_kernel, mode='same', boundary='wrap')
# plt.figure()
# plt.imshow(image_counts, cmap="gray")
# plt.colorbar(label="Number of neighbors")
from scipy.signal import convolve2d
class GameOfLife(CellularAutomaton):
    def __init__(self, n, **kwargs):
        # the super method calls the parent class's __init__ method
        # and passes the arguments to it. It is a constructor
        super().__init__(n, 2, **kwargs)

    def next_state(self):
        """
        Compute the next state of the lattice using discrete convolutions to implement
        the rules of the Game of Life.
        """
        # Compute the number of neighbors for each cell
        totalistic_kernel = np.ones((3, 3))
        totalistic_kernel[1, 1] = 0  # don't count the center cell itself

        # convolve with periodic boundary conditions
        n_neighbors = convolve2d(self.state, totalistic_kernel, mode='same', boundary='wrap')

        # implement the rules of the Game of Life in a vectorized way: a cell is alive
        # at the next step if it is currently alive with exactly two neighbors, or if
        # it has exactly three neighbors (whether currently alive or dead)
        next_state = np.logical_or(
            np.logical_and(self.state == 1, n_neighbors == 2),
            n_neighbors == 3
        )
        return next_state
model = GameOfLife(40, random_state=0)
model.simulate(200)
print("Average time per iteration: ", end="")
%timeit model.next_state()
plt.figure()
plt.imshow(model.initial_state, cmap="gray")
plt.title("Initial state")
plt.figure()
plt.imshow(model.state, cmap="gray")
plt.title("Final state")
Average time per iteration: 41.8 µs ± 318 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Text(0.5, 1.0, 'Final state')
import requests
import numpy as np
import io
url = "https://raw.githubusercontent.com/williamgilpin/cphy/main/resources/hank.npz"
response = requests.get(url)
response.raise_for_status() # Ensure we notice bad responses
image = np.load(io.BytesIO(response.content), allow_pickle=True)
## Convert to binary (0 and 1)
image_bw = image > 0.5
plt.figure()
plt.imshow(image_bw, cmap="gray")
from scipy.signal import convolve2d
image_counts = convolve2d(image_bw, counting_kernel, mode='same', boundary='wrap')
plt.figure()
plt.imshow(image_counts, cmap="gray")
plt.colorbar(label="Number of neighbors")
<matplotlib.colorbar.Colorbar at 0x12b98f3d0>
model = GameOfLife(40, random_state=0)
## Overwrite the initial state with the image
model.state = image_bw.astype(int)
model.history = [model.state.copy()]
model.simulate(400)
# plt.figure()
# plt.imshow(model.history[0], cmap="gray")
# plt.title("Initial state")
# plt.figure()
# plt.imshow(model.history[-1], cmap="gray")
# plt.title("Final state")
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
# Assuming frames is a numpy array with shape (num_frames, height, width)
frames = np.array(model.history).copy()
fig = plt.figure(figsize=(6, 6))
img = plt.imshow(frames[0], vmin=0, vmax=1, cmap="gray");
plt.xticks([]); plt.yticks([])
# tight margins
plt.margins(0,0)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
def update(frame):
    img.set_array(frame)

ani = FuncAnimation(fig, update, frames=frames, interval=50)
HTML(ani.to_jshtml())