# Reimplementing Blondie24: Convolutional Version

Fogel and Chellapilla followed up their 1999 Blondie24 paper with another paper, *Evolving an Expert Checkers Playing Program without Using Human Expertise*, published in 2001.

## Convolutional Layer

This paper was very similar to the 1999 paper, but it had one key difference that improved the performance of the evolved players: they inserted a convolutional layer between the input layer and first hidden layer in their neural network.

• Input Layer: $32$ linearly-activated nodes and $1$ bias node (checkers board has $64$ squares but only half of them are used)
• Convolutional Layer: one tanh-activated node for each $N \times N$ subsquare of the checkers board with $N = 3, 4, 5, 6, 7, 8.$
• First Hidden Layer: $40$ tanh-activated nodes and $1$ bias node
• Second Hidden Layer: $10$ tanh-activated nodes and $1$ bias node
• Output Layer: $1$ tanh-activated node

Recall that the checkers board has dimensions $8 \times 8.$ So, there is a single $8 \times 8$ subsquare (namely, the entire board). Likewise, there are four $7 \times 7$ subsquares, nine $6 \times 6$ subsquares, sixteen $5 \times 5$ subsquares, twenty-five $4 \times 4$ subsquares, and thirty-six $3 \times 3$ subsquares. Consequently, the total number of nodes in the convolutional layer is

\begin{align*} 1 + 4 + 9 + 16 + 25 + 36 = 91. \end{align*}
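The count above can be verified with a couple lines of Python: an $N \times N$ window fits in an $8 \times 8$ board in $(8 - N + 1)^2$ positions.

```python
# Count the N x N subsquares of an 8 x 8 board for N = 3, ..., 8.
# An N x N window fits in (8 - N + 1)^2 positions.
BOARD = 8

counts = {n: (BOARD - n + 1) ** 2 for n in range(3, BOARD + 1)}
total = sum(counts.values())

print(counts)  # {3: 36, 4: 25, 5: 16, 6: 9, 7: 4, 8: 1}
print(total)   # 91
```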

The convolutional layer is also known as a spatial preprocessing layer because it allows the network to perceive spatial characteristics of the board at different levels of “zoom”. Today, most modern image classification systems leverage convolutional neural networks.

With the addition of the convolutional layer, the total number of weights in the Blondie24 neural network increases to $5046.$ These weights are all variable and are learned through the process of evolution. Note that the piece difference node is still included.
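One way to reproduce the $5046$ figure is the accounting sketched below. The bookkeeping assumptions here are mine, not stated in the paper excerpted above: each convolutional node connects only to the playable (dark) squares inside its subsquare and carries its own bias weight, and the piece-difference connection to the output node is not counted among the $5046.$

```python
# Hedged accounting of the 5046 evolvable weights.
# Assumptions (not spelled out in the text above): each convolutional node
# connects only to the playable (dark) squares inside its subsquare and has
# a bias weight; the piece-difference connection is not counted here.
BOARD = 8

def playable_squares_in_window(row, col, n):
    """Count dark (playable) squares in the n x n window with top-left (row, col)."""
    return sum(
        1
        for r in range(row, row + n)
        for c in range(col, col + n)
        if (r + c) % 2 == 1  # dark squares are the playable ones
    )

conv_weights = 0  # connections from board squares into conv nodes
conv_nodes = 0    # one bias weight per conv node
for n in range(3, BOARD + 1):
    for r in range(BOARD - n + 1):
        for c in range(BOARD - n + 1):
            conv_weights += playable_squares_in_window(r, c, n)
            conv_nodes += 1

hidden1 = 40 * (conv_nodes + 1)  # 40 nodes, each sees 91 conv outputs + bias
hidden2 = 10 * (40 + 1)          # 10 nodes, each sees 40 outputs + bias
output = 1 * (10 + 1)            # 1 node, sees 10 outputs + bias

total = conv_weights + conv_nodes + hidden1 + hidden2 + output
print(total)  # 5046 under these assumptions
```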

## King Value Update

There was one more minor difference in the 2001 paper. The king value was updated in a slightly different way:

\begin{align*} K^\text{child} = K^\text{parent} + \delta \end{align*}

where $\delta$ is randomly chosen from $\left\{ -0.1, 0, 0.1 \right\}.$ The updated value of $K$ is still constrained to the range $[1,3].$
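The update rule above amounts to a one-line mutation with clamping. A minimal sketch in Python:

```python
import random

K_MIN, K_MAX = 1.0, 3.0

def mutate_king_value(k_parent: float) -> float:
    """Offspring king value: parent's value plus a delta drawn uniformly
    from {-0.1, 0, 0.1}, clamped to the range [1, 3]."""
    delta = random.choice([-0.1, 0.0, 0.1])
    return min(K_MAX, max(K_MIN, k_parent + delta))
```

Because of the clamp, a parent already sitting at $K = 1$ or $K = 3$ simply stays at the boundary when the mutation would push it outside.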

## Performance Curve

Generate a performance curve the same way you did for the previous (non-convolutional) implementation of Blondie24, playing your evolved networks against your heuristic strategy. The curve should look fairly similar but should ideally level off to a slightly higher level of performance.
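One way to organize this, assuming you already have a game-playing routine from the earlier implementation (the name `play_game` and its return convention of $1$ for a win, $0.5$ for a draw, and $0$ for a loss are hypothetical placeholders here):

```python
# Sketch of computing one point on the performance curve. `play_game` is a
# hypothetical placeholder for your existing routine that plays one game
# between an evolved network and the heuristic strategy and returns
# 1 (win), 0.5 (draw), or 0 (loss) from the network's perspective.

def performance(network, heuristic, play_game, num_games=20):
    """Average score of `network` against `heuristic` over `num_games` games."""
    return sum(play_game(network, heuristic) for _ in range(num_games)) / num_games

# The curve is then one performance value per generation, e.g.:
# curve = [performance(best_net[g], heuristic, play_game) for g in range(num_generations)]
```

Plotting `curve` against the generation index gives the performance curve; with the convolutional layer it should plateau somewhat above the plateau of the non-convolutional version.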
