Artificial Intelligence Research

Here we describe current artificial intelligence (AI) research projects. These are at various stages of completion: some finished, others ongoing. Downloads, white papers and references are provided as appropriate.

Experiments in Computer Generated Music Composition

Opus+ is a project that explores the limits of music composition using computers. The work is being sponsored by Logos Software Ltd as part of the Opus+ development effort. Further information can be found on the Logos Software site.

XOR Neural Network

Parallel Distributed Processing, Vol. 1, by Rumelhart & McClelland (MIT Press, ISBN 0-262-68053-X) is a seminal work which re-established neural networks at the centre stage of AI and cognitive neuroscience in the mid-1980s, and which later influenced the philosophy of consciousness. Connectionism, a theory of cognition explored in great detail in this excellent book, challenges the dominance of symbolic computation as the best model of the mind.

In the late 1960s Professor Marvin Minsky, with Seymour Papert, examined Frank Rosenblatt's Perceptron of the 1950s (itself built on the McCulloch-Pitts model neuron), dismissed such network architectures as uninteresting, and demonstrated that the Perceptron could not even compute the logical XOR function. That is, given two input units and a single output unit, there is no assignment of weights and threshold that will activate the output unit when exactly one of the input units is activated, and keep it off when both or neither are: XOR is not linearly separable. Given Minsky's standing in the AI community, such networks were largely ignored for well over a decade, until the PDP Research Group showed that the limitation of the Perceptron lay in two of its features: its linear threshold activation function and the absence of hidden units.
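To see the difficulty concretely: a single threshold unit turns on when w1·x1 + w2·x2 exceeds some threshold t. Keeping the output off for input (0,0) requires t ≥ 0; turning it on for (1,0) and for (0,1) requires w1 > t and w2 > t; but then w1 + w2 > t as well, so the unit must also turn on for (1,1), which XOR forbids. Geometrically, no single straight line through the input plane can separate {(1,0), (0,1)} from {(0,0), (1,1)}.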

This issue is encapsulated in the simplest five-unit network with two hidden units and a non-linear activation function. This 'back propagation' network is capable of learning the XOR function from a random initial set of weights through the presentation of a series of training episodes, and is discussed at length in PDP Vol. 1. The philosophical implications are astounding and are underestimated even today. Unfortunately, one omission from the book is a simple program that reveals this processing clearly. Here is a very simple C program that implements this XOR neural network using extremely simple data structures, so that the nature of the processing is as clear as possible. When the program is executed it runs in three stages.

  • First, it sets the network weights randomly and displays the entire network four times, once for each input pattern in the training set: (1,1), (1,0), (0,1), (0,0). This first phase shows the activation levels and all the random weights in the network.

  • The second phase is the training phase. Here the training set is presented repeatedly and the backprop algorithm adjusts the weights. For each iteration the mean squared error is displayed as a single number, showing the network descending a nine-dimensional error surface (four input-to-hidden weights, two hidden-to-output weights, and three biases) as it learns the XOR function.

  • The third phase is identical to the first, but operates on the fully trained network.
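As an illustration of what such a program involves, here is a minimal sketch of a 2-2-1 back-propagation network in C. It is not the original listing: the layout, the learning rate, the epoch count and all names here are my own, but the arithmetic follows the generalised delta rule from PDP Vol. 1 (sigmoid units, with deltas of the form err·a·(1−a)).

/*
 * xor.c - sketch of a 2-2-1 back-propagation network that learns XOR.
 * Build: cc xor.c -o xor -lm
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define ETA    0.5       /* learning rate (illustrative value) */
#define EPOCHS 20000     /* training iterations (illustrative value) */

static const double in[4][2]  = {{1,1},{1,0},{0,1},{0,0}};  /* training set */
static const double target[4] = { 0,   1,   1,   0 };       /* XOR outputs  */

/* the nine free parameters: 4 input-to-hidden weights, 2 hidden biases,
 * 2 hidden-to-output weights, 1 output bias */
static double wih[2][2], bh[2], who[2], bo;

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }
static double rnd(void) { return (double)rand() / RAND_MAX - 0.5; }

/* forward pass: fill in the hidden activations, return the output */
static double forward(const double x[2], double h[2])
{
    for (int j = 0; j < 2; j++)
        h[j] = sigmoid(wih[0][j]*x[0] + wih[1][j]*x[1] + bh[j]);
    return sigmoid(who[0]*h[0] + who[1]*h[1] + bo);
}

/* display the whole network: activations per pattern, then the weights */
static void show(const char *label)
{
    printf("--- %s ---\n", label);
    for (int p = 0; p < 4; p++) {
        double h[2], o = forward(in[p], h);
        printf("input (%.0f,%.0f)  hidden (%.3f,%.3f)  output %.3f\n",
               in[p][0], in[p][1], h[0], h[1], o);
    }
    printf("weights: wih={{%.3f,%.3f},{%.3f,%.3f}} bh={%.3f,%.3f} "
           "who={%.3f,%.3f} bo=%.3f\n",
           wih[0][0], wih[0][1], wih[1][0], wih[1][1],
           bh[0], bh[1], who[0], who[1], bo);
}

int main(void)
{
    /* phase 1: random weights, show the untrained network */
    srand((unsigned)time(NULL));
    for (int i = 0; i < 2; i++) {
        bh[i] = rnd(); who[i] = rnd();
        for (int j = 0; j < 2; j++) wih[i][j] = rnd();
    }
    bo = rnd();
    show("untrained network");

    /* phase 2: train with the generalised delta rule, reporting MSE
     * (the program described above prints the error every iteration;
     * this sketch samples every 2000 epochs to keep output short) */
    for (int e = 0; e < EPOCHS; e++) {
        double mse = 0.0;
        for (int p = 0; p < 4; p++) {
            double h[2], o = forward(in[p], h);
            double err = target[p] - o;
            mse += err * err;
            double do_ = err * o * (1.0 - o);             /* output delta */
            for (int j = 0; j < 2; j++) {
                double dh = h[j] * (1.0 - h[j]) * do_ * who[j]; /* hidden delta */
                who[j] += ETA * do_ * h[j];
                bh[j]  += ETA * dh;
                for (int i = 0; i < 2; i++)
                    wih[i][j] += ETA * dh * in[p][i];
            }
            bo += ETA * do_;
        }
        if (e % 2000 == 0) printf("epoch %5d  mse %.6f\n", e, mse / 4.0);
    }

    /* phase 3: identical display, but on the fully trained network */
    show("trained network");
    return 0;
}

Compile with cc xor.c -o xor -lm. With only two hidden units the network can occasionally settle into a local minimum instead of learning XOR; re-running with a fresh random initialization is the usual remedy.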

When I first wrote this program in 1988, solely from the information provided in PDP Vol. 1, my initial reaction on observing a little program 'learning all by itself' was simple amazement. In the second phase the error slowly creeps down (would it converge?), and on my Atari 1040 ST it took half a day to execute! I realized then that it was at least possible for a purely material process to learn, an idea that further strengthened the case for materialism.