First artificial neural network

Who
Frank Rosenblatt, Perceptron
What
First
Where
United States (Washington)
When
07 July 1958

The first artificial neural network was the Perceptron, designed and programmed by Frank Rosenblatt (USA) at the Cornell Aeronautical Laboratory in Buffalo, New York, USA. The Perceptron was first demonstrated to the public in Washington, D.C., on 7 July 1958.

Frank Rosenblatt originally trained as a psychologist and came to artificial intelligence through his study of human intelligence. The architecture of the Perceptron was based on a concept called the McCulloch-Pitts neuron, proposed by Warren McCulloch and Walter Pitts, two researchers working in Chicago, in 1943.

Rosenblatt imagined the Perceptron as a specialised piece of computing hardware that could mimic the networks of neurons in a human brain, but the initial demonstration of the concept, performed at the United States Weather Bureau in Washington, D.C., simulated this bespoke hardware using a 5-ton IBM 704 vacuum-tube mainframe.

This prototype Perceptron was designed to handle a very simple image recognition task. The input consisted of numerous pieces of white card, each marked with a randomly positioned black dot. These cards were placed under a scanner that consisted of a 20 x 20 grid of photocells. This scanner encoded an image of the card as 400 pixels that were either black or white.
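The scanning step described above can be sketched in a few lines of modern Python. The 20 x 20 grid and single-dot cards come from the account above; the function names and the use of a random dot position are illustrative assumptions, not Rosenblatt's actual code.

```python
import random

GRID = 20  # the demonstration used a 20 x 20 grid of photocells

def make_card(side):
    """Make a synthetic 'card': a 20 x 20 binary image with one black dot.

    side is "left" or "right"; the dot lands at a random position
    on that half of the card. (Illustrative stand-in for a real card.)
    """
    image = [[0] * GRID for _ in range(GRID)]
    row = random.randrange(GRID)
    col = random.randrange(GRID // 2)
    if side == "right":
        col += GRID // 2
    image[row][col] = 1  # 1 = black, 0 = white
    return image

def scan(image):
    """Flatten the grid into the 400-pixel vector the computer received."""
    return [pixel for row in image for pixel in row]

pixels = scan(make_card("left"))
```

Each scanned card thus becomes a list of 400 zeros and ones, exactly one of which is set.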

The data from the scanner was then passed to the computer. Each pixel's value (black or white; 0 or 1) was assigned to a block of memory called a "neuron". For each neuron, the computer stored a value called a "weight", which changed as the machine gathered training data.

Rosenblatt's demonstration involved training the Perceptron to recognize which side of a card the black dot was on. For this purpose, the computer was shown numerous cards with dots in different positions and told whether each was marked on the right or the left.

Over time, this training led the machine to assign high weights to neurons on one side and low weights to neurons on the other. Once training was complete, the Perceptron could identify which side a dot was on, even if it wasn't in a spot it had seen before. If the weighted sum of the active neurons was greater than the threshold it had established from training cards, it recognized the dot as being on the right; if it was lower, on the left.
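The training scheme described above can be sketched as follows, using the standard perceptron learning rule (raise weights after a miss on "right", lower them after a miss on "left"), which reproduces the behaviour described; Rosenblatt's exact update procedure on the IBM 704 is not given here, and for simplicity this sketch trains on one card per dot position rather than a random stream of cards.

```python
GRID = 20
N = GRID * GRID  # one weight per photocell "neuron"

def label_for(index):
    """A dot in columns 0-9 is 'left' (0); columns 10-19 are 'right' (1)."""
    return 1 if index % GRID >= GRID // 2 else 0

# One training card per dot position: 400 one-hot pixel vectors.
dataset = []
for i in range(N):
    pixels = [0] * N
    pixels[i] = 1
    dataset.append((pixels, label_for(i)))

def predict(weights, bias, pixels):
    """Fire (1 = 'right') if the weighted sum of active neurons beats the threshold."""
    total = sum(w * x for w, x in zip(weights, pixels))
    return 1 if total + bias > 0 else 0

# Perceptron learning rule: nudge weights toward the correct answer,
# repeating until every card is classified correctly.
weights, bias = [0.0] * N, 0.0
for epoch in range(100):
    errors = 0
    for pixels, target in dataset:
        error = target - predict(weights, bias, pixels)
        if error:
            errors += 1
            for i, x in enumerate(pixels):
                weights[i] += error * x
            bias += error
    if errors == 0:
        break
```

After training, the weights on right-hand neurons end up high and those on left-hand neurons low, so the threshold test alone separates the two classes, just as described above.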

Rosenblatt and his team went on to design and build a specialised piece of hardware for neural-network computing, which they called the Perceptron Mark I. This larger and more capable machine was set the task of learning to recognize written letters. Compared to modern "deep learning" neural networks (which have many interacting layers of neurons), this simple single-layer machine struggled to perform the task reliably. By the late 1960s, AI researchers were moving on to other methods, having concluded that neural networks were a dead end.