Artificial Neural Networks

An artificial neural network is a computational method that strives to mimic the complex interconnections between neurons in a biological system.

Essentially, the model has three main components: (i) specific inputs (input nodes), (ii) connections between these inputs and some intermediary data points (hidden nodes), which then lead to (iii) specific outputs (output nodes). Figure 1 illustrates this model.

Each level (input nodes, hidden nodes and output nodes) of the neural network is called a "layer." A simple linear model has inputs and an output—two layers. Neural networks have at a minimum three layers.
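The three-layer structure can be sketched as a simple forward pass. This is an illustrative example, not a reference implementation: the weights, the sigmoid activation on the hidden nodes, and the network size (two inputs, two hidden nodes, one output) are all assumptions chosen for clarity.

```python
import math

def forward(inputs, w_hidden, w_output):
    """One forward pass through a three-layer network (illustrative sketch).

    w_hidden[j][i] weights input i into hidden node j;
    w_output[k][j] weights hidden node j into output node k.
    A sigmoid squashes each hidden-node sum into (0, 1).
    """
    hidden = [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
              for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_output]

# Two input nodes -> two hidden nodes -> one output node
out = forward([0.5, -0.2],
              [[0.1, 0.4], [-0.3, 0.2]],   # input-to-hidden weights
              [[0.7, -0.5]])               # hidden-to-output weights
```

Each node's value is just a weighted sum of the values in the previous layer, optionally passed through a squashing function; stacking these layers is what distinguishes the network from a single linear model.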

When designing a neural network, the data to be analyzed is randomly distributed into three groups: a training data set, a testing data set and a validation data set.
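A minimal sketch of this random three-way split is shown below. The 60/20/20 proportions are a common convention, not a fixed rule, and the function name is purely illustrative.

```python
import random

def split_data(data, seed=0, frac_train=0.6, frac_test=0.2):
    """Randomly partition records into training, testing and
    validation sets. Shuffling first ensures each set is a
    random sample of the whole; the remainder after the training
    and testing fractions becomes the validation set."""
    rows = list(data)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train = int(n * frac_train)
    n_test = int(n * frac_test)
    return (rows[:n_train],
            rows[n_train:n_train + n_test],
            rows[n_train + n_test:])

train, test, valid = split_data(range(100))
```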

The training data set is initially used to "train" the network. A set of inputs is entered, and these inputs are multiplied by weights along each connection to produce hidden node values. These values are then multiplied by secondary weights along the next set of connections to finally produce an output. The error between the produced output and the desired output is then calculated and used to readjust the weights along the connection pathway from the first layer through the last. This process is repeated until the error between produced output and desired output is minimized. This iterative, error-driven weight adjustment is known as backpropagation; its single-layer ancestor is the perceptron learning rule. More complex networks can be designed by adding layers or by introducing feedback mechanisms that adjust weights at any layer without sequentially propagating through the system.
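The training loop described above can be sketched in miniature. The example below trains a tiny 2-2-1 sigmoid network on the XOR function by nudging every weight against its error gradient after each example. The network size, learning rate, epoch count and seed are all illustrative assumptions; this is a teaching sketch, not a production trainer.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=5000, lr=0.5, seed=1):
    """Minimal backpropagation sketch: a 2-2-1 sigmoid network
    learning XOR. After each example, the output error is
    propagated backward and every weight (and bias) is adjusted
    against its gradient."""
    rng = random.Random(seed)
    # wh[j] = [w_x0, w_x1, bias] for hidden node j; wo = [w_h0, w_h1, bias]
    wh = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    wo = [rng.uniform(-1, 1) for _ in range(3)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def predict(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
        return h, sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])

    for _ in range(epochs):
        for x, target in data:
            h, y = predict(x)
            # output-layer delta, then hidden-layer deltas (chain rule)
            d_out = (y - target) * y * (1 - y)
            d_hid = [d_out * wo[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                wo[j] -= lr * d_out * h[j]
                wh[j][0] -= lr * d_hid[j] * x[0]
                wh[j][1] -= lr * d_hid[j] * x[1]
                wh[j][2] -= lr * d_hid[j]
            wo[2] -= lr * d_out

    return [predict(x)[1] for x, _ in data]

preds = train_xor()
```

The key step is the chain rule in `d_hid`: the error signal at the output is multiplied back through the output weights to apportion blame to each hidden node, which is exactly the "readjust the weights from the first layer through the last" described above.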

One must remember, though, that just because a model is designed as a neural network does not mean that it will perform better than a linear model. In fact, a neural network should be compared to a linear counterpart, if one is available, to assess whether the added complexity is truly beneficial.
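A quick way to see why such a comparison matters is to fit a linear baseline to a problem a line cannot represent. In this hedged sketch, a linear model y = a*x0 + b*x1 + c is fit to XOR by gradient descent; because XOR is not linearly representable, its mean squared error plateaus near 0.25 per example, a floor a neural network would need to beat to justify its extra complexity. The learning rate and epoch count are illustrative choices.

```python
# Fit a linear baseline to XOR by stochastic gradient descent.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
a = b = c = 0.0
lr = 0.1
for _ in range(2000):
    for (x0, x1), t in data:
        err = (a * x0 + b * x1 + c) - t   # prediction minus target
        a -= lr * err * x0
        b -= lr * err * x1
        c -= lr * err

# Mean squared error of the fitted linear model; no linear model
# can do better than 0.25 on XOR (predicting 0.5 everywhere).
mse = sum(((a * x0 + b * x1 + c) - t) ** 2
          for (x0, x1), t in data) / 4
```

If the neural network's validation error is not clearly below this linear baseline, the simpler model is the better choice.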
