Friday, October 5, 2012

Neural Network
In a typical computer, built according to what is called a von Neumann architecture, memory lives in an isolated module. A single processor executes instructions and memory updates one at a time, in serial fashion. A different approach to computing is the neural network. In a neural network, made up of thousands or even millions of individual "neurons" or "nodes," processing is highly parallel and distributed. "Memories" are stored in the complex interconnections and weightings between nodes.
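As a rough illustration (not from the original post), the "memory" of such a network can be nothing more than a table of connection weights, and an entire layer of nodes can be updated in one parallel operation rather than one instruction at a time:

import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random(4)           # signals arriving at 4 input nodes
weights = rng.random((4, 3))     # interconnection strengths to 3 downstream nodes

# Every downstream node sums its weighted inputs "simultaneously"
# in a single matrix operation; the weights are the stored "memory".
activations = inputs @ weights
print(activations)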
Neural networking is the type of computing architecture used by animal brains in nature. This is not necessarily because a neural network is inherently superior to serial computing as a mode of processing, but because a brain that used serial computing would be much more difficult to evolve incrementally. Neural networks also tend to handle "noisy" data better than serial computers.
In a feedforward neural network, an "input layer" of specialized nodes takes in information from the outside, then sends a signal to a second layer based on what it received. The signal a node passes on is usually a binary "yes or no." Sometimes, to move from a "no" to a "yes," the node has to experience a certain threshold amount of excitement or stimulation.
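A minimal sketch of such a threshold node, with made-up signals and weights, might look like this:

def threshold_node(signals, weights, threshold=1.0):
    # The node fires ("yes" = 1) only when its weighted excitement
    # crosses the threshold; otherwise it stays silent ("no" = 0).
    excitement = sum(s * w for s, w in zip(signals, weights))
    return 1 if excitement >= threshold else 0

print(threshold_node([1, 1, 0], [0.6, 0.7, 0.9]))  # enough excitement -> 1
print(threshold_node([1, 0, 0], [0.6, 0.7, 0.9]))  # below threshold -> 0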
Data moves from the input layer to the secondary and tertiary layers, and so on, until it reaches a final "output layer" that presents results for programmers to analyze. The human visual system, beginning with the retina, works this way. First-level nodes detect simple geometric features in the visual field, such as colors, lines, and edges. Secondary nodes abstract more sophisticated features, such as motion, texture, and depth. The final "output" is what our consciousness registers when we look at the visual field. The initial input is just a complex arrangement of photons that would mean little without the neurological hardware to make sense of it in terms of meaningful qualities, such as the idea of an enduring object.
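The following toy sketch, with arbitrary layer sizes and untrained random weights, shows data flowing through successive layers, each producing a more compact description of the one before it:

import numpy as np

rng = np.random.default_rng(1)

def layer(x, n_out):
    # Untrained, randomly weighted layer: each of the n_out nodes sums its
    # weighted inputs and squashes the result into a bounded activation.
    w = rng.standard_normal((x.size, n_out))
    return np.tanh(x @ w)

visual_field = rng.random(16)          # stand-in for raw "photon" intensities
first_level = layer(visual_field, 8)   # simple features (lines, edges)
second_level = layer(first_level, 4)   # more abstract features (motion, depth)
output = layer(second_level, 2)        # the final "output layer"
print(output)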
In neural networks with feedback, or "back propagation" of signals, outputs from later layers can return to earlier layers to constrain further processing. Most of our senses work this way. The initial data can prompt an "educated guess" at the final result, and later data are then interpreted in the context of that guess. In optical illusions, our senses make educated guesses that turn out to be wrong.
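One hedged way to picture the "educated guess" is as a top-down expectation blended with noisy bottom-up evidence; the numbers below are purely illustrative:

def interpret(evidence, prior_guess, trust_in_prior=0.4):
    # Blend the bottom-up evidence with the top-down educated guess;
    # the guess pulls the final interpretation toward what was expected.
    return trust_in_prior * prior_guess + (1 - trust_in_prior) * evidence

noisy_evidence = 0.55    # ambiguous signal from the senses
educated_guess = 0.90    # what the higher layer expects to see
print(interpret(noisy_evidence, educated_guess))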
Instead of being programmed algorithmically, a neural network must be configured through training or through delicate tuning of individual neurons. For example, training a neural network to recognize faces would require many training runs in which different "facelike" and "unfacelike" objects are shown to the network, accompanied by positive or negative feedback, to coax the network into improving its recognition skills.
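The text does not specify a training rule, but a simple perceptron-style update is one concrete way to apply the positive and negative feedback it describes; the "facelike" feature vectors below are hypothetical stand-ins for real image data:

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: 3-number feature vectors labelled 1 ("facelike")
# or 0 ("unfacelike"). Real face recognition would use far richer inputs.
examples = [(np.array([0.9, 0.8, 0.7]), 1),
            (np.array([0.1, 0.2, 0.1]), 0),
            (np.array([0.8, 0.9, 0.6]), 1),
            (np.array([0.2, 0.1, 0.3]), 0)]

weights = rng.random(3)
bias = 0.0

for _ in range(20):                          # repeated training runs
    for features, label in examples:
        guess = 1 if features @ weights + bias > 0 else 0
        error = label - guess                # the positive or negative feedback
        weights += 0.1 * error * features    # nudge the weights after a mistake
        bias += 0.1 * error

print([1 if f @ weights + bias > 0 else 0 for f, _ in examples])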

Feedforward Neural Network

A feedforward neural network is a type of neural network where the unit connections do not travel in a loop, but rather in a single directed path. This differs from a recurrent neural network, where information can move both forwards and backwards throughout the system. A feedforward neural network is perhaps the most common type of neural network, as it is one of the easiest to understand and configure. These types of neural networks are used in data mining and other areas of study where predictive behavior is required.
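As a rough contrast, not taken from the article, a feedforward pass touches each layer exactly once, while a recurrent step feeds its own previous state back in:

def feedforward(x, layers):
    for f in layers:          # one directed pass; no layer is visited twice
        x = f(x)
    return x

def recurrent(sequence, step, state=0.0):
    for x in sequence:        # the state loops back into every new step
        state = step(x, state)
    return state

print(feedforward(1.0, [lambda v: 2 * v, lambda v: v + 3]))   # -> 5.0
print(recurrent([1.0, 2.0, 3.0], lambda x, s: x + 0.5 * s))   # carries a running memory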
A neural network is an artificial intelligence system designed to loosely imitate the "thinking" processes of a human brain. By feeding streams of data into the network, it is given opportunities to "learn" the patterns flowing through it, enabling it to identify answers correctly and provide trend analysis. Neural networks are used in tasks where a degree of learning and pattern recognition is required, such as data mining. Data mining is simply the analysis of trends in a collection of information, such as consumer purchasing behavior or stock market movements.
Information traveling through a feedforward neural network goes into the input layer, travels through the hidden layer, and emerges from the output layer of the network, providing the end user with an answer to their query. The input layer is simply the place where the user enters the raw data or parameters. The meat of the transaction takes place in the hidden layer, where the computer falls back on its "experience" of handling similar data to produce an estimated reply. The result is then funneled through the output layer, where an answer is returned to the end user.
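A minimal sketch of that input-to-hidden-to-output flow, with hand-picked weights standing in for the network's accumulated "experience," could look like this:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_hidden = np.array([[0.5, -0.4],
                     [0.3,  0.8]])    # input layer -> hidden layer
W_output = np.array([[1.2],
                     [-0.7]])         # hidden layer -> output layer

query = np.array([0.9, 0.2])          # raw parameters entered by the user
hidden = sigmoid(query @ W_hidden)    # the "meat of the transaction"
answer = sigmoid(hidden @ W_output)   # funneled through the output layer
print(answer)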
A feedforward neural network typically becomes more accurate as the end user provides it with more and more training data. Much like calculating an average, a more reliable result is reached by using a large number of test events. For example, the probability of rolling a "1" on a six-sided die is 16.667 percent, but it may take hundreds or thousands of simulated rolls before the observed average settles near that figure. Feedforward neural networks are the same; their responses become more accurate with time and experience.
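The die example can be checked with a quick simulation; the exact counts will vary from run to run, but the observed frequency only settles near 16.667 percent after many rolls:

import random

random.seed(0)
for n in (10, 100, 10_000, 1_000_000):
    ones = sum(1 for _ in range(n) if random.randint(1, 6) == 1)
    print(f"{n:>9} rolls: {100 * ones / n:.3f}% ones")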
