Neural Networks - Model Representation 1
At first glance, neural networks may seem like a black box: an input layer feeds the data into the "hidden layers," and, after what looks like a magic trick, the output layer presents the result.
A neural network usually involves a large number of processors operating in parallel, arranged in tiers. The first tier receives the raw input information, analogous to the optic nerves in human visual processing. Each successive tier receives the output from the tier preceding it rather than from the raw input, just as neurons farther from the optic nerve receive signals from those closer to it. The last tier produces the output of the system. In this sense, neural networks are loosely analogous to the networks of neurons in our brain.
Neural networks turn out to be a much better way to learn complex, nonlinear hypotheses, even when the input feature space is large, that is, even when n is large.
Model Representation
Our input nodes (layer 1), also known as the "input layer," feed into the next layer of nodes (layer 2); the final layer, which outputs the hypothesis function, is known as the "output layer."
We can have intermediate layers of nodes between the input and output layers called the “hidden layers.”
If we write a_i^(j) for the "activation" of unit i in layer j, the value of each activation node is obtained as follows:
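For concreteness, assuming the three-input, three-hidden-unit network implied by the 3×4 parameter matrix described next (with bias units x_0 = a_0^(2) = 1 and g the logistic function):

a_1^(2) = g(Θ^(1)_10 x_0 + Θ^(1)_11 x_1 + Θ^(1)_12 x_2 + Θ^(1)_13 x_3)
a_2^(2) = g(Θ^(1)_20 x_0 + Θ^(1)_21 x_1 + Θ^(1)_22 x_2 + Θ^(1)_23 x_3)
a_3^(2) = g(Θ^(1)_30 x_0 + Θ^(1)_31 x_1 + Θ^(1)_32 x_2 + Θ^(1)_33 x_3)
h_Θ(x) = a_1^(3) = g(Θ^(2)_10 a_0^(2) + Θ^(2)_11 a_1^(2) + Θ^(2)_12 a_2^(2) + Θ^(2)_13 a_3^(2))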
This is saying that we compute our activation nodes by using a 3×4 matrix of parameters. We apply each row of the parameters to our inputs to obtain the value for one activation node. Our hypothesis output is the logistic function applied to the sum of the values of our activation nodes, which have been multiplied by yet another parameter matrix Θ^(2) containing the weights for our second layer of nodes.
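As a minimal sketch, here is this forward pass in Python with NumPy; the weight values are random placeholders, purely for illustration:

```python
import numpy as np

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, theta1, theta2):
    # theta1: 3x4 matrix mapping layer 1 (3 inputs + bias) to layer 2 (3 units)
    # theta2: 1x4 matrix mapping layer 2 (3 units + bias) to the output unit
    a1 = np.concatenate(([1.0], x))    # prepend bias unit x_0 = 1
    a2 = sigmoid(theta1 @ a1)          # hidden-layer activations a_1..a_3
    a2 = np.concatenate(([1.0], a2))   # prepend bias unit a_0^(2) = 1
    return sigmoid(theta2 @ a2)        # hypothesis h_Theta(x)

# Hypothetical random weights, for illustration only
rng = np.random.default_rng(0)
theta1 = rng.standard_normal((3, 4))
theta2 = rng.standard_normal((1, 4))
print(forward(np.array([1.0, 2.0, 3.0]), theta1, theta2))
```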
Each layer gets its own matrix of weights, Θ^(j). In general, if a network has s_j units in layer j and s_(j+1) units in layer j+1, then Θ^(j) has dimension s_(j+1) × (s_j + 1); the extra column accounts for the bias unit. That is why the example above uses a 3×4 matrix: three hidden units, and three inputs plus the bias.
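A quick way to sanity-check these dimensions, sketched in Python (the helper name layer_sizes_to_thetas is hypothetical, used here only for illustration):

```python
import numpy as np

def layer_sizes_to_thetas(sizes, rng=None):
    # For layer sizes [s_1, ..., s_L], build one weight matrix per layer:
    # Theta^(j) maps layer j to layer j+1 and has shape s_(j+1) x (s_j + 1).
    rng = rng or np.random.default_rng(0)
    return [rng.standard_normal((s_next, s + 1))
            for s, s_next in zip(sizes, sizes[1:])]

for j, theta in enumerate(layer_sizes_to_thetas([3, 3, 1]), start=1):
    print(f"Theta^({j}) shape: {theta.shape}")
# Prints: Theta^(1) shape: (3, 4) and Theta^(2) shape: (1, 4)
```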
The picture below illustrates all of the concepts discussed above.
Read Next - Neural Networks - Model Representation 2