In forward propagation, random weights are initially assigned to the neurons while training the neural network, and at the end we get a predicted output, represented as ŷ. We also know what the actual output (represented by y) should be.
Now, the loss function is calculated as (y − ŷ)². Optimizers are then used during backpropagation to adjust the weights, and this is repeated until ŷ gets as close to y as possible, i.e., until the loss is minimized.
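This loop can be sketched in a few lines. A minimal, hypothetical single-weight example (the names `w`, `lr`, and the toy data are illustrative, not from the original):

```python
import numpy as np

# Toy data: the true relationship is y = 2x.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # actual outputs

w = 0.5                          # randomly initialized weight
lr = 0.1                         # learning rate

for _ in range(50):
    y_hat = w * x                            # forward propagation
    loss = np.mean((y - y_hat) ** 2)         # (y - y_hat)^2, averaged over samples
    grad = np.mean(-2 * (y - y_hat) * x)     # dLoss/dw
    w -= lr * grad                           # optimizer step (plain gradient descent)

print(round(w, 3))  # w converges toward 2.0, so y_hat approaches y
```

Each pass computes the loss, then nudges the weight in the direction that shrinks it; after enough iterations the predicted output ŷ matches the actual output y almost exactly.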
Let’s learn about various types of optimization algorithms -
Intuition — Consider a dataset with 100k records. The entire dataset is passed through forward propagation, and then the loss…
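The intuition above describes (full-)batch gradient descent: all 100k records contribute to a single weight update. A minimal sketch, assuming a one-feature linear model (the dataset and variable names are illustrative):

```python
import numpy as np

# Hypothetical dataset of 100k records; the true slope is 3.0.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 3.0 * x + rng.normal(scale=0.1, size=100_000)

w = 0.0
lr = 0.1

# Batch gradient descent: the ENTIRE dataset is used for every single update.
for _ in range(100):
    y_hat = w * x                            # forward pass over all 100k records
    grad = np.mean(-2 * (y - y_hat) * x)     # gradient averaged over the whole dataset
    w -= lr * grad                           # one weight update per full pass

print(round(w, 2))  # w approaches 3.0
```

One update per full pass makes each step stable but expensive; variants such as stochastic and mini-batch gradient descent trade some of that stability for much cheaper updates.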
Activation functions determine the output of the neurons in a neural network. They affect both the accuracy of the network and the computational power needed to train it. An activation function is attached to each neuron and acts as a gate, 'firing' the neuron when the right set of inputs is received.
Example — You dip your hand in cold water and you feel the sensation of cold and when you dip your hand in hot water you sense the hotness. …
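Two of the most common activation functions can be sketched directly (a minimal illustration, not tied to any particular framework):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1); behaves like a smooth on/off gate.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Passes positive inputs through unchanged, outputs 0 otherwise.
    return np.maximum(0.0, z)

# A neuron "fires" when its weighted input pushes the activation high.
z = np.array([-2.0, 0.0, 2.0])
print(np.round(sigmoid(z), 3))  # [0.119 0.5   0.881]
print(relu(z))                  # [0. 0. 2.]
```

Like the hot/cold sensation in the example, the output changes smoothly (sigmoid) or switches on past a threshold (ReLU) depending on the stimulus.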
A recurrent neural network is a type of deep learning neural net that remembers the input sequence, stores it in memory states/cell states, and uses it to predict the next element in the sequence, such as the next word or sentence.
RNNs work well with inputs that come in the form of sequences. As an example, consider: “I like eating ice-creams. My favorite is chocolate ____”.
For humans, it is obvious to fill in the blank with the word ice-cream, but a machine has to understand the context and remember the previous words in the sentence to predict the subsequent word. This is where RNNs are useful.
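The "memory" comes from a hidden state that is carried forward at every step. A minimal vanilla-RNN cell sketch, assuming random illustrative weights and toy input vectors standing in for word embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)
hidden_size, input_size = 4, 3
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the "memory")
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The new hidden state mixes the current input with the previous state,
    # which is how the network remembers earlier words in the sequence.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a toy sequence of 5 input vectors (e.g., word embeddings).
h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):
    h = rnn_step(x_t, h)

print(h.shape)  # (4,) -- a running summary of everything seen so far
```

After the last step, `h` summarizes the whole sentence; a real model would feed it through an output layer to score candidate next words like "ice-cream".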
Applications — Speech recognition (Google Voice Search), Machine…