Glossary
What is Vanishing / Exploding Gradient
The concepts of Vanishing Gradient and Exploding Gradient are crucial in the context of deep learning and neural network training.
Vanishing Gradient refers to the scenario where gradients shrink toward zero as they are propagated backward through the layers, effectively causing the weights in the earliest layers to stop updating. This phenomenon is prevalent in deep networks, particularly those using sigmoid or tanh activation functions: their derivatives are at most 0.25 and 1 respectively, so the chain rule multiplies the gradient by a small factor at every layer, leading to slow learning or stagnation.
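The effect can be seen directly from the chain rule. Below is a minimal NumPy sketch (not from the original text; the layer count, width, and random initialization are illustrative assumptions) that backpropagates a gradient through a stack of sigmoid layers and prints its shrinking norm.

```python
import numpy as np

# Illustrative sketch: the sigmoid derivative s'(x) = s(x) * (1 - s(x)) never
# exceeds 0.25, so backpropagating through many sigmoid layers multiplies the
# gradient by a small factor at every step and drives it toward zero.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_layers, width = 30, 64
grad = np.ones(width)                         # upstream gradient at the output

for layer in range(n_layers):
    w = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))
    pre_activation = rng.normal(size=width)   # stand-in for the layer's input
    local_grad = sigmoid(pre_activation) * (1 - sigmoid(pre_activation))
    grad = (w.T @ grad) * local_grad          # chain rule through one layer
    if layer % 5 == 4:
        print(f"layer {layer + 1:2d}: gradient norm = {np.linalg.norm(grad):.2e}")
```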
On the other hand, Exploding Gradient occurs when gradients grow excessively large during backpropagation, resulting in unstable weight updates, numerical overflow, and divergence of training. This is often seen in networks with many layers, and especially in recurrent networks unrolled over long sequences, where repeated multiplication by large weight values compounds layer after layer; unbounded activations such as ReLU do nothing to damp this growth.
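The mirror image appears when the weights are large. The sketch below (again illustrative, with an inflated weight scale and a purely linear stack assumed for simplicity) shows the same backward pass growing by roughly a constant factor per layer.

```python
import numpy as np

# Illustrative sketch: with weight entries drawn at std 2/sqrt(width), each
# backward step multiplies the gradient norm by roughly 2, so after 30 layers
# the gradient has grown by about nine orders of magnitude.
rng = np.random.default_rng(0)
n_layers, width = 30, 64
grad = np.ones(width)

for layer in range(n_layers):
    w = rng.normal(0.0, 2.0 / np.sqrt(width), size=(width, width))
    grad = w.T @ grad                         # chain rule, no saturating activation
    if layer % 5 == 4:
        print(f"layer {layer + 1:2d}: gradient norm = {np.linalg.norm(grad):.2e}")
```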
Both phenomena significantly impact the training efficiency and effectiveness of deep learning models. Researchers have proposed architectures such as the LSTM, whose gating and additive cell-state updates help preserve gradient flow and mitigate Vanishing Gradients, and techniques such as gradient clipping to keep Exploding Gradients in check.
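Gradient clipping is straightforward to apply in practice. The following is a minimal sketch of a single training step using PyTorch's torch.nn.utils.clip_grad_norm_; the model, data, and clipping threshold are placeholder assumptions, not part of the original text.

```python
import torch
import torch.nn as nn

# Placeholder model, optimizer, and data for a single illustrative step.
model = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
x, y = torch.randn(16, 32), torch.randn(16, 1)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()

# Rescale gradients so their global L2 norm is at most 1.0, preventing a
# single oversized update from destabilizing training.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

optimizer.step()
```

Clipping by global norm preserves the direction of the update while capping its magnitude, which is why it is a standard remedy for exploding gradients, particularly in recurrent networks.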
As deep learning evolves, addressing these issues remains crucial, with new activation functions, weight-initialization schemes, and network designs aimed at maintaining gradient stability.