Data Science at Home
Make Stochastic Gradient Descent Fast Again (Ep. 113)


There is definitely room for improvement in the stochastic gradient descent family of algorithms. In this episode I explain a relatively simple method that has been shown to improve on the Adam optimizer. But watch out: this approach does not generalize well.
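The episode description does not spell out the specific method, so as a point of reference here is a minimal NumPy sketch of the two baselines being discussed: a vanilla SGD step and an Adam step. The function names (`sgd_step`, `adam_step`) are illustrative and not taken from the episode.

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    """Vanilla stochastic gradient descent: move against the gradient."""
    return params - lr * grads

def adam_step(params, grads, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: bias-corrected moving averages of the gradient
    (first moment) and of the squared gradient (second moment)."""
    m = beta1 * m + (1 - beta1) * grads        # first moment estimate
    v = beta2 * v + (1 - beta2) * grads ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)               # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

Any improvement on Adam discussed in the episode would modify how these moment estimates or the effective step size are computed; the sketch above only restates the standard update rules.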

Join our Discord channel and chat with us.

