Data Science at Home
Compressing deep learning models: distillation (Ep.104)

Running large deep learning models on limited hardware or edge devices is often prohibitive. Fortunately, there are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.

In this episode I explain one of the first such methods: knowledge distillation.
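For listeners who want to see the idea in code, here is a minimal sketch of distillation: a small "student" network is trained to match the temperature-softened output distribution of a large, frozen "teacher", blended with the usual cross-entropy on the true labels. This sketch assumes PyTorch; the layer sizes, temperature T, and weighting alpha are illustrative placeholders, not values from the episode.

```python
# Minimal knowledge-distillation sketch (illustrative, not from the episode).
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend the soft-target loss (teacher) with the hard-target loss (labels)."""
    # Soft targets: match the teacher's temperature-softened distribution.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Illustrative teacher (large) and student (small) networks.
teacher = nn.Sequential(nn.Linear(784, 1200), nn.ReLU(), nn.Linear(1200, 10))
student = nn.Sequential(nn.Linear(784, 30), nn.ReLU(), nn.Linear(30, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 784)        # dummy batch of inputs
y = torch.randint(0, 10, (32,)) # dummy labels

with torch.no_grad():           # the teacher is frozen during distillation
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
optimizer.step()
```

Only the student's parameters are updated; in practice you would pretrain the teacher first and distill over the full training set.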

Come join us on Slack

Reference

Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network. arXiv:1503.02531
