Kale-ab Tessera
Deep Learning
Deep Learning Indaba Practicals 2022
A collection of high-quality practicals covering a variety of modern machine learning techniques.
Code
Just-in-Time Sparsity: Learning Dynamic Sparsity Schedules
Sparse neural networks have various benefits such as better efficiency and faster training and inference times, while maintaining the …
Kale-ab Tessera, Chiratidzo Matowe, Arnu Pretorius, Benjamin Rosman, Sara Hooker
PDF
Poster
Slides
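The general idea of a dynamic sparsity schedule can be illustrated with a small sketch. The snippet below is not the paper's method, only an assumed gradual magnitude-pruning schedule in PyTorch: the target sparsity level ramps up over training and weights are re-masked by magnitude at each step. The cubic ramp, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a sparsity schedule that changes during training.
# Not the paper's method; all settings here are illustrative assumptions.
import torch
import torch.nn as nn

def sparsity_at(step, total_steps, final_sparsity=0.9):
    # Cubic ramp from 0 to final_sparsity (an assumed schedule shape).
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1 - (1 - t) ** 3)

def magnitude_mask(weight, sparsity):
    # Keep the largest-magnitude weights and zero out the rest.
    k = int(weight.numel() * (1 - sparsity))
    if k == 0:
        return torch.zeros_like(weight)
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

model = nn.Linear(32, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
total_steps = 100
for step in range(total_steps):
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    # Re-mask according to the current point in the sparsity schedule.
    mask = magnitude_mask(model.weight.data, sparsity_at(step, total_steps))
    model.weight.data.mul_(mask)
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    model.weight.grad.mul_(mask)  # keep pruned weights at zero
    opt.step()
```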
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
We use gradient flow to study sparse network optimization.
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
PDF
Cite
Poster
Slides
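Informally, "gradient flow" here refers to how much gradient signal reaches a network's layers during training. The sketch below is not the paper's analysis: it applies a fixed random 50% mask to a small PyTorch MLP and logs the per-layer gradient norms on the surviving weights. The architecture, sparsity level, and logging interval are illustrative assumptions.

```python
# Minimal sketch of tracking gradient norms in a randomly masked network.
# Not the paper's analysis; all settings here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Fixed random masks at 50% sparsity for each weight matrix.
masks = {
    name: (torch.rand_like(p) > 0.5).float()
    for name, p in model.named_parameters() if p.dim() == 2
}

opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(50):
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    # Enforce sparsity before the forward pass.
    for name, p in model.named_parameters():
        if name in masks:
            p.data.mul_(masks[name])
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    # Log the gradient norm reaching each sparse layer's surviving weights.
    if step % 10 == 0:
        norms = {
            name: p.grad[masks[name].bool()].norm().item()
            for name, p in model.named_parameters() if name in masks
        }
        print(step, loss.item(), norms)
    for name, p in model.named_parameters():
        if name in masks:
            p.grad.mul_(masks[name])  # keep pruned weights at zero
    opt.step()
```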
On Sparsity In Deep Learning: The Benefits And Pitfalls Of Sparse Neural Networks And How To Learn Their Architectures
Overparameterization in deep learning has led to many breakthroughs in the field. However, overparameterized models also have …
Kale-ab Tessera
PDF
Cite