Kale-ab Tessera
Just-in-Time Sparsity: Learning Dynamic Sparsity Schedules
Sparse neural networks have various benefits such as better efficiency and faster training and inference times, while maintaining the …
Kale-ab Tessera, Chiratidzo Matowe, Arnu Pretorius, Benjamin Rosman, Sara Hooker
PDF · Poster · Slides
On pseudo-absence generation and machine learning for locust breeding ground prediction in Africa
Desert locust outbreaks threaten the food security of a large part of Africa and have affected the livelihoods of millions of people …
Ibrahim Salihu Yusuf, Kale-ab Tessera, Thomas Tumiel, Zohra Slim, Amine Kerkeni, Sella Nevo, Arnu Pretorius
PDF · Cite · Code · Blog
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
We use gradient flow to study sparse network optimization.
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
PDF · Cite · Poster · Slides