Sparsity in RL

In the first year of my PhD I worked, among other things, on dynamic sparse training for deep reinforcement learning. On MuJoCo environments such as Humanoid-v3 and HalfCheetah-v3, we found that the level to which the models can be sparsified without performance degradation (>90% for Humanoid, ~70% for HalfCheetah) depends on the particular environment.
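To make "dynamic sparse training" concrete, here is a minimal sketch of a SET-style prune-and-regrow step, the core mechanism behind most dynamic sparse training methods: periodically drop the smallest-magnitude active weights of a layer and regrow the same number of connections at random inactive positions, so the overall sparsity level stays fixed. This is a generic illustration in NumPy, not the exact algorithm or hyperparameters from my project; `prune_frac` and the zero initialisation of regrown weights are assumptions.

```python
import numpy as np

def prune_and_regrow(weights, mask, prune_frac=0.3, rng=None):
    """One SET-style update on a single layer.

    weights: 2D float array of layer weights (dense storage).
    mask:    2D bool array; True marks an active connection.
    Returns the updated (weights, mask) with the same number of
    active connections as before.
    """
    if rng is None:
        rng = np.random.default_rng(0)

    flat_w, flat_m = weights.ravel(), mask.ravel()
    active = np.flatnonzero(flat_m)
    n_prune = int(prune_frac * active.size)

    # Prune: deactivate the n_prune active weights with smallest magnitude.
    drop = active[np.argsort(np.abs(flat_w[active]))[:n_prune]]
    flat_m[drop] = False
    flat_w[drop] = 0.0

    # Regrow: activate n_prune random currently-inactive positions,
    # initialising the new weights to zero (they learn from scratch).
    inactive = np.flatnonzero(~flat_m)
    grow = rng.choice(inactive, size=n_prune, replace=False)
    flat_m[grow] = True
    flat_w[grow] = 0.0

    return weights * mask, mask
```

For example, a layer initialised at ~90% sparsity (the Humanoid regime above) keeps that sparsity through every prune-and-regrow cycle; only which connections are active changes between cycles, interleaved with ordinary gradient updates on the active weights.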

See the poster here; I presented it at the Sparse Neural Networks (SNN) workshop in July 2022.