Our Director of ML Architecture, Lawrence Spracklen, is speaking virtually at AI DevWorld on October 27th, 1:00–1:50 PM PDT. He will be talking about Numenta’s sparse networks and the algorithms that unlock the full potential of sparsity on current hardware platforms.
AI DevWorld is the world’s largest artificial intelligence developer conference, with tracks covering chatbots, machine learning, open source AI libraries, AI for the enterprise, and deep AI / neural networks. The conference targets software engineers and data scientists looking for an introduction to AI, as well as AI development professionals seeking a landscape view of the newest AI technologies. Register here.
In recent years, interest in sparse neural networks has steadily increased, accelerated by NVIDIA’s inclusion of dedicated hardware support in their recent Ampere GPUs. Sparse networks feature both limited interconnections between neurons and restrictions on the number of neurons permitted to become active at once. Introducing this weight and activation sparsity significantly simplifies the computations required to both train and use the network. Sparse networks can match the accuracy of their traditional ‘dense’ counterparts while having the potential to outperform them in speed by an order of magnitude or more. In this presentation we start by discussing the opportunity associated with sparse networks and provide an overview of the state-of-the-art techniques used to create them. We conclude by presenting new software algorithms that unlock the full potential of sparsity on current hardware platforms, highlighting 100X speedups on FPGAs and 20X on CPUs and GPUs.
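To make the two kinds of sparsity in the abstract concrete, here is a minimal NumPy sketch (illustrative only, not Numenta’s actual implementation): weight sparsity zeroes most of a layer’s connections via a fixed mask, and activation sparsity keeps only the k largest outputs, a k-winners-take-all rule. The layer sizes, sparsity levels, and function names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_winners(x, k):
    """Activation sparsity: keep the k largest activations, zero the rest."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]          # indices of the k largest values
    out[top] = x[top]
    return out

# A dense layer mapping 128 inputs to 64 units (sizes are arbitrary here).
w = rng.standard_normal((64, 128))

# Weight sparsity: retain ~10% of connections, zero the remaining ~90%.
mask = rng.random(w.shape) < 0.10
w_sparse = w * mask

x = rng.standard_normal(128)

# Forward pass: sparse weights, ReLU, then keep only 6 active units.
y = k_winners(np.maximum(w_sparse @ x, 0.0), k=6)

print("weight density:", mask.mean())          # ~0.10
print("active units:", np.count_nonzero(y))    # at most 6 of 64
```

Because most weights and most activations are zero, the bulk of the multiply-accumulate work in `w_sparse @ x` can be skipped; exploiting those zeros efficiently on real hardware is what the software algorithms in the talk address.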