Circuits and Systems for Efficient Machine Learning and Artificial Intelligence
Overview
Machine Learning (ML) and Artificial Intelligence (AI) applications require increasingly advanced algorithms to extract meaningful information from ever-larger data sets. While the algorithmic side has recently seen significant advances (for instance through the adoption of Deep and Convolutional Neural Networks), these often come at the cost of high computational complexity, which hinders their straightforward implementation.
This activity aims to address these challenges by:
- proposing architectures that reduce the computational cost of ML and AI algorithms, either through hardware design alone or through joint hardware-algorithm co-design; examples include the use of weights with greatly reduced precision in DNNs and the minimization of data transfer by moving computation to the edge of the cloud;
- modifying big-data processing algorithms to greatly reduce memory requirements and hardware complexity, for instance through suitably adapted streaming principal component analysis algorithms;
- designing methodologies for neural architecture search (NAS) and for co-optimized training and hardware implementation of mixed-precision ML algorithms;
- exploiting ML and AI techniques to design efficient, high-performance hardware architectures for image and video coding.
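To make the first point concrete, the sketch below illustrates the basic idea behind reduced-precision DNN weights: mapping floating-point weights onto a small signed-integer grid with a single scale factor. The function names (`quantize_uniform`, `dequantize`) and the per-tensor scaling are illustrative assumptions; practical schemes typically add per-channel scales and quantization-aware training.

```python
import numpy as np

def quantize_uniform(w, bits=8):
    """Uniformly quantize a weight tensor to a signed integer grid.

    Illustrative sketch only: one scale factor is shared by the whole
    tensor, so each reconstructed weight is off by at most scale / 2.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(w)) / qmax      # per-tensor scale (an assumption)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate floating-point weights."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and check the rounding error
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_uniform(w, bits=8)
w_hat = dequantize(q, s)
print(np.max(np.abs(w - w_hat)))  # bounded by s / 2
```

Storing `q` instead of `w` cuts memory by 4x versus float32 and allows cheap integer arithmetic, which is the hardware saving the bullet above refers to.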
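The memory saving promised by streaming PCA can be sketched with Oja's rule, a classical single-pass estimator of the top principal component: it keeps only a d-dimensional vector in memory instead of a d x d covariance matrix. The function name, learning rate, and synthetic data below are illustrative assumptions, not the specific algorithms developed in this activity.

```python
import numpy as np

def streaming_top_pc(samples, lr=0.01):
    """Estimate the top principal component one sample at a time (Oja's rule).

    Each sample is seen once and discarded; memory is O(d) rather than
    the O(d^2) needed to accumulate a full covariance matrix.
    """
    it = iter(samples)
    w = np.array(next(it), dtype=np.float64)
    w /= np.linalg.norm(w)            # start from the first sample's direction
    for x in it:
        y = w @ x                     # projection onto the current estimate
        w += lr * y * (x - y * w)     # Oja update toward the top eigenvector
        w /= np.linalg.norm(w)        # renormalize for numerical stability
    return w

# Synthetic stream whose dominant direction is (1, 1) / sqrt(2)
rng = np.random.default_rng(1)
direction = np.array([1.0, 1.0]) / np.sqrt(2)
data = [3.0 * rng.standard_normal() * direction
        + 0.1 * rng.standard_normal(2) for _ in range(5000)]
w = streaming_top_pc(data)
print(np.abs(w @ direction))  # close to 1 when the estimate aligns
```

The same one-pass, constant-memory structure is what makes such algorithms attractive for hardware implementation on large data streams.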