Can we systematically accelerate basic mathematical operations in computers by trading off precision in the results in a stochastic manner? In this talk we address this question for one of the most basic operations: matrix multiplication.
The general matrix multiply (GEMM) function is the core element of high-performance linear algebra libraries used in many DSP applications. We propose an acceleration technique for GEMM based on dynamically allowing (and controlling) imprecision of computation (distortion). We derive the optimal throughput-distortion control framework for GEMM for a broad class of input sources. Our approach thus converts matrix multiplication in processors into a computation channel: as processing throughput increases, the output noise (error) increases due to (i) coarser quantization and (ii) computational errors caused by exceeding the machine-precision limitations.
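The talk's analytical framework is not reproduced here, but the underlying throughput-distortion tradeoff can be illustrated with a minimal sketch: uniformly quantize the GEMM inputs to a chosen bit depth (a stand-in for the throughput knob) and measure the resulting output distortion. The quantizer, matrix sizes, and bit depths below are illustrative assumptions, not the method from the talk.

```python
import numpy as np

def quantized_gemm(A, B, bits):
    """Compute A @ B after uniformly quantizing both inputs to `bits` bits.

    Fewer bits stands in for higher effective throughput; the talk's
    framework chooses this operating point optimally, which this sketch
    does not attempt.
    """
    def quantize(X, bits):
        # Uniform quantizer over the dynamic range of X.
        levels = 2 ** bits
        lo, hi = X.min(), X.max()
        step = (hi - lo) / levels
        return lo + (np.floor((X - lo) / step) + 0.5) * step

    return quantize(A, bits) @ quantize(B, bits)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))
exact = A @ B
for bits in (8, 4, 2):
    approx = quantized_gemm(A, B, bits)
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"{bits} bits: relative error {err:.3f}")
```

Running this shows the distortion growing monotonically as the bit depth shrinks, which is the "computation channel" behaviour described above: precision is traded for (potential) speed in a controlled way.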
We validate the benefits of the proposed framework within three DSP applications: a noise cancellation system, a face recognition application, and neural-network training for metadata feature learning on a large music database.
- Speaker
- Dr Yiannis Andreopoulos
- Hosted by
- School of Engineering
- Venue
- FN3