Master Thesis: Fault-Tolerant Techniques for Computation in Memory for Deep Learning Applications

Matrix-vector multiplication (MVM) is one of the most frequently performed operations in neural network (NN) hardware accelerators. The computation-in-memory (CiM) architecture is a promising alternative to the conventional processor-centric computing architecture and can perform MVM operations efficiently. However, the memristive devices used in CiM architectures suffer from various defects and faults, which can degrade the inference accuracy of deep learning applications. Fault-tolerant strategies are therefore necessary, such as error correction coding (ECC), fault-aware training, sensitivity analysis, and circuit-level and architecture-level techniques.
We want to develop a fault-tolerant strategy for the CiM architecture that is efficient in terms of computation, memory, area, latency, and energy, and that enhances the accuracy of NNs under faulty scenarios. The work is not limited to CiM; the idea can also be extended to digital hardware accelerators (GPUs, TPUs, etc.) for NN applications.
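
To give a flavor of the kind of evaluation involved, below is a minimal PyTorch sketch that injects faults into the weight matrix of a single linear layer (one MVM) and measures the output deviation. It assumes a simple stuck-at-zero fault model for memristive cells; the 5% fault rate, the layer dimensions, and the helper name inject_stuck_at_zero are illustrative choices only, not part of the thesis specification.

    import torch
    import torch.nn as nn

    def inject_stuck_at_zero(weight, fault_rate):
        # True where the (hypothetical) memristive cell is healthy,
        # False where it is stuck at zero
        mask = torch.rand_like(weight) >= fault_rate
        return weight * mask

    torch.manual_seed(0)
    layer = nn.Linear(64, 10, bias=False)   # a single MVM: y = W x
    x = torch.randn(64)

    y_clean = layer(x)                       # fault-free output
    with torch.no_grad():
        layer.weight.copy_(inject_stuck_at_zero(layer.weight, fault_rate=0.05))
    y_faulty = layer(x)                      # output with ~5% of cells stuck at zero

    print("max output deviation:", (y_clean - y_faulty).abs().max().item())

In the actual thesis, such fault injection would be applied to full NN models to quantify accuracy loss and to drive fault-aware training or other mitigation strategies.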

Skills required for the thesis

   •    Basic understanding of computer architecture and digital logic design
   •    Background in deep learning and its implementation using PyTorch
   •    Programming skills: Python, Verilog, C/C++

Skills acquired within the thesis

   •    Opportunity to contribute to the cutting edge of fault-tolerant NN accelerators
   •    Gain experience in developing digital logic designs using Verilog HDL
   •    Gain experience in evaluating NN performance for fault-free and faulty scenarios
   •    Potential to publish findings in a top-tier Design/Test/EDA conference or journal

PDF version of this advertisement