A Novel Analog-Computing-in-Memory Architecture with Scalable Multi-Bit MAC Operations and Flexible Weight Organization for DNN Acceleration


UNUTULMAZ A.

Electronics (Switzerland), vol. 14, no. 20, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 14 Issue: 20
  • Publication Date: 2025
  • DOI: 10.3390/electronics14204030
  • Journal Name: Electronics (Switzerland)
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Communication Abstracts, INSPEC, Metadex, Directory of Open Access Journals, Civil Engineering Abstracts
  • Keywords: analog-domain AI accelerator, computing-in-memory, multiply-and-accumulate
  • Marmara University Affiliated: Yes

Abstract

Deep neural networks (DNNs) require efficient hardware accelerators due to the high cost of vector–matrix multiplication operations. Computing-in-memory (CIM) architectures address this challenge by performing computations directly within memory arrays, reducing data movement and improving energy efficiency. This paper introduces a novel analog-domain CIM architecture that enables flexible organization of weights across both rows and columns of the CIM array. A pipelining scheme is also proposed to decouple the multiply-and-accumulate and analog-to-digital conversion operations, thereby enhancing throughput. The proposed architecture is compared with existing approaches in terms of latency, area, energy consumption, and utilization. The comparison emphasizes architectural principles while deliberately avoiding implementation-specific details.
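The workload at the heart of the abstract, vector–matrix multiply-and-accumulate (MAC), can be sketched in plain Python. This is an illustrative model only, not the paper's architecture: in an analog CIM array the weights reside in the memory cells, each column accumulates the input–weight products as an analog quantity, and an ADC digitizes the column sum; here the same arithmetic is modeled digitally, with the `vector_matrix_mac` function name chosen for this sketch.

```python
def vector_matrix_mac(x, W):
    """Return y where y[j] = sum_i x[i] * W[i][j].

    Models the MAC operation a CIM array performs: x plays the role of
    the input activations driven onto the rows, W the weights stored in
    the cells, and each y[j] the accumulated value read from column j.
    """
    rows, cols = len(W), len(W[0])
    y = [0] * cols
    for j in range(cols):          # one CIM column per output element
        acc = 0
        for i in range(rows):      # accumulate products down the column
            acc += x[i] * W[i][j]
        y[j] = acc
    return y

# Example: a 3-element input against a 3x2 weight matrix.
x = [1, 2, 3]
W = [[1, 0],
     [0, 1],
     [1, 1]]
print(vector_matrix_mac(x, W))  # → [4, 5]
```

The column-wise inner loop mirrors why CIM is attractive: all products in a column are summed in place, so the accumulation costs no data movement between memory and a separate compute unit.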