Towards Building a Compiler for a CIM architecture
Master Project


Project Description

Compute-in-Memory (CIM) architectures promise dramatic gains in performance and energy efficiency by performing computation where data resides, thereby reducing costly data movement. However, most existing CIM systems are tailored to narrow workloads (e.g., linear algebra or machine learning), and programming them typically requires low-level, hardware-specific expertise. A compiler for a general-purpose CIM model would enable developers to express algorithms in familiar high-level languages while automatically mapping computation, data layout, and control to the constraints of in-memory execution. Such a compiler is essential to unlock CIM’s benefits beyond specialized accelerators and make it viable for broader classes of applications. Ultimately, this would bridge the gap between emerging CIM hardware capabilities and practical, portable software development.

What is the goal of this project?

The goal of this project is to design and implement a minimal compiler from a given DSL to a particular CIM architecture, taking into account the following meta-abstractions: implicit scheduling, hardware generalization, optimization, and explicit expression of parallelism.
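To make the task concrete, the following Python sketch illustrates one possible shape of such a compiler: a tiny expression DSL whose dot-product nodes are lowered onto a (simulated) CIM tile, while the remaining operations fall back to host execution. All names here (`CimTile`, `lower`, `Dot`, etc.) are hypothetical, invented purely for illustration; they do not correspond to any existing framework or to the architecture targeted by this project.

```python
# Illustrative sketch only: a toy DSL and a lowering pass that maps
# dot-product nodes onto a simulated CIM tile. All names are hypothetical.

from dataclasses import dataclass

# --- Tiny DSL: programs are trees of these nodes ---
@dataclass
class Vec:           # a named input vector
    name: str

@dataclass
class Dot:           # dot product of two vectors
    a: Vec
    b: Vec

@dataclass
class Add:           # scalar addition of two sub-expressions
    lhs: object
    rhs: object

# --- Hardware model: a CIM tile that natively executes dot products ---
class CimTile:
    def __init__(self, weights):
        self.weights = weights            # values programmed into the array
    def mvm(self, x):                     # in-memory multiply-accumulate
        return sum(w * xi for w, xi in zip(self.weights, x))

# --- Lowering: Dot nodes go to CIM tiles, everything else to the host ---
def lower(expr, env):
    """Compile expr into a closure; Dot runs on a CimTile, Add on the host."""
    if isinstance(expr, Vec):
        return lambda: env[expr.name]
    if isinstance(expr, Dot):
        tile = CimTile(env[expr.a.name])  # program one operand in-memory
        return lambda: tile.mvm(env[expr.b.name])
    if isinstance(expr, Add):
        f, g = lower(expr.lhs, env), lower(expr.rhs, env)
        return lambda: f() + g()
    raise NotImplementedError(type(expr))

env = {"w": [1, 2, 3], "x": [4, 5, 6]}
prog = Add(Dot(Vec("w"), Vec("x")), Dot(Vec("w"), Vec("w")))
run = lower(prog, env)
print(run())   # 32 + 14 = 46
```

Even this toy example exposes the design questions the project must answer: which DSL operations map onto in-memory primitives, where data is placed, and how the scheduling of CIM versus host work is decided without exposing the hardware to the programmer.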

Further Reading

  • Starting point:
  • Related Work:
    • Asif Ali Khan et al. “CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms”. In: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 4. ASPLOS ’24. Hilton La Jolla Torrey Pines, La Jolla, CA, USA: Association for Computing Machinery, 2025, pp. 31–46. ISBN: 9798400703911. DOI: 10.1145/3622781.3674189.
    • Songyun Qu et al. “CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators”. In: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2. ACM, Apr. 2024, pp. 185–200. DOI: 10.1145/3620665.3640359.
    • João Ambrosi et al. “Hardware-Software Co-Design for an Analog-Digital Accelerator for Machine Learning”. In: 2018 IEEE International Conference on Rebooting Computing (ICRC). 2018, pp. 1–13. DOI: 10.1109/ICRC.2018.8638612.
    • Andi Drebes et al. “TC-CIM: Empowering Tensor Comprehensions for Computing-In-Memory”. In: IMPACT 2020 workshop (associated with HIPEAC 2020). Informal proceedings. 2020.
    • Jonathan Ragan-Kelley et al. “Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines”. In: SIGPLAN Not. 48.6 (June 2013), pp. 519–530. ISSN: 0362-1340. DOI: 10.1145/2499370.2462176.
    • Asif Ali Khan et al. The Landscape of Compute-near-memory and Compute-in-memory: A Research and Commercial Overview. 2024. arXiv: 2401.14428 [cs.AR].

Supervisor(s): Andreea Costea, Mahmood Naderan-Tahan
Posted: February 24, 2026