Call for Papers: IEEE Micro Special Issue on Data-Centric Computing

Submissions due: February 27, 2025
Publication: Sept/Oct 2025


With the proliferation of mobile and edge computing devices, data generation continues to grow at an exponential rate, reaching an estimated 181 zettabytes of data created per year by 2025. In response, computing systems large and small must process ever-increasing amounts of data quickly and efficiently, leading to the rise of data-centric computing.
Data-centric computing covers a broad range of hardware and software co-design topics, spanning techniques that (1) reduce the amount of data transmitted, (2) optimize data movement using knowledge of the latency and bandwidth of the connections between compute and sources of data, (3) integrate specialized heterogeneous or non-von Neumann components into data-processing systems, or (4) develop new methods to synthesize or summarize data in place or to minimize the overhead of data accesses.
A common thread emerging across data-centric computing techniques is the need for hardware/software co-design of compute, memory, storage, and interconnect to deliver sizable improvements in performance and energy efficiency, relying on both traditional and unconventional scaling techniques.

This special issue of IEEE Micro solicits academic and industrial research on co-designed solutions that revisit traditional boundaries between compute, memory, storage, interconnect, and software to support new architectures and programming abstractions. The solutions that will meet the test of time will balance specificity with generality, codify general principles, define metrics to measure a solution's benefits, and highlight remaining challenges. These solutions will serve as a template for how to apply future innovations in hardware and software to emerging use cases that generate even more data.

TOPICS OF INTEREST

  • Novel systems that address application domains currently limited by bandwidth or media latency (e.g., large-scale AI training and inference, databases, computational genomics, HPC), and demonstrate dramatic improvements in end-to-end application performance and/or reduction in overall energy use
  • Computation near or in media (e.g., processing-in-memory, processing-near-memory, processing-using-memory, in-storage computing) using digital or analog computational devices and the end-to-end hardware/software infrastructure required to prepare the data for computation
  • Techniques to monitor the lifetime of data and ensure long-term resilience of retained data in data-centric computing solutions
  • Operational datacenter challenges of migrating existing data and applications to new data-centric computing solutions to meet future application requirements
  • Techniques to mitigate the overhead of multi-tenant data-intensive applications and data processing infrastructure
  • Primitives or system/hardware architectural enhancements that use data processing units/infrastructure processing units (DPUs/IPUs) or peer-to-peer data movement to enable application software to schedule selective parts of large data sets for optimal data movement as compute becomes available
  • Tools to characterize and synthesize data-intensive workloads to model and explore possible system architectures and find new opportunities for efficient data processing in compute, interconnects, storage media, and software