Call for Papers
Foundation models underpin a new wave of machine learning applications, spanning natural language understanding, image processing, protein folding, and many more domains. The main objective of this workshop is to bring the attention of the machine learning and systems communities to the upcoming architectural and system challenges posed by foundation models, and to drive the productive use of these models in the chip design process and in system design. Subject areas of the workshop include (but are not limited to):
- 🆕 Agents for accelerating hardware development and improving hardware design productivity
- 🆕 System design for extremely large chain-of-thought-reasoning models
- 🆕 Noisy hardware-efficient approximation (e.g., numerics and analog)
- 🆕 Generative AI for security and vulnerability detection, design verification, and testing
- 🆕 Self-optimizing hardware using ML
- 🆕 Hardware accelerators for neurosymbolic and hybrid AI models
- 🆕 ML-driven resilient computing
- System and architecture support for foundation models at scale
- Efficient model compression techniques (e.g., quantization, sparsity)
- Efficient and sustainable training and serving
- Benchmarking and evaluation of foundation models
- Learned models for computer architecture and systems optimization
- Machine learning techniques for compiler and code optimization
- Distributed systems and infrastructure design for machine learning workloads
- Machine learning for hardware/software co-design (AutoML for Hardware)
- Automated machine learning in EDA tools
- Optimized code generation for hardware and software
- Evaluation of deployed machine learning systems and architectures
Areas: Computer Architecture, Systems, Compilers, Model Scaling, Security, Self-Attention, Foundation Models, EDA, Foundation Model Compression.