Call for Papers
Transformer models have become the foundation of a new wave of machine learning models. Applications of such models span from natural language understanding to image processing, protein folding, and many more. The main objective of this workshop is to bring our community's attention to the upcoming architecture and system challenges for these foundational models and to drive innovation in supporting efficient execution of these ever-scaling models. To achieve this, the workshop will consist of a combination of keynote speakers and short talks, followed by a panel discussion. Subject areas of the workshop include (but are not limited to):
- System and architecture support of transformer models at scale
- Distributed training and infrastructure support
- Efficient model compression techniques (e.g., quantization, sparsity)
- Chiplet architecture and system support for transformer models
- Efficient and sustainable training and serving
- Real-system evaluation of hardware and systems
- Benchmarking and evaluation of transformer models
- System and architecture support for Mixture-of-Experts (MoE) models
- Ethical accelerator and system design for AGI
Full Paper Submission Deadline: April 15th, 2023, 11:59 AoE (OpenReview)
Paper Notification (Tentative): May 1st, 2023
Workshop: June 17th, 2023