The current landscape of Machine Learning (ML) and Deep Learning (DL) is rife with non-uniform models, frameworks, and system stacks, and it lacks standard tools and methodologies for evaluating and profiling models or systems. In the absence of such tools, the state of the practice for evaluating and comparing the benefits of proposed AI innovations (be they hardware or software) on end-to-end AI pipelines is both arduous and error-prone, stifling the adoption of those innovations in a rapidly moving field.
The goal of this tutorial is to bring together experts from industry and academia to foster systematic development, reproducible evaluation, and performance analysis of deep learning artifacts. It seeks to address the following questions:
- Which benchmarks can effectively capture the scope of the ML/DL domain?
- Are the existing frameworks sufficient for this purpose?
- What are the industry-standard evaluation platforms and harnesses?
- Which metrics enable an effective comparative evaluation?
Register for the tutorial and enjoy an early-registration discount through May 24th (https://iscaconf.org/isca2019/registration.html).
Learn more here: https://sites.google.com/g.harvard.edu/mlperf-bench/home