CALL FOR PAPERS (with extended deadlines)

==========================================================

2021 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench’21)

http://www.benchcouncil.org/bench21/index.html

 Abstracts deadline: August 6, 2021 (extended deadline)

 Full papers deadline: August 6, 2021 (extended deadline)

 Conference dates: November 14–16, 2021 (Virtual)

==========================================================

Introduction

—————-

Sponsored and organized by the International Open Benchmark Council (BenchCouncil), the Bench conference encompasses a wide range of topics in benchmarking, measurement, evaluation methods and tools. Bench’s multi-disciplinary emphasis provides an ideal environment for developers and researchers from the architecture, system, algorithm, and application communities to discuss practical and theoretical work covering workload characterization, benchmarks and tools, evaluation, measurement and optimization, and dataset generation.

The Bench’21 conference invites manuscripts describing original work in the areas of benchmarking, evaluation methods, and tools in Big Data, Artificial Intelligence, High-Performance Computing, and Computing Architectures. All accepted papers will be presented at the Bench’21 conference and published in a special issue of the BenchCouncil Transactions on Benchmarks, Standards and Evaluation (TBench).

Sponsored by BenchCouncil, the Bench’21 conference will present numerous awards, including the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Distinguished Doctoral Dissertation Award ($1,000), and the BenchCouncil Best Paper Award ($1,000). To encourage reliable and reproducible research using benchmarks from all organizations, the Bench conference also presents the BenchCouncil Award for Excellence for Reproducible Research to papers that use publicly available benchmarks; each winning paper receives a $100 prize, for up to 12 papers.


Call for papers

————————

We solicit papers describing original and previously unpublished research. Specific topics of interest include, but are not limited to, the following.

**Benchmark and standard specifications, implementations, and validations:

 -Big Data, Artificial intelligence (AI), High-performance computing (HPC), Machine learning, Warehouse-scale computing, Mobile robotics, Edge and fog computing, Internet of Things (IoT), Blockchain, Data management and storage, Finance, Education, Medicine, or other application domains.

**Dataset Generation and Analysis:

 -Research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements; Analyses or meta-analyses of existing data and original articles on systems, technologies and techniques that advance data sharing and reuse to support reproducible research; Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the descriptions of the data; Tools generating large-scale data.

**Workload characterization, quantitative measurement, design and evaluation studies:

 -Characterization and evaluation of Computer and communication networks, protocols and algorithms; Wireless, mobile, ad-hoc and sensor networks, IoT applications; Computer architectures, hardware accelerators, multi-core processors, memory systems and storage networks; HPC systems; Operating systems, file systems and databases; Virtualization, data centers, distributed and cloud computing, fog and edge computing; Mobile and personal computing systems; Energy-efficient computing systems; Real-time and fault-tolerant systems; Security and privacy of computing and networked systems; Software systems and services, and enterprise applications; Social networks, multimedia systems, web services; Cyber-physical systems.

**Methodologies, abstractions, metrics, algorithms and tools:

 -Analytical modeling techniques and model validation; Workload characterization and benchmarking; Performance, scalability, power and reliability analysis; Sustainability analysis and power management; System measurement, performance monitoring and forecasting; Anomaly detection, problem diagnosis and troubleshooting; Capacity planning, resource allocation, run time management and scheduling; Experimental design, statistical analysis and simulation.

**Measurement and evaluation:

 -Evaluation methodologies and metrics; Testbed methodologies and systems; Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems; Collection and analysis of measurement data that yield new insights; Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks); Methods and tools to monitor and visualize measurement and evaluation data; Systems and algorithms that build on measurement-based findings; Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing); Reappraisal of previous empirical measurements and measurement-based conclusions; Descriptions of challenges and future directions that the measurement and evaluation community should pursue.


Paper Submission

————————

The reviewing process is double-blind. Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) and for presentation at the Bench’21 conference. All accepted and eligible papers will be considered by a panel of reviewers for the BenchCouncil Best Paper Award and the BenchCouncil Award for Excellence for Reproducible Research.

Papers must be submitted in PDF. The page limit is 12 double-column pages in TBench format for a full paper and 8 double-column pages for a short paper; neither limit includes references and author biographies. Submissions will be judged on the merit of their ideas rather than their length; we only wish to publish papers of significant scientific content. Very short papers (fewer than 4 pages) may be moved to the back matter. Such papers will neither be available for indexing nor visible as individual papers on SpringerLink, but they will be listed in the table of contents.


Submission site:

https://bench2021.hotcrp.com/

TBench LaTeX template:

http://mirrors.ctan.org/macros/latex/contrib/els-cas-templates.zip
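
For orientation, the sketch below shows a minimal double-column skeleton based on the els-cas-templates package linked above. It is only an illustrative assumption, not an official TBench sample: the class options, the front-matter commands, and the placeholder title, author, and affiliation are all stand-ins, so please follow the documentation bundled with the template for the exact requirements.

% Minimal sketch of a TBench-style submission (assumes the cas-dc class from els-cas-templates)
\documentclass[a4paper,fleqn]{cas-dc}

\begin{document}

% Running-header placeholders (hypothetical title and authors)
\shorttitle{Benchmarking Example}
\shortauthors{Doe et al.}

\title[mode=title]{An Example Bench'21 Submission}

% Placeholder author and affiliation
\author[1]{Jane Doe}
\affiliation[1]{organization={Example University}, city={Example City}, country={Example Country}}

\begin{abstract}
A short abstract summarizing the contribution.
\end{abstract}

\begin{keywords}
benchmarking \sep measurement \sep evaluation
\end{keywords}

\maketitle

\section{Introduction}
Body text goes here.

\end{document}

After unpacking the template archive so that cas-dc.cls sits next to this file, compiling with pdflatex should yield a double-column PDF; consult the template's bundled documentation and sample files for any additional required front matter.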


Awards

———————

* BenchCouncil Achievement Award ($3,000)

– This award recognizes a senior member who has made long-term contributions to benchmarking, measuring, and optimizing. The winner is eligible for the status of a BenchCouncil Fellow.

* BenchCouncil Rising Star Award ($1,000)

– This award recognizes a junior member who demonstrates outstanding potential for research and practice in benchmarking, measuring, and optimizing.

* BenchCouncil Best Paper Award ($1,000)

– This award recognizes a paper presented at the Bench conferences that demonstrates potential impact on research and practice in benchmarking, measuring, and optimizing.

* BenchCouncil Award for Excellence for Reproducible Research (each winning paper earns a $100 prize, for up to 12 papers)

– BenchCouncil incubates and hosts benchmark projects and further encourages reliable and reproducible research using benchmarks from BenchCouncil or other organizations. To this end, we present the BenchCouncil Award for Excellence for Reproducible Research to papers that use publicly available benchmarks.


Organization

—————–


General Chairs

Resit Sendag, University of Rhode Island, USA

Arne J. Berre, SINTEF Digital, Norway


Program Chairs

Lei Wang, ICT, Chinese Academy of Sciences, China

Axel Ngonga, Paderborn University, Germany

Chen Liu, Clarkson University, USA


Special Session Chair

Xiaoyi Lu, The University of California, Merced, USA


Publications Chair

Chunjie Luo, ICT, Chinese Academy of Sciences, China


Registration Chair

Fanda Fan, Chinese Academy of Sciences, China


Technical Support Chair

Ke Liu, Chinese Academy of Sciences, China


Publicity Chairs

Chen Zheng, Institute of Software, Chinese Academy of Sciences, China

Zhen Jia, Amazon, USA

Biwei Xie, Institute of Computing Technology, Chinese Academy of Sciences, China

Pengfei Chen, Sun Yat-sen University, China

Roberto V. Zicari, Z-Inspection® Initiative, Yrkeshögskolan Arcada, Helsinki, Finland & Seoul National University, South Korea


Web Chair

Guoxin Kang, Chinese Academy of Sciences, China


Bench Steering Committees

Jack Dongarra, University of Tennessee

Geoffrey Fox, Indiana University

D. K. Panda, The Ohio State University

Felix Wolf, TU Darmstadt

Xiaoyi Lu, University of California, Merced

Wanling Gao, ICT, Chinese Academy of Sciences & UCAS

Jianfeng Zhan, ICT, Chinese Academy of Sciences & BenchCouncil
