2022 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench22)

**Bench22 CFP: https://www.benchcouncil.org/bench22/cfp.html**

Sponsored and organized by the International Open Benchmark Council (BenchCouncil), the Bench conference encompasses a wide range of topics in benchmarking, measurement, and evaluation methods and tools. Bench’s multi-disciplinary emphasis provides an ideal environment for developers and researchers from the architecture, system, algorithm, and application communities to discuss practical and theoretical work covering workload characterization, benchmarks and tools, evaluation, measurement and optimization, and dataset generation. The Bench’22 conference invites manuscripts describing original work in the area of benchmarking and evaluation methods and tools in Big Data, Artificial Intelligence, High-Performance Computing, and Computing Architectures. All accepted papers will be presented at the Bench’22 conference and published by Springer LNCS (indexed by EI). Distinguished papers will be recommended to and published by the BenchCouncil Transactions on Benchmarks, Standards and Evaluation (TBench).

Important Dates

Full Papers: July 28, 2022 at 11:59 PM AoE

Notification: September 6, 2022 at 11:59 PM AoE

Final Papers Due: October 11, 2022 at 11:59 PM AoE

Awards

BenchCouncil Achievement Award ($3,000)

--- This award recognizes a senior member who has made long-term contributions to benchmarking, measuring, and optimizing. The winner is eligible for the status of a BenchCouncil Fellow.

BenchCouncil Rising Star Award ($1,000)

--- This award recognizes a junior member who demonstrates outstanding potential for research and practice in benchmarking, measuring, and optimizing.

BenchCouncil Best Paper Award ($1,000)

--- This award recognizes a paper presented at the Bench conferences, which demonstrates potential impact on research and practice in benchmarking, measuring, and optimizing.

BenchCouncil Distinguished Doctoral Dissertation Award ($2,000)

--- This award recognizes and encourages superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluation, and optimization. This year, the award has two tracks: the BenchCouncil Distinguished Doctoral Dissertation Award in Computer Architecture ($1,000) and the BenchCouncil Distinguished Doctoral Dissertation Award in Other Areas ($1,000). From the submissions to each track, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench’22 conference and to contribute research articles to the BenchCouncil Transactions on Benchmarks, Standards and Evaluation. Finally, one of the four finalists in each track will receive the award, which carries a $1,000 honorarium.

Call for Papers

We solicit papers describing original and previously unpublished work. The topics of interest include, but are not limited to, the following.

**Benchmark and standard specifications, implementations, and validations of:**

• Big Data
• Artificial intelligence (AI)
• High-performance computing (HPC)
• Machine learning
• Big scientific data
• Datacenters
• Cloud
• Warehouse-scale computing
• Mobile robotics
• Edge and fog computing
• Internet of Things (IoT)
• Blockchain
• Data management and storage
• Financial domains
• Education domains
• Medical domains
• Other application domains

**Data:**

• Detailed descriptions of research or industry data sets, including the methods used to collect the data and technical analyses supporting the quality of the measurements.
• Analyses or meta-analyses of existing data, and original articles on systems, technologies, and techniques that advance data sharing and reuse to support reproducible research.
• Evaluations of the rigor and quality of the experiments used to generate data and the completeness of the data's descriptions.
• Tools that generate large-scale data while preserving the data's original characteristics.

**Workload characterization, quantitative measurement, design, and evaluation studies of:**

• Computer and communication networks, protocols, and algorithms
• Wireless, mobile, ad-hoc and sensor networks, IoT applications
• Computer architectures, hardware accelerators, multi-core processors, memory systems, and storage networks
• HPC
• Operating systems, file systems, and databases
• Virtualization, data centers, distributed and cloud computing, fog and edge computing
• Mobile and personal computing systems
• Energy-efficient computing systems
• Real-time and fault-tolerant systems
• Security and privacy of computing and networked systems
• Software systems and services, and enterprise applications
• Social networks, multimedia systems, web services
• Cyber-physical systems, including the smart grid

**Methodologies, abstractions, metrics, algorithms, and tools for:**

• Analytical modeling techniques and model validation
• Workload characterization and benchmarking
• Performance, scalability, power, and reliability analysis
• Sustainability analysis and power management
• System measurement, performance monitoring and forecasting
• Anomaly detection, problem diagnosis, and troubleshooting
• Capacity planning, resource allocation, run-time management and scheduling
• Experimental design, statistical analysis and simulation

**Measurement and evaluation:**

• Evaluation methodologies and metrics
• Testbed methodologies and systems
• Instrumentation, sampling, tracing and profiling of large-scale, real-world applications and systems
• Collection and analysis of measurement data that yield new insights
• Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks)
• Methods and tools to monitor and visualize measurement and evaluation data
• Systems and algorithms that build on measurement-based findings
• Advances in data collection, analysis and storage (e.g., anonymization, querying, sharing)
• Reappraisal of previous empirical measurements and measurement-based conclusions
• Descriptions of challenges and future directions that the measurement and evaluation community should pursue

**Optimization methodologies and tools**

Bench'22 Submission

Bench'22 Submission Site: https://bench2022.hotcrp.com/

Papers must be submitted in PDF. For a full paper, the page limit is 15 pages in the LNCS format, not including references. For a short paper, the page limit is 8 pages in the LNCS format, not including references. Submissions will be judged on the merit of the ideas rather than on length. After the conference, the proceedings will be published by Springer LNCS (indexed by EI). Note that the LNCS format is the final format for publication. Distinguished papers will be recommended to and published by the BenchCouncil Transactions on Benchmarks, Standards and Evaluation (TBench).

At least one author must pre-register for the symposium, and at least one author must attend the symposium to present the paper. Papers for which no author is pre-registered will be removed from the proceedings.

Committees

General Co-Chairs

• Emmanuel Jeannot, INRIA, France

• Peter Mattson, Google, USA

• Wanling Gao, University of Chinese Academy of Sciences, China

Program Co-Chairs

• Chunjie Luo, ICT, Chinese Academy of Sciences, China

• Ce Zhang, ETH Zurich, Switzerland

• Ana Gainaru, Oak Ridge National Laboratory, USA

Workshop and Tutorial Co-Chairs

• Kai Shu, Illinois Institute of Technology, USA

• Reza Zafarani, Syracuse University, USA

Publicity Co-Chairs

• David Kanter, MLCommons

• Rui Ren, Beijing Institute of Open Source Chip

• Zhen Jia, Amazon

Web Co-Chairs

• Jiahui Dai, BenchCouncil

• Qian He, Beijing Institute of Open Source Chip

Bench Steering Committees

• Prof. Dr. Jack Dongarra, University of Tennessee

• Prof. Dr. Geoffrey Fox, Indiana University

• Prof. Dr. D. K. Panda, The Ohio State University

• Prof. Dr. Felix Wolf, TU Darmstadt

• Prof. Dr. Xiaoyi Lu, University of California, Merced

• Dr. Wanling Gao, ICT, Chinese Academy of Sciences & UCAS

• Prof. Dr. Jianfeng Zhan, ICT, Chinese Academy of Sciences & BenchCouncil