1st Championship Value Prediction (CVP-1)

In conjunction with ISCA-45: http://iscaconf.org/isca2018/index.html


The workshop on computer architecture competitions (JWAC) is a forum for holding competitions that evaluate computer architecture research topics. The sixth JWAC workshop is organized around a competition for value prediction algorithms: the Championship Value Prediction (CVP). Contestants are invited to submit their value prediction code to this competition. Contestants will be given a fixed storage budget in which to implement their best predictors on a common evaluation framework provided by the organizing committee.

Objective

The goal of this competition is to compare different value prediction algorithms in a common framework. Predictors will be evaluated on all instructions that produce output values, and must be implemented within a fixed storage budget as specified in the competition rules. The simple and transparent evaluation process enables dissemination of results and techniques to the larger computer architecture community and allows independent verification of results.
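
As a rough illustration of what such a framework asks of a contestant, the sketch below shows a minimal last-value predictor behind a hypothetical callback interface. The class and method names (LastValuePredictor, getPrediction, updatePredictor) are illustrative only and are not the official CVP-1 API; consult the distributed kit for the actual interface.

    #include <array>
    #include <cstddef>
    #include <cstdint>

    // Illustrative sketch only -- not the official CVP-1 interface.
    class LastValuePredictor {
    public:
        // Called when an instruction needs a prediction. Returns true and
        // fills 'value' if the predictor chooses to predict; returns false
        // to abstain for this instruction.
        bool getPrediction(uint64_t pc, uint64_t &value) const {
            const Entry &e = table_[index(pc)];
            if (!e.valid || e.tag != tag(pc)) return false;
            value = e.lastValue;
            return true;
        }

        // Called at retirement with the architecturally correct result.
        // Training only on retired values keeps the predictor causal.
        void updatePredictor(uint64_t pc, uint64_t actualValue) {
            Entry &e = table_[index(pc)];
            e.valid = true;
            e.tag = tag(pc);
            e.lastValue = actualValue;
        }

    private:
        struct Entry {
            bool valid = false;      // 1 bit of state
            uint16_t tag = 0;        // 16-bit partial tag
            uint64_t lastValue = 0;  // 64-bit stored value
        };
        static constexpr size_t kEntries = 512;  // fixed size, per the budget rules
        static size_t index(uint64_t pc) {
            return static_cast<size_t>((pc >> 2) % kEntries);
        }
        static uint16_t tag(uint64_t pc) { return static_cast<uint16_t>(pc >> 11); }

        std::array<Entry, kEntries> table_;  // direct-mapped value table
    };

Note the fixed-size table: a dynamically growing structure such as a hash map would make the storage budget impossible to audit.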

Prizes

The championship will have three tracks, each targeting value predictors with a different storage budget: 8KB, 32KB, and unlimited size. In each track an additional budget of 2048 bits is allowed for tracking extra information used by the predictor (e.g., global history). The top performer in each track will receive a trophy commemorating his/her triumph (or another prize to be determined later). Top submissions will be invited to present at the workshop, where the results will be announced. All source code, write-ups, and performance results will be made publicly available through the CVP-1 website.
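
To make the budgets concrete: 8KB corresponds to 65,536 bits of predictor state, plus the 2048-bit allowance. A write-up should account for every bit of state, along the lines of the hypothetical tally below (the entry format matches the illustrative 512-entry table sketched earlier; actual formats are up to each contestant).

    #include <cstdio>

    // Hypothetical storage accounting for the 8KB track.
    int main() {
        const long entries     = 512;
        const long tagBits     = 16;             // per-entry partial tag
        const long validBits   = 1;              // per-entry valid flag
        const long valueBits   = 64;             // per-entry stored value
        const long tableBits   = entries * (tagBits + validBits + valueBits);

        const long budgetBits  = 8 * 1024 * 8;   // 8KB track: 65,536 bits
        const long historyBits = 2048;           // extra allowance (e.g., global history)

        printf("table: %ld bits; budget: %ld bits (+%ld for history)\n",
               tableBits, budgetBits, historyBits);
        return tableBits <= budgetBits ? 0 : 1;  // non-zero exit if over budget
    }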

Submission Requirements

Each submission should include an abstract, a write-up, and predictor code. The organizers must be able to simulate your predictor with a reasonable amount of memory (not exceeding 32GB) and within six hours of simulation time. Your predictor must not violate causality (i.e., it cannot use future information to predict the current value). Furthermore, you are not allowed to spawn additional threads from your predictor code.

Competition Rules

The competition will proceed as follows. Contestants are responsible for implementing and evaluating their algorithms in the distributed framework. An initial set of 136 traces (30 million instructions each) will be released to the competitors with the distributed framework. Submissions will be compiled and run with the original version of the framework. Quantitatively assessing the cost/complexity of predictors is difficult; to simplify the review process, maximize transparency, and minimize the role of subjectivity in selecting a champion, CVP-1 will make no attempt to assess the cost/complexity of predictor algorithms.

All predictors must be implemented within the constraints of the 8KB, 32KB, or unlimited budget category; competitors may choose not to compete in a particular budget category. In each budget category an additional budget of 2048 bits is allowed for tracking extra information used by the predictor (e.g., global history). Clear documentation, in the code as well as in the paper write-up, must be provided to show that these constraints are met. Predictors will be scored on overall cycle count over a final evaluation trace set chosen by the organizing committee, which will not be the same set of traces released to the competitors with the evaluation framework; the final evaluation traces will not be made available to the public after the final evaluation. The arithmetic mean of the cycle counts over the final evaluation traces will be used as the final score of a predictor. Predictors are not allowed to “profile” traces in order to adjust their algorithms for a particular trace or group of traces, nor are they allowed to violate causality (i.e., they cannot use future information to predict the current value). Furthermore, competitors are not allowed to spawn additional threads from the predictor code.
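
In other words, the final score is simply the arithmetic mean of per-trace cycle counts, with lower being better. A minimal sketch of that rule (the cycle counts below are made-up placeholders, not real results):

    #include <cstdio>
    #include <vector>

    // Final score = arithmetic mean of per-trace cycle counts (lower is better).
    double finalScore(const std::vector<double> &cycleCounts) {
        double sum = 0.0;
        for (double c : cycleCounts) sum += c;
        return sum / cycleCounts.size();
    }

    int main() {
        // Hypothetical per-trace cycle counts for illustration only.
        std::vector<double> cycles = {1.2e7, 9.5e6, 1.8e7};
        printf("score = %.0f cycles\n", finalScore(cycles));
        return 0;
    }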

Acceptance Criteria

In the interest of assembling a quality program for workshop attendees and future readers, there will be an overall selection process, of which performance ranking is the primary component. To be considered, submissions must conform to the submission requirements described above. Submissions will be selected to appear in the workshop on the basis of performance ranking, novelty, practicality of the predictor, and overall quality of the paper and commented code. Novelty is not a strict requirement; for example, a contestant may submit his/her previously published design or make incremental enhancements to a previously proposed design. In such cases, overall cycle count is a heavily weighted criterion, as is overall quality of the paper (for example, analysis of new results on the common framework).


CVP-1 Kit (download and directions): https://www.microarch.org/cvp1/

Important Dates:

Evaluation framework available: February 11th, 2018

Submissions due: April 1st, 2018, 11:59 PM CST

Acceptance notification: April 10th, 2018

Camera-ready version due: May 31st, 2018

Results announced: at the ISCA workshop (Sunday morning, June 3rd, 2018)
