Call for Benchmark Problems

The first phase of the Third Answer Set Programming Competition will consist of the collection of benchmarks. Participants and researchers in the field are encouraged to help by proposing and/or devising new challenging benchmark problems, as well as by providing support for them. In particular, the submission of problems arising from applications with practical impact is strongly encouraged; problems used in the former ASP Competitions, or variants thereof, can be re-submitted.

Benchmark problems will be collected, selected and (possibly) refined with the help of the ASP community, by means of a mostly informal two-stage submission process. Benchmark problems and instances will eventually be published on the competition web site.

Schedule

  • Problem Proposal and Discussion

    • November 22nd, 2010 - EasyChair submission site is open

    • December 25th, 2010 - Problem Proposal and Discussion is closed

  • Problem Validation and Final Submission

    • January 3rd, 2011 - Notification of acceptance, list of candidate problems is out

    • January 3rd, 2011 - Problem finalization starts

    • January 10th, 2011 - Final Acceptance, problems selection is out

Problem categories

Problems will be selected and classified according to this Problem Classification.

In order to guarantee a fair balance between several factors (relative importance of the various complexity categories, modeling difficulty and expressiveness, standard vs open language) problems will be roughly selected according to the following quotas:

  • System competition:

    • Language: ASP-Core (90%) and ASP-RFC (10%)

      Type: search and query

      Complexity: P (30%), NP (50%), Beyond NP (20%)

  • Model & Solve competition:

    • Language: open

      Type: search (60%) and optimization (40%) problems

      Complexity: P (30%), NP (50%), Beyond NP (20%)

Problem submission procedure

The benchmark problem submission procedure is articulated in the following two stages:

  1. Proposal and discussion;
  2. Validation and final submission.

At the first stage, problem descriptions are submitted and made publicly available to the ASP community, which is invited to discuss and (possibly) improve the proposals. At the second stage, a selection of the proposed benchmarks is validated by the organizing committee (see Final acceptance of problems) and finalized by the contributors.

Proposal and discussion stage

The selection of benchmark problems is handled via the EasyChair system. Instructions for submitting a benchmark description and participating in the discussion follow.

Problem submission.

To submit a new problem, log in to the EasyChair system at:

A problem submission is handled just like a paper submission. The system will ask you to fill in a form, in which the title (conventional problem name), abstract (problem specification), and problem classification (see Section Benchmark problem classification) must be provided by checking the appropriate form items; EasyChair will also require the keyword fields to be filled in, so please provide some keywords.

Proposal Submission.

Problem specifications can be either partial (problem specification only) or detailed. A submission can be uploaded to EasyChair (as a paper submission) enclosed in a single compressed package (zip, rar, and tar.gz are the allowed formats), containing:

  1. a textual problem description (same as the abstract) where both names and arguments of input and output predicates are clearly specified;
  2. a problem encoding;
  3. some sample instances, that is, instances of the problem complying with the input specification (see Problem I/O and Instance Specification), provided in order to help evaluate the specification (sample instances will not be used for the competition).

  4. a correctness checker, that is, a program or script able to decide whether the output predicates occurring in some answer set form a solution to a given instance of the problem at hand (in the case of an optimization problem, the program should also be able to compute the "quality" of the answer). The checker reads from standard input the output of a system/call script (see File and Language Format for details) and writes to standard output a single line of text containing the string "OK" if the output is a solution, and the string "FAIL" otherwise. In the case of optimization problems, the string "OK" must be followed by an integer representing the witness cost. An exit code different from zero indicates a checker failure.

The source code of the correctness checker has to be included in the package; moreover, the provided software must be able to (build and) run on the Debian i686 GNU/Linux (kernel ver. 2.6.26) operating system. A checker can be a logic specification for a solver of choice: in such a case, binaries of the solver must be provided together with the checker encoding.
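As an illustration of this input/output protocol, here is a minimal checker sketch in Python (the competition does not mandate a language). The problem (graph coloring), the predicate names edge/2 and col/2, and passing the instance file as a command-line argument are all assumptions of the example, not part of the specification.

```python
# Minimal correctness-checker sketch, assuming a hypothetical graph
# coloring benchmark with input predicate edge/2 and output predicate
# col/2; reading the instance from argv[1] is also an assumption of
# this example, not mandated by the competition rules.
import re
import sys

def parse_atoms(text):
    # Extract atoms of the form pred(arg1,...,argN) from a blob of text.
    return re.findall(r'\w+\([^()]*\)', text)

def check(instance_text, output_text):
    edges = set()
    for atom in parse_atoms(instance_text):
        m = re.match(r'edge\((\w+),(\w+)\)$', atom)
        if m:
            edges.add((m.group(1), m.group(2)))
    color = {}
    for atom in parse_atoms(output_text):
        m = re.match(r'col\((\w+),(\w+)\)$', atom)
        if m:
            color[m.group(1)] = m.group(2)
    # A witness is valid iff both endpoints of every edge are colored
    # and adjacent nodes never share a color.
    return all(a in color and b in color and color[a] != color[b]
               for (a, b) in edges)

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        instance = f.read()
    # Protocol from the call: read the solver output from standard input
    # and print a single line "OK" or "FAIL"; a nonzero exit code would
    # signal a checker failure.
    print("OK" if check(instance, sys.stdin.read()) else "FAIL")
```

For an optimization problem, the "OK" line would additionally carry the integer witness cost, as required above.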

In more detail, a problem specification has to be packaged in a single compressed archive named benchmark_name-contributor_name.zip (rar/tar.gz) containing:

  1. a benchmark_name.txt file containing the textual problem description (a template is available, see below);

  2. a benchmark_encoding.asp file containing a proposed encoding for the problem (which must match the provided language classification);

  3. a folder named "checker" containing the sources of a correctness checker together with a README.txt file containing the instructions for (building and) running it;

  4. a folder named "samples" containing some instances (one instance per text file).

In this stage, the submission of correctness checkers, as well as problem encodings and sample instances, is optional and left up to participants; nonetheless, we are grateful if you can already provide some. In particular, the submission of problem encodings often helps in disambiguating blurred specifications, so providing one is greatly appreciated.

The formats for specifying both the input/output of problems and the instances are reported in Problem I/O and Instance Specification. A template package for benchmark submission can be downloaded here (step-by-step archive creation is described in the appendix at the end of this page).

We encourage you to provide a complete submission from the beginning, although abstract-only submissions are accepted in the proposal stage. For abstract-only submissions, check the corresponding box at the end of the submission page and just provide the problem statement in the abstract text field. The problem statement must in any case be unambiguous and include the description of input and output predicates according to the problem description format described above.

Problem discussion.

Benchmarks can be discussed by connecting to:

The usual comment tools provided by EasyChair will be used to post comments and discuss the submissions. Unlike the usual conference "submit, then discuss" procedure, the discussion will be ongoing while problems are being submitted.

Researchers interested in discussing/improving problem proposals will be given the virtual program committee member role in the EasyChair system. In particular, authors will be added as participants to the discussion by default; people interested only in discussing problems can send an e-mail to mailto:aspcomp2011__AT__mat.unical.it with subject:

  • DISCUSSION: First Name - Second Name - Email

in order to obtain access to the discussions. The organizing committee will take comments into account when selecting problems in the second stage.

  • The community is strongly encouraged to participate in this important moment in which problems are submitted and evaluated.

Validation and final submission stage

Benchmark problems submitted and discussed in the first stage are evaluated by the competition organizing committee. As for paper submissions to a conference, authors will be notified of the final acceptance decision. Benchmark problems that are accepted by the organizing committee (see Final acceptance of problems) will then be finalized by the authors.

Finalized submissions shall be uploaded to EasyChair enclosed in a single compressed package (zip, rar, and tar.gz are the allowed formats), containing:

  1. the textual problem description (same as the abstract) where both names and arguments of input and output predicates are clearly specified;
  2. the problem encoding;
  3. a correctness checker (see above for specifications)
  4. either an instance generator or at least 30 hard ground instances of the problem (using only input predicates);

  5. a "demonstration" that the benchmark problem can be effectively solved on the class of instances provided, e.g. a report about tests carried out on a system of choice.
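For item 4, an instance generator can be as simple as a script emitting ground facts over the input predicates. The sketch below assumes a hypothetical graph benchmark with input predicates node/1 and edge/2; the names and parameters are purely illustrative, not prescribed by the competition.

```python
# Instance-generator sketch for a hypothetical graph benchmark whose
# input vocabulary is node/1 and edge/2 (illustrative only).
import random

def make_instance(num_nodes, num_edges, seed=0):
    # A fixed seed keeps the generated instances reproducible.
    rng = random.Random(seed)
    lines = [f"node(n{i})." for i in range(num_nodes)]
    edges = set()
    # Requires num_edges <= num_nodes * (num_nodes - 1) / 2.
    while len(edges) < num_edges:
        a, b = rng.sample(range(num_nodes), 2)
        edges.add((min(a, b), max(a, b)))  # no duplicates, no self-loops
    lines += [f"edge(n{a},n{b})." for (a, b) in sorted(edges)]
    # One fact per line; the whole text goes into a single instance file.
    return "\n".join(lines) + "\n"
```

Scaling num_nodes and num_edges is then the natural knob for producing the required family of hard instances.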

In more detail, a problem specification has to be packaged in a single compressed archive named benchmark_name-contributor_name.zip (rar/tar.gz) containing:

  1. a benchmark_name.txt file containing the textual problem description (a template is available, see below);

  2. a benchmark_encoding.asp file containing a proposed encoding for the problem (that must match the provided language classification);

  3. a folder named "checker" containing the sources of a correctness checker together with a README.txt file containing the instructions for (building and) running it;
  4. a folder named "instances" containing at least 50 numbered instances (one per file) named XX-benchmark_name-MAXINT-MAXLEVEL.asp, where XX is the instance number, MAXINT is the maximum integer (0 if not relevant), and MAXLEVEL is the maximum function nesting level (0 if not relevant), see Section Programs with Function symbols and integers;

  5. a folder named "generator" containing the sources of an instance generator together with a README.txt file containing the instructions for (building and) running it; and,

  6. a demonstration.txt file in which the author provides sufficient arguments demonstrating that the benchmark problem can be effectively solved.

At least one of items 4 and 5 must be provided. In this phase the contribution of benchmark authors is fundamental: submissions which are incomplete at the end of this stage are unsuitable for use in the final competition and will be rejected (see Section Final acceptance of problems). The organizing committee reserves the right to exclude benchmark problems whose provided instance family turns out to be blatantly too easy or too difficult in terms of expected evaluation time.

A template package for benchmark submission is available here.

Problem I/O and Instance Specification

The following specifies the allowed format for the input and output of problems and for instances. Samples are available on the competition web site.

Problem Input and Output.

Benchmark problem specifications have to clearly indicate the vocabulary of input and output predicates, i.e., a list of predicate names for the input and a list of predicate names for the output.

Instance Specification.

Input instances are sequences of facts (atoms followed by the dot "." character) built only with predicates of the input vocabulary, possibly separated by spaces and line breaks, entirely saved in a text file (only one instance per file is allowed).
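A quick sanity check of this format can be sketched as follows; the vocabulary set is problem-specific and supplied by the caller, and this is only an illustrative validator, not part of the competition tooling.

```python
# Illustrative sanity check: the text must consist of facts (atoms
# terminated by a dot) built only from the declared input predicates.
import re

def validate_instance(text, input_predicates):
    # Match "pred." or "pred(args)." tokens, tolerating whitespace.
    facts = re.findall(r'([a-z]\w*)\s*(\([^()]*\))?\s*\.', text)
    names = [name for name, _args in facts]
    # Non-empty, and every predicate name belongs to the vocabulary.
    return bool(names) and all(name in input_predicates for name in names)
```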

Maximum Integer and maximum function nesting level.

These have to be provided on a per-instance basis. In particular, they are specified in the filename containing a given instance. We recall that instances must be named XX-benchmark_name-MAXINT-MAXLEVEL.asp, where XX is the instance number, MAXINT is the maximum integer (0 if not relevant), and MAXLEVEL is the maximum function nesting level (0 if not relevant).
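The naming scheme can be split mechanically; the helper below is a hypothetical illustration of how a harness might recover the four components, not part of the competition tooling.

```python
# Hypothetical helper splitting an instance filename of the prescribed
# form XX-benchmark_name-MAXINT-MAXLEVEL.asp into its four components.
import re

def parse_instance_filename(filename):
    m = re.match(r'(\d+)-(.+)-(\d+)-(\d+)\.asp$', filename)
    if not m:
        raise ValueError(f"not a valid instance filename: {filename}")
    number, name, maxint, maxlevel = m.groups()
    return int(number), name, int(maxint), int(maxlevel)
```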

Remark.

Recall that each ASP system (or solver script) will read an input instance (from standard input) and will produce an output (to standard output) according to the specifications for the input and output format of call scripts reported in the Competition Rules document.

Programs with Function symbols and integers

Programs with function symbols and integers are in principle subject to no restriction. For the sake of the competition, and to facilitate implementors of ASP-RFC and ASP-Core systems, it is prescribed that:

  • each selected problem encoding P must provably have finitely many finite answer sets for any of its benchmark instances I, that is, AS(P U I) must be a finite set of finite elements. "Proofs" of finiteness can be given in terms of membership in a known decidable class of programs with functions and/or integers, or by any other formal means.

  • a bound kP on the maximum nesting level of terms, and a bound mP on the maximum integer value appearing in answer sets originating from P, must be known. That is, for any instance I and for any term t appearing in AS(P U I), the nesting level of t must not be greater than kP and, if t is an integer, it must not exceed mP.

The values mP and kP will be provided as input to participant systems when invoked on P: problem designers are thus invited to provide such values accordingly.
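As an illustration of the bound kP, the nesting level of a ground term written as a string can be computed by tracking parenthesis depth (a plain constant has level 0). This helper is only a sketch for checking candidate values, not competition tooling.

```python
# Sketch: function nesting level of a ground term given as a string,
# computed by tracking parenthesis depth; a plain constant has level 0.
def nesting_level(term):
    depth = deepest = 0
    for ch in term:
        if ch == '(':
            depth += 1
            deepest = max(deepest, depth)
        elif ch == ')':
            depth -= 1
    return deepest
```

For example, f(g(a),b) has nesting level 2, so an encoding producing such terms would need kP >= 2.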

Final acceptance of problems

The final inclusion of problems in the competition is subject to the conclusive approval of the competition organizing committee. Among the collected problems, the committee reserves the right to discard any problem for which one or more of the following hold:

  1. The instance set (or the instances generator output) does not comply with the competition input/output format (see Section Problem I/O and Instance Specification);

  2. The set of provided instances counts fewer than 25 instances, or the provided generator is missing or failing;
  3. The provided (or generated) instances are trivial, or not provably solvable when run on the competition hardware.

Appendix:

How to create the template package

Windows Operating System
  1. Open a Text Editor (like notepad) using the Start Menu;
  2. In the Text Editor write your textual problem description;
  3. Save the file with name benchmark_name.txt;

  4. Repeat the steps 2 and 3 writing the proposed encoding for the problem and save the file with name benchmark_encoding.asp;

  5. In the same folder where you have saved the previous files, create the folder samples using the popup menu displayed by right-clicking the mouse;

  6. Use the Text Editor to create the instances and save the files of the instances inside the folder samples;

  7. Zip the created elements (files and folder), giving the archive the name benchmark_name-contributor_name.zip, by right-clicking on the created elements, pointing to Send To, and then clicking Compressed (zipped) Folder;

Linux Operating System
  1. Open the terminal console on your Linux Operating System;
  2. Open a Text Editor by typing its name in the console. For example, if you have installed gedit, just type gedit, or change to the folder containing gedit using the cd command (e.g. cd /home/user/gedit/) and type ./gedit;

  3. In the Text Editor write your textual problem description;
  4. Save the file with name benchmark_name.txt;

  5. Repeat the steps 3 and 4 writing the proposed encoding for the problem and save the file with name benchmark_encoding.asp;

  6. In the same folder where you have saved the previous files, create the folder samples writing in the console the command mkdir samples;

  7. Use the Text Editor to create the instances and save the files of the instances inside the folder samples;

  8. Zip the created elements (files and folder) using the following command:
    • tar czvf benchmark_name-contributor_name.tar.gz benchmark_name.txt benchmark_encoding.asp samples.

ASP Competition 2011: BenchmarkSubmission (last edited 2011-01-11 16:23:42 by GiovambattistaIanni)