
Rules & Scoring

Rules for System competition

Principle

The scenario of the System Competition aims at reproducing, as faithfully as possible, a setting in which the performance of a given system is measured against a problem encoding (and instances thereof) of unknown nature and complexity.

Rules

  1. The System competition is open to general-purpose ASP systems that are able to parse the two fixed formats ASP-Core and ASP-RFC (see File and Language Formats).

  2. The competition is run over a selection of problems. For each problem, a corresponding fixed encoding in ASP-Core or ASP-RFC, together with a set of benchmark instances, is chosen by the organizers (see Benchmark problems classification).

  3. Each participating solver will be launched with its default settings on each problem instance.
  4. Syntactic special-purpose solving techniques are strictly forbidden. Among syntactic solving techniques we classify switching internal solver options depending on:

    • command-line file names;
    • predicate and variable names;
    • "signature" techniques, aimed at recognizing a particular benchmark problem, such as counting the number of rules, constraints, predicates and atoms in a given encoding.
    In order to discourage such techniques, the competition committee reserves the right to introduce into the competition evaluation platform syntactic means for scrambling program encodings, e.g., random renaming of files, predicates, and variables. Furthermore, the committee reserves, in general, the right to replace official program encodings with syntactically modified versions.
  5. Semantic recognition of the program structure, by contrast, is allowed (and encouraged). Among allowed semantic recognition techniques, we classify:

    • Recognition of the class the program encoding belongs to (e.g., stratified, positive, etc.) and the consequent activation of purpose-built evaluation techniques (a minimal sketch of such a class check follows this list).
    • Recognition of general rule and program structures (e.g., common unstratified even and odd cycles, common join patterns within a rule body, etc.), provided these techniques are general and not peculiar to a given problem selected for the competition.
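
To make the distinction concrete, here is a minimal sketch (in Python, over an assumed simplified representation of ground rules as head/positive-body/negative-body predicate names, not the actual ASP-Core syntax) of the kind of semantic class recognition meant above: testing whether a program is positive and whether its predicate dependencies admit a stratification.

    def is_positive(rules):
        # A program is positive if no rule uses default negation.
        return all(not neg for (_head, _pos, neg) in rules)

    def is_stratified(rules):
        # Classic fixpoint test: stratum(head) >= stratum(p) for every positive
        # body predicate p, and stratum(head) >= stratum(n) + 1 for every negated
        # one n. If a stratum ever exceeds the number of predicates, negation
        # occurs inside a dependency cycle and the program is not stratified.
        preds = {h for (h, _, _) in rules}
        preds |= {p for (_, pos, neg) in rules for p in pos + neg}
        stratum = {p: 0 for p in preds}
        bound, changed = len(preds), True
        while changed:
            changed = False
            for head, pos, neg in rules:
                for p in pos:
                    if stratum[head] < stratum[p]:
                        stratum[head], changed = stratum[p], True
                for n in neg:
                    if stratum[head] < stratum[n] + 1:
                        stratum[head], changed = stratum[n] + 1, True
                if stratum[head] > bound:
                    return False
        return True

    # Example: p :- not q.  q :- not p.  (negation on a cycle: not stratified)
    rules = [("p", [], ["q"]), ("q", [], ["p"])]
    print(is_positive(rules), is_stratified(rules))   # False False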

Rules for the Model & Solve competition

Principle

The scenario of the Model & Solve Competition aims at reproducing, as faithfully as possible, a setting in which a team faces a problem specification and is asked, within a limited time frame, to provide a tailored solution, optimized both in terms of problem encoding and evaluation strategy.

Rules

  1. The competition organizers select a number of problem specifications, together with a set of test instances, the latter expressed in a common instance input format (basically, a set of facts in standard syntax).
  2. For each problem, teams are allowed to submit a specific solver (or a solving script) and a problem encoding.
  3. Solutions must be objectively based on a declarative specification system at their core.

Reproducibility of Competition results

The committee will not disclose submitted material until the end of the competition, although participants are free to share their own work at any moment. In order to allow transparency and reproducibility of the competition results, the participants agree that any material (system binaries, scripts, problem encodings) submitted by participants will be made public after the competition.

At the request of competitors, material will be published under an explicit 'limited usage license' (e.g., usage of binaries for academic purposes only), and can be properly watermarked (e.g., scripts and system banners can include an explicit mention of the Third ASP Competition).

Scoring

The scoring system adopted in each category of the competition is basically the one used in the first and second ASP competitions (which was mainly based on a weighted sum of the number of instances solved within the given time-bound), extended by awarding additional points that reward time performance. In particular, the scoring system has been conceived by balancing the following factors:

  1. Problems having more instances should not have a bigger scoring range. Thus, for a given problem P, a normalized score shall be awarded, obtained by averaging the score awarded to each instance.

  2. Unsound solvers and encodings are strongly discouraged. Thus, if system S outputs an incorrect answer for instance I of some problem P, this shall invalidate the whole score achieved for problem P.

  3. A system managing to solve a given problem instance sets a clear gap over systems that cannot. Thus, for each instance I of problem P, a flat reward shall be given if I is solved within the allotted time.

  4. A system is generally perceived as clearly faster if its solving time stays orders of magnitude below the maximum allowed time. Thus, similarly to SAT competition scoring, a logarithmically weighted bonus is awarded to faster systems.
  5. In the case of optimization problems, scoring should also depend on the quality of the provided solution. Thus, points are awarded for finding better solutions, taking into account the fact that small improvements in solution quality are usually obtained at the price of considerable computational effort: the bonus for a better-quality solution is thus given on an exponential weighting basis.

In general, 100 points per instance of a given benchmark problem can be earned. The final score of a solver will then consist of the sum of the scores over all benchmarks.
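
The exact formulas and weights are given on the Scoring Details page; the fragment below is only a minimal sketch of how the ingredients above combine, assuming a 50/50 split between the flat solved reward and the logarithmic time bonus and a 600-second time-bound, and leaving out the exponential quality bonus for optimization problems.

    import math

    S_FLAT, S_TIME = 50.0, 50.0   # assumed split of the 100 points per instance
    TIMEOUT = 600.0               # assumed allotted time, in seconds

    def instance_score(solved, time_s):
        # Flat reward for solving within the time-bound (factor 3), plus a
        # logarithmically weighted bonus that grows as the solving time drops
        # below the timeout (factor 4).
        if not solved:
            return 0.0
        bonus = S_TIME * (1.0 - math.log(time_s + 1.0) / math.log(TIMEOUT + 1.0))
        return S_FLAT + bonus

    def problem_score(instance_scores, any_incorrect_answer):
        # An incorrect answer invalidates the whole problem (factor 2); otherwise
        # the problem score is the average over its instances (factor 1).
        if any_incorrect_answer:
            return 0.0
        return sum(instance_scores) / len(instance_scores)

    # Example: three instances, the last one timed out.
    scores = [instance_score(True, 12.0), instance_score(True, 300.0),
              instance_score(False, 600.0)]
    print(round(problem_score(scores, any_incorrect_answer=False), 1))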

For detailed information about scoring, see Scoring Details.

Global Ranking

The global ranking is computed by awarding each ASP system a score that is the sum of its scores over all problems. The system with the highest score is proclaimed the winner of the corresponding track.

In the remote case that a competition track ends in a tie, the title is awarded ex aequo.

Policy in Case of Evaluation Errors

The policy here is unchanged with respect to the previous competition. For the sake of clarity, it is stated next.

A solver that returns an incorrect answer for an instance of a particular benchmark (e.g., answers NO ANSWER SET FOUND for a satisfiable problem instance, returns an erroneous witness, or answers OPTIMUM FOUND when the generated witness is not optimal) gets zero points as the overall score for that benchmark problem. Of course, it is not disqualified from the competition, nor is the score of other benchmark problems affected.

It is worth noting that, in some cases, the verification of certain kinds of answers might be intractable. Pragmatically, if a solver returns NO ANSWER SET FOUND for an instance, and no solver finds a witness for it, the answer is considered correct. Similarly, if a solver returns OPTIMUM FOUND for an optimization instance, and no solver finds a better witness, the answer is considered correct.
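
Phrased as a decision procedure, the pragmatic rule above reduces to refuting a claim only when some other solver's verified output contradicts it. The sketch below is illustrative only; the result-record fields are assumptions and not part of the competition infrastructure.

    def no_answer_set_is_refuted(other_results):
        # NO ANSWER SET FOUND stands unless some solver produced a verified
        # witness for the same instance.
        return any(r["verified_witness"] is not None for r in other_results)

    def optimum_is_refuted(claimed_cost, other_results):
        # OPTIMUM FOUND stands unless some solver produced a strictly better
        # (lower-cost) verified witness for the same instance.
        return any(r["witness_cost"] is not None and r["witness_cost"] < claimed_cost
                   for r in other_results)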

Dispute resolution

The competition committee holds the right to enforce

Hardware

The competition will be run on a Linux platform featuring a 4-core Intel Xeon X3430 CPU running at 2.4 GHz. Except for (non-competing) parallel solvers, all problem instances will be evaluated in single-core mode.