Rules & Scoring

Rules for the System competition

  1. The System competition is open to general-purpose ASP systems, able to parse the two fixed formats ASP-Core and ASP-RFC (see File and Language Formats).

  2. The competition is run over a selection of problems. For each problem, a corresponding, fixed encoding in ASP-Core or ASP-RFC, together with a set of benchmark instances, is chosen by the organizers (see Benchmark problems classification).

  3. Each participating solver will be launched with its default settings on each problem instance.
  4. Syntactic special-purpose solving techniques are strictly forbidden. Among syntactic solving techniques, we classify switching internal solver options depending on:

    • command-line file names;
    • predicate and variable names;
    • "signature" techniques, aimed at recognizing a particular benchmark problem, such as counting the number of rules, constraints, predicates and atoms in a given encoding.
    In order to discourage such techniques, the competition committee reserves the right to introduce into the competition evaluation platform syntactic means for scrambling program encodings, e.g., random renaming of files, predicates and variables. Furthermore, the committee reserves the right, in general, to replace official program encodings with syntactically modified versions.
  5. The semantic recognition of the program structure is, instead, allowed (and encouraged). Among allowed semantic recognition techniques, we classify:

    • Recognition of the class the program encoding belongs to (e.g., stratified, positive, etc.) and the consequent activation of dedicated evaluation techniques (see the sketch after this list).
    • Recognition of general rule and program structures (e.g., common unstratified even and odd cycles, common join patterns within a rule body, etc.), provided these techniques are general and not peculiar to a given problem selected for the competition.
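
A minimal illustration of the first kind of allowed recognition, as a sketch only: the Rule representation, the predicate names and the dependency-graph construction below are hypothetical and not part of any official checker. A program is positive when no rule uses negation as failure, and stratified when no negative dependency lies on a cycle of the predicate dependency graph.

    from collections import namedtuple

    # Hypothetical rule representation: head predicate plus the predicates
    # occurring positively / under negation in the body.
    Rule = namedtuple("Rule", ["head", "pos_body", "neg_body"])

    def is_positive(program):
        """A program is positive if no rule uses negation as failure."""
        return all(not rule.neg_body for rule in program)

    def is_stratified(program):
        """A program is stratified if no negative dependency occurs on a
        cycle of the predicate dependency graph (head depends on body)."""
        edges = {}       # predicate -> predicates it depends on
        neg_edges = []   # negative dependencies (head, body) to test
        for rule in program:
            deps = edges.setdefault(rule.head, set())
            deps.update(rule.pos_body)
            deps.update(rule.neg_body)
            neg_edges.extend((rule.head, b) for b in rule.neg_body)

        def reaches(src, dst):
            # Plain DFS over the dependency graph.
            stack, seen = [src], set()
            while stack:
                p = stack.pop()
                if p == dst:
                    return True
                if p not in seen:
                    seen.add(p)
                    stack.extend(edges.get(p, ()))
            return False

        # A negative edge head -> body lies on a cycle iff body reaches head.
        return not any(reaches(body, head) for head, body in neg_edges)

    # Example: p :- not q.   q :- not p.   (an unstratified even cycle)
    program = [Rule("p", [], ["q"]), Rule("q", [], ["p"])]
    assert not is_positive(program) and not is_stratified(program)

Crucially, such a check looks only at the structure of the rules, never at file or predicate names and never at which competition problem is being encoded, which is what keeps it on the allowed side of the rules above.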

Rules for the Model & Solve competition

  1. The competition organizers select a number of problem specifications, together with a set of test instances, the latter expressed in a common instance input format (basically, a set of facts in standard syntax).
  2. For each problem, teams are allowed to submit a specific solver (or a solving script) and a problem encoding.
  3. Solutions must be objectively based on a declarative specification system.

Scoring

The scoring system adopted in each category of the competition is basically the one used in the first and second ASP competitions (mainly based on a weighted sum of the number of instances solved within the given time bound), extended with additional scores rewarding time performance. In particular, the scoring system has been conceived by balancing the following factors:

  1. Problems having more instances should not have a bigger scoring range. Thus, for a given problem P, a normalized score shall be awarded, obtained by averaging the score awarded to each instance.

  2. Non-sound solvers and encodings are strongly discouraged. Thus, if system S outputs an incorrect answer for instance I of some problem P, this invalidates the whole score achieved for problem P.

  3. A system managing to solve a given problem instance sets a clear gap over systems that cannot. Thus, for each instance I of problem P, a flat reward shall be given if I is solved within the allotted time.

  4. A system is generally perceived as clearly faster if its solving time stays orders of magnitude below the maximum allowed time. Thus, similarly to SAT competition scoring, a logarithmically weighted bonus is awarded to faster systems.
  5. In the case of optimization problems, scoring should also depend on the quality of the provided solution. Thus, points are awarded for finding better solutions, taking into account the fact that small improvements in solution quality are usually obtained at the price of a significant computational effort.

In general, 100 points per instance of a given benchmark problem can be earned. The final score of a solver then consists of the sum of its scores over all benchmarks (see the sketch below).
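
A minimal sketch of how the factors above can combine, assuming a hypothetical split of the 100 per-instance points between the flat reward and the logarithmic time bonus, and a hypothetical time limit; the actual weights and formulas are those given in Scoring Details, and the extra reward for solution quality on optimization problems is left out.

    import math

    TIME_LIMIT = 600.0   # hypothetical per-instance time-out, in seconds
    FLAT_REWARD = 50.0   # hypothetical split of the 100 per-instance points
    LOG_BONUS = 50.0     # between the flat part and the time-dependent part

    def instance_score(solved, runtime):
        """Flat reward for solving within the time-out, plus a logarithmically
        weighted bonus that grows as the runtime drops below the limit."""
        if not solved:
            return 0.0
        bonus = LOG_BONUS * (1.0 - math.log(runtime + 1.0) / math.log(TIME_LIMIT + 1.0))
        return FLAT_REWARD + bonus

    def problem_score(results):
        """Average over the instances of one problem, so every problem is
        worth at most 100 points regardless of how many instances it has."""
        return sum(instance_score(s, t) for s, t in results) / len(results)

    def solver_score(per_problem_results):
        """Final score: the sum of the normalized per-problem scores."""
        return sum(problem_score(r) for r in per_problem_results)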

For detailed information about scoring, see Scoring Details

Global Ranking

The global ranking is computed by awarding each ASP system a score that is the sum of its scores in each category. The system with the highest score is proclaimed the winner.

In the remote case of a tie, the tied systems will be ranked ex aequo.
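
A sketch of the global ranking computation, assuming a hypothetical {system: {category: score}} bookkeeping layout; systems with equal totals share the same rank.

    from itertools import groupby

    def global_ranking(category_scores):
        """category_scores: {system: {category: score}} (hypothetical layout).
        A system's global score is the sum of its per-category scores;
        systems with equal totals share the same rank (ex aequo)."""
        totals = sorted(((sum(scores.values()), system)
                         for system, scores in category_scores.items()),
                        key=lambda pair: (-pair[0], pair[1]))
        ranking, rank = [], 1
        for total, group in groupby(totals, key=lambda pair: pair[0]):
            tied = [system for _, system in group]
            ranking.append((rank, total, tied))
            rank += len(tied)
        return ranking

    # A two-category example that ends in a tie:
    print(global_ranking({"A": {"system": 310.0, "model&solve": 250.0},
                          "B": {"system": 280.0, "model&solve": 280.0}}))
    # -> [(1, 560.0, ['A', 'B'])]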

Policy in Case of Errors

The policy here is unchanged with respect to the previous competition. For the sake of clarity, it is stated next.

A solver that returns an incorrect answer for an instance of a particular benchmark (e.g., answers NO ANSWER SET FOUND for a satisfiable problem instance, returns an erroneous witness, or answers OPTIMUM FOUND when the generated witness is not optimal) gets zero points as its overall score for that benchmark problem. Of course, it is not disqualified from the competition, nor is the score of other benchmark problems affected.

It is worth noting that, in some cases, the verification of certain kinds of answers might be intractable. Pragmatically, if a solver returns NO ANSWER SET FOUND for an instance, and no solver finds a witness for it, this is considered correct. Similarly, if a solver returns OPTIMUM FOUND for an optimization instance, and no solver finds a better witness, this is considered correct.
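
The policy can be summarized by the following sketch; the flags and function names are hypothetical, for illustration only.

    def incorrect_answer(claim, wrong_witness, better_witness_exists):
        """Decide whether an answer counts as incorrect under this policy.
        Witnessed answers are checked directly; NO ANSWER SET FOUND and
        OPTIMUM FOUND claims are presumed correct unless some solver found
        a witness (resp. a better witness) for the same instance."""
        if wrong_witness:
            return True
        if claim in ("NO ANSWER SET FOUND", "OPTIMUM FOUND"):
            return better_witness_exists
        return False

    def problem_score_with_policy(instance_scores, any_incorrect):
        """A single incorrect answer zeroes the whole score for that benchmark
        problem, without disqualifying the solver or affecting its score on
        other benchmark problems."""
        return 0.0 if any_incorrect else sum(instance_scores) / len(instance_scores)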

Parallel solvers

Given that the interest in parallel ASP systems is legitimately increasing, we encourage the submission of parallel systems as non-competing participants in both competition tracks.

Awards

Rankings of participants will be computed according to the Rules & Scoring, awarding a winner for the System competition and a winner for the Model & Solve competition.
