
Call for Benchmark Problems

The 4th Open Answer Set Programming Competition is open to ASP systems and any other system based on a declarative specification paradigm.

Participants will compete on a selected collection of declarative specifications of benchmark problems, taken from a variety of domains as well as real-world applications, and instances thereof.

These include but are not limited to:

  • Deductive database tasks on large data-sets
  • Sequential and Temporal Planning
  • Classic and Applicative graph problems
  • Puzzles and combinatorics
  • Scheduling, timetabling and other resource allocation problems
  • Combinatorial Optimization problems
  • Ontology reasoning
  • Automated Theorem Proving and model checking
  • Reasoning tasks over large propositional instances
  • Constraint Programming problems
  • Other AI problems

We encourage everyone to help by proposing and/or devising new challenging benchmark problems. The submission of problems arising from applications with practical impact is especially encouraged; problems used in former ASP Competitions, or variants thereof, can be re-submitted (see Problem Submission).

Benchmark authors are asked to provide the committee with a problem specification and an instance set (or a generator thereof). Problems will be classified and selected in order to obtain a fair balance between several factors (including application domain, modeling difficulty, complexity category, expressiveness, etc.).

Problem Submission

The problem submission procedure consists of two stages:

  1. Proposal and discussion;
  2. Validation and final submission.

At the first stage, problem descriptions are submitted and made publicly available on the competition website. Everybody can discuss and improve the proposals on our discussion mailing list. At the second stage, the organizing committee will validate a selection of benchmarks; each selected problem will then be completed under the responsibility of the original contributor(s).

Submission, discussion and selection of benchmark problems are handled via email; details are given below.

Schedule

  • Problem Proposal and Discussion

    • Aug. 31st, 2012 - Problem submission deadline

  • Problem Validation and Final Submission

    • Sep. 30th, 2012 - Final submission deadline

How To Submit a Problem

Problem specifications can be initially submitted in a preliminary form (abstract); authors will be asked to provide a complete specification later on.

To submit a new problem, prepare a benchmark package (see Package Description) and send it by email to Benchmark Submission with the subject:

  • SUBMISSION: Benchmark Name

Package Description

  • A preliminary submission (see the Examples) requires only the problem name and a description/specification; the submission of sample instances is nonetheless encouraged (sample instances will not be used for the competition). If you want to propose the problem for the System Competition, an ASP encoding is also strongly suggested.

  • A complete problem submission (see the Examples), to be provided if the proposal has been accepted, has to be enclosed in a single compressed package (zip or tar.gz), containing:

    1. a textual problem description, with a clear specification of the input and output formats. Names and arguments of input and output predicates must be clearly specified (see Problem I/O);

    2. if the problem has been included in the System Track benchmark suite, a problem encoding in ASP;

    3. at least 50 instances or, equivalently, a script/program able to generate problem instances. The format of instances can be found on the Instance Specification page.

    4. a correctness checker, that is, a script or a program (preferably written in ASP whenever possible) able to decide whether, given an instance of the problem at hand, the output of a system actually represents a solution to that instance. If the problem has been included in the System Competition benchmark suite, the ASP program or script must check whether the predicates occurring in some answer set provided by the system form a solution to the instance. In the case of an optimization problem, the program should also be able to compute the "quality" of the answer (see Checker Specification).

    5. a "demonstration", in which the author convincingly shows that the benchmark problem can be effectively handled/solved and is not trivial.
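
Item 3 above can be met with a small generator script. The following sketch writes random-graph instances as ASP facts; the predicates node/1 and edge/2, the instance sizes, and the file naming are purely illustrative assumptions, not part of the competition format:

```python
import random

def generate_instance(num_nodes, edge_prob, seed):
    """Produce one random-graph instance as a string of ASP facts."""
    rng = random.Random(seed)
    facts = ["node({}).".format(i) for i in range(1, num_nodes + 1)]
    for a in range(1, num_nodes + 1):
        for b in range(a + 1, num_nodes + 1):
            if rng.random() < edge_prob:
                facts.append("edge({},{}).".format(a, b))
    return "\n".join(facts) + "\n"

def write_instances(count=50, start_size=20):
    # The competition asks for at least 50 instances; growing the size
    # with the index gives a family spanning a range of difficulties.
    for k in range(count):
        text = generate_instance(start_size + k, 0.3, seed=k)
        with open("instance{:03d}.asp".format(k + 1), "w") as f:
            f.write(text)
```

Scaling the instance size across the family helps avoid the "too easy / too difficult" exclusion applied during validation.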

Note: the problem statement must be unambiguous and include the description of input and output according to the problem description format described above. We would appreciate it if you could provide a declarative encoding and some sample instances right away: problem encodings often help disambiguate blurred specifications, so their provision is encouraged.
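
The correctness checker of item 4 can be a short script. The sketch below is for a hypothetical graph 3-coloring benchmark; the predicates node/1, edge/2 and color/2 and the flat atom syntax are illustrative assumptions, and a real checker would follow the Checker Specification page:

```python
import re

FACT = re.compile(r"(\w+)\(([^)]*)\)")

def parse_atoms(text):
    """Parse atoms such as edge(1,2) into (name, args) tuples."""
    return [(m.group(1), tuple(m.group(2).split(",")))
            for m in FACT.finditer(text)]

def check_coloring(instance, answer):
    """Return True iff `answer` assigns exactly one color to every node
    and no edge joins two nodes of the same color."""
    atoms = parse_atoms(instance)
    nodes = {a[0] for name, a in atoms if name == "node"}
    edges = [a for name, a in atoms if name == "edge"]
    color = {}
    for name, args in parse_atoms(answer):
        if name == "color":
            node, c = args
            if node in color:      # a node colored twice is invalid
                return False
            color[node] = c
    if set(color) != nodes:        # every node must receive a color
        return False
    return all(color[a] != color[b] for a, b in edges)
```

For an optimization problem, the same script would additionally compute and report the cost of the answer instead of only a yes/no verdict.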

Problem Discussion and Validation

Benchmarks will be published in a specific section of the competition website.

All researchers interested in discussing/improving problem proposals can subscribe to the Benchmark Discussion mailing list. All communities are encouraged to participate in this crucial stage, in which problems are submitted and evaluated.

Validation and final submission stage

Benchmark problems submitted and discussed in the first stage are evaluated by the competition organizing committee. As with paper submissions to a conference, authors will be notified of the final acceptance decision.

In this phase the contribution of benchmark authors is fundamental; submissions that are still incomplete at the end of this stage are unsuitable for the competition and will be rejected (see Section Final acceptance of problems). The organizing committee reserves the right to exclude benchmark problems whose instance family turns out to be blatantly too easy or too difficult in terms of expected evaluation time.

Final acceptance of problems

The final inclusion of problems in the competition is subject to the conclusive approval of the competition organizing committee. The committee reserves the right to discard problems for which one or more of the following conditions hold:

  1. The instance set (or the output of the instance generator) does not comply with the competition input/output format (see Problem I/O and Instance Specification);

  2. The set of provided/generated instances contains fewer than 50 instances, or the provided generator is missing or failing;
  3. The provided/generated instances are trivial or not provably solvable on the competition hardware.
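
As a rough self-check against condition 2, authors might count their instance files before submitting. This is only a sketch; the ".asp" extension and flat directory layout are assumptions, not a prescribed package structure:

```python
import glob
import os

def count_instances(directory, pattern="*.asp"):
    """Count instance files in the package directory."""
    return len(glob.glob(os.path.join(directory, pattern)))

def package_ok(directory, minimum=50):
    """Check that the package ships at least `minimum` instances
    (a working generator, the allowed alternative, is not checked here)."""
    n = count_instances(directory)
    if n < minimum:
        print("only {} instance files found; at least {} required"
              .format(n, minimum))
        return False
    return True
```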

ASP Competition 2013: BenchmarkSubmission (last edited 2012-10-05 06:55:24 by GiovambattistaIanni)