location: Diff for "BenchmarkSubmission"
Differences between revisions 29 and 67 (spanning 38 versions)
Revision 29 as of 2012-05-09 11:39:17
Size: 9147
Comment:
Revision 67 as of 2012-05-13 15:16:30
Size: 7708
Comment:
Deletions are marked like this. Additions are marked like this.
Line 3: Line 3:
<<TableOfContents>>
Line 4: Line 6:
to ASP systems and *any other system* to ASP systems and '''any other system'''
Line 26: Line 28:
proposing and/or devising new challenging benchmark problems. proposing and/or devising new challenging benchmark problems.
The submission of problems arising
from applications having practical impact is especially encouraged;
problems used in the former ASP Competitions, or variants thereof, can
be re-submitted (see [[#problem-submission|Problem Submission]]).
Line 28: Line 34:
The submission of problems arising
from applications having practical impact is strongly encouraged;
problems used in the former ASP Competitions, or variants thereof, can
be re-submitted (see [[#problem-submission-procedure|Problem Submission Procedure]]).
Benchmark authors are asked to provide the committee with a problem
specification and an instance set (or a generator thereof). Problems
will be classified and selected in order to obtain a fair balance between
several factors (including application domain, modeling difficulty,
complexity category, expressiveness, etc).
Line 33: Line 40:
Benchmark authors are expected to produce a problem specification and an instance set (or a generator thereof).
Problems will be classified and selected in order to obtain a fair balance between several factors (including complexity category, modeling difficulty, expressiveness, application domain, etc).
[[BenchmarkProblems|What is a problem?]]

<<Anchor(problem-submission-procedure)>>
== Problem Submission ==

The problem submission procedure consists of two stages:

  1. Proposal and discussion;
  2. Validation and final submission.

At the first stage, problem descriptions are submitted and made publicly
available on the competition website. Everybody can discuss
and improve the proposals on our discussion mailing list. At the second stage, the organizing committee
will validate a selection of benchmarks; each selected problem will then
be completed under the responsibility of the original contributor(s).

The submission, discussion, and selection of benchmark problems are handled via
email; details are reported below.
Line 39: Line 62:
     * '''Aug 30th, 2012''' - Problem submission deadline      * '''Aug 31st, 2012''' - Problem submission deadline
  * ''Problem Validation and Final Submission''
     * '''Sep. 30th, 2012''' - Final submission deadline
Line 41: Line 66:
  * ''Problem Validation and Final Submission''
     * TBA

<<Anchor(problem-submission-procedure)>>
=== Problem submission procedure ===

The problem submission procedure consists of two stages:

  1. Proposal and discussion;

  2. Validation and final submission.

At the first stage, problem descriptions are submitted and made publicly available on the competition website. The community is invited to discuss and improve the proposals.
At the second stage, the organizing committee will validate a selection of benchmarks; each selected problem will be completed by the original contributor(s).

The submission, discussion, and selection of benchmark problems are handled via email;
details are reported below.
Line 60: Line 68:
=== Problem submission === === How To Submit a Problem ===
Line 62: Line 70:
To submit a new problem just prepare a benchmark package (see [[#package-format|Package Format]]) and send it by email to [[mailto:benchmark_submission_REPLACE_WITH_AT_mat.unical.it|Benchmark Submission]] with subject: ''Problem specifications can be initially submitted in a'' '''preliminary
form''' (abstract); authors will be asked to provide a complete specification
later on.

To submit a new problem just prepare a benchmark package (see
[[#package-format|Package Description]]) and send it by email to
[[mailto:aspcomp_benchmarks_submission@yahoogroups.com|Benchmark Submission]]
with subject:
Line 66: Line 81:
Problem specifications can be initially submitted in a preliminary form (see [[#package-format|Package Format]]); benchmark authors will be asked to provide a complete package containing the problem specification.
Line 69: Line 82:
== Package Format == === Package Description ===
Line 71: Line 84:
  * A ''' ''complete problem submission'' ''' has to be enclosed in a single compressed package ({{{zip}}} and {{{tar.gz}}} are allowed formats), containing (see the [[#package-example|Example]]):

    1. a textual problem description where both names and arguments of input and output predicates are clearly specified;

    2. a problem encoding;

    3. some sample instances for the problem, complying with the input specification (see [[#input-output|Problem I/O and Instance Specification]]), added in order to help evaluate the specification (sample instances should not be used for the competition);

    4. a correctness checker, that is, a program (preferably written in ASP) or a script able to decide whether the output predicates occurring in some answer set form a solution to a given instance of the problem at hand (in the case of an optimization problem, the program should also be able to compute the "quality" of the answer).

  * A ''' ''preliminary submission'' ''' can be made by providing the problem name and problem specification only. (We also encourage providing an ASP encoding and sample instances.)
  * A ''' ''preliminary submission'' ''' (see the [[ProblemIOSpecification|Examples]]) requires only the problem name and description/specification; the submission of sample instances is nonetheless encouraged (sample instances will not be used for the competition). In case you want to propose the problem for the System Competition, an ASP encoding is also strongly suggested.
  * A ''' ''complete problem submission'' ''' (see the [[ProblemIOSpecification|Examples]]), to be provided if the proposal has been accepted, has to be enclosed in a single compressed package ({{{zip}}} or {{{tar.gz}}}), containing:
    1. a textual problem description, with a clear specification of the input and output formats. Names and arguments of input and output predicates must be clearly specified (see [[ProblemIOSpecification|Problem I/O]]);
    2. if the problem has been included into the [[SystemCompetition|System Track]] benchmark suite, a problem encoding in ASP;
    3. at least 50 instances, or, equivalently, a script/program that is able to generate problem instances. The format of instances can be found in the [[ProblemIOSpecification|Instance Specification]] page.
    4. a correctness checker, that is, a script or a program (preferably written in ASP whenever possible) that is able to decide whether, given an instance of the problem at hand, the output of a system actually represents a solution to the instance. If the problem has been included into the System Competition benchmark suite, the ASP program or the script must check whether the predicates occurring in some answer set provided by the system form a solution to the instance. In the case of an optimization problem, the program should also be able to compute the "quality" of the answer (see [[ProblemIOSpecification|Checker Specification]]).
    5. a "demonstration", in which the author convincingly shows that the benchmark problem can be effectively handled/solved and yet is not trivial.
Line 84: Line 93:
''The problem statement must anyway be unambiguous and include the description of input and output predicates according to the problem description format described above. We are grateful if you can already provide us with an ASP encoding and some sample instances. In particular, the submission of problem encodings often helps in disambiguating blurred specifications, so its provision is greatly appreciated.'' ''The problem statement must be unambiguous and include the description of input and output according to the problem description format described above. We would really appreciate it if you could provide us with a declarative encoding and some sample instances right away: the submission of problem encodings often helps in disambiguating blurred specifications, so its provision is encouraged.''
Line 86: Line 95:
<<Anchor(package-example)>>
==== Package Example Download ====
  * ''Sample Package Download'' [[attachment:GraphColouring.zip||&do=get]]
  * ''Preliminary Submission Example'' [[attachment:GraphColouring.txt||&do=get]]

=== Problem discussion ===
== Problem Discussion and Validation ==
Line 95: Line 99:
Researchers interested in discussing/improving problem proposals can send an e-mail to [[mailto:benchmark_submission_REPLACE_WITH_AT_mat.unical.it|Benchmark Discussion]] with subject: === Join the Discussion Group ===
Line 97: Line 101:
      DISCUSSION: First Name - Second Name - Email

''The community is strongly encouraged to participate in this important stage, in which problems are submitted and evaluated.''
All researchers interested in discussing/improving problem proposals can join the [[mailto:aspcomp_benchmarks_discussion@yahoogroups.com|Benchmark Discussion]] mailing list by visiting [[http://groups.yahoo.com/group/aspcomp_benchmarks_discussion/join|the group page]] or by sending a message to [[mailto:aspcomp_benchmarks_discussion-subscribe@yahoogroups.com|Benchmark Discussion Subscription Request]]. The whole community is encouraged to participate in this crucial stage, in which problems are submitted and evaluated.
Line 103: Line 105:
Benchmark problems submitted and discussed in the first stage are evaluated by the competition organizing committee. As for paper submissions in a conference, authors will be notified regarding the final acceptance decision. Benchmark problems that are accepted by the organizing committee (see [[#acceptance|Final acceptance of problems]]) will be then finalized by the authors. Benchmark problems submitted and discussed in the first stage are evaluated by the competition organizing committee. As for paper submissions in a conference, authors will be notified regarding the final acceptance decision.
Line 105: Line 107:
Finalized submissions shall be sent by email to [[mailto:benchmark_submission_REPLACE_WITH_AT_mat.unical.it|Benchmark Submission]] enclosed in a single compressed package (zip, rar, and tar.gz are allowed formats), containing: In this phase the contribution of benchmark authors is fundamental; submissions that are still incomplete at the end of this stage are unsuitable for the competition and will be rejected (see Section [[#acceptance|Final acceptance of problems]]). The organizing committee reserves the right to exclude benchmark problems whose provided instance family turns out to be blatantly too ''easy'' or too ''difficult'' in terms of expected evaluation time.
Line 107: Line 109:
 1. the textual problem description (same as the abstract) where both names and arguments of input and output predicates are clearly specified;
 1. the problem encoding;
 1. a correctness checker;
 1. either an instance generator or at least 50 ''hard'' instances of the problem (using only input predicates);
 1. a "demonstration" that the benchmark problem can be effectively solved on the class of instances provided, e.g. a report about tests carried out on a system of choice.

The above information shall be provided according to the [[ProblemPackageFormats|Problem Package Format]] information.

At least one of items 4 and 5 must be provided.
In this phase the contribution of benchmark authors is fundamental; submissions which are incomplete at the end of this stage are unsuitable for usage in the final competition and will be rejected (see Section [[#acceptance|Final acceptance of problems]]). The organizing committee reserves the right to exclude benchmark problems whose provided instance family turns out to be blatantly too ''easy'' or too ''difficult'' in terms of expected evaluation time.

<<Anchor(input-output)>>
=== Problem I/O and Instance Specification ===

work in progress


== Programs with Function symbols and integers ==

Programs with function symbols and integers are in principle subject to no restriction. For the sake of the Competition, and to facilitate system implementors:

  * each selected problem encoding ''P'' must provably have finitely many finite answer sets for any of its benchmark instances ''I'', that is, ''AS(P U I)'' must be a finite set of finite elements. "Proofs" of finiteness can be given in terms of membership in a known decidable class of programs with functions and/or integers, or by any other formal means.

  * a bound ''k,,P,,'' on the maximum nesting level of terms, and a bound ''m,,P,,'' on the maximum integer value appearing in answer sets originating from ''P'' must be known. That is, for any instance ''I'' and for any term ''t'' appearing in ''AS(P U I)'', the nesting level of ''t'' must not be greater than ''k,,P,,'' and, if ''t'' is an integer, it must not exceed ''m,,P,,''.

The values ''m,,P,,'' and ''k,,P,,'' will be provided in input to participant systems, when invoked on ''P'': problem designers are thus
invited to provide such values accordingly.
Line 138: Line 113:
The final inclusion of problems in the competition is subject to the conclusive approval of the competition organizing committee. Among the collected problems, the committee reserves the right to discard problems which are not compliant with one or more of the following criteria:

1. The instance set (or the instances generator output) does not comply with the competition input/output format (see [[#input-output|Problem I/O and Instance Specification]]);

2. The set of provided instances contains fewer than 50 instances, or the provided generator is missing or failing;

3. The provided (or generated) instances are trivial or not provably solvable when run on the competition hardware;
The final inclusion of problems in the competition is subject to the conclusive approval of the competition organizing committee. The committee reserves the right to discard problems which are not compliant with one or more of the following criteria:
  1. The instance set (or the instance generator output) does not comply with the competition input/output format (see [[ProblemIOSpecification|Problem I/O and Instance Specification]]);
  2. The set of provided/generated instances contains fewer than 50 instances, or the provided generator is missing/failing;
  3. The provided/generated instances are trivial or not provably solvable on the competition hardware.

Call for Benchmark Problems

The 4th Open Answer Set Programming Competition is open to ASP systems and any other system based on a declarative specification paradigm.

Participants will compete on a selected collection of declarative specifications of benchmark problems, taken from a variety of domains as well as real world applications, and instances thereof.

These include but are not limited to:

  • Deductive database tasks on large data-sets
  • Sequential and Temporal Planning
  • Classic and Applicative graph problems
  • Puzzles and combinatorics
  • Scheduling, timetabling and other resource allocation problems
  • Combinatorial Optimization problems
  • Ontology reasoning
  • Automated Theorem Proving and model checking
  • Reasoning tasks over large propositional instances
  • Constraint Programming problems
  • Other AI problems

We encourage the community to help by proposing and/or devising new challenging benchmark problems. The submission of problems arising from applications having practical impact is especially encouraged; problems used in the former ASP Competitions, or variants thereof, can be re-submitted (see Problem Submission).

Benchmark authors are asked to provide the committee with a problem specification and an instance set (or a generator thereof). Problems will be classified and selected in order to obtain a fair balance between several factors (including application domain, modeling difficulty, complexity category, expressiveness, etc).

What is a problem?

Problem Submission

The problem submission procedure consists of two stages:

  1. Proposal and discussion;
  2. Validation and final submission.

At the first stage, problem descriptions are submitted and made publicly available on the competition website. Everybody can discuss and improve the proposals on our discussion mailing list. At the second stage, the organizing committee will validate a selection of benchmarks; each selected problem will then be completed under the responsibility of the original contributor(s).

The submission, discussion, and selection of benchmark problems are handled via email; details are reported below.

Schedule

  • Problem Proposal and Discussion

    • Aug 31st, 2012 - Problem submission deadline

  • Problem Validation and Final Submission

    • Sep. 30th, 2012 - Final submission deadline

How To Submit a Problem

Problem specifications can be initially submitted in a preliminary form (abstract); authors will be asked to provide a complete specification later on.

To submit a new problem, prepare a benchmark package (see Package Description) and send it by email to Benchmark Submission with the subject:

  • SUBMISSION: Benchmark Name
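The package itself can be assembled with standard tools. The shell sketch below uses hypothetical file names (description.txt, encoding.asp, checker.sh) purely for illustration; the competition rules prescribe only the container format (zip or tar.gz), not this layout.

```shell
# Sketch: assemble a benchmark package for submission.
# All file and predicate names below are illustrative, not prescribed.
mkdir -p MyProblem/instances

echo "Textual problem description, with input/output predicates." > MyProblem/description.txt
echo "% ASP encoding of the problem (placeholder)" > MyProblem/encoding.asp
echo "node(1). node(2). edge(1,2)." > MyProblem/instances/sample01.asp
echo "#!/bin/sh" > MyProblem/checker.sh

# Bundle everything into one of the accepted container formats.
tar czf MyProblem.tar.gz MyProblem
```

The resulting MyProblem.tar.gz would then be attached to the submission email.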

Package Description

  • A preliminary submission (see the Examples) requires only the problem name and description/specification; the submission of sample instances is nonetheless encouraged (sample instances will not be used for the competition). In case you want to propose the problem for the System Competition, an ASP encoding is also strongly suggested.

  • A complete problem submission (see the Examples), to be provided if the proposal has been accepted, has to be enclosed in a single compressed package (zip or tar.gz), containing:

    1. a textual problem description, with a clear specification of the input and output formats. Names and arguments of input and output predicates must be clearly specified (see Problem I/O);

    2. if the problem has been included into the System Track benchmark suite, a problem encoding in ASP;

    3. at least 50 instances, or, equivalently, a script/program that is able to generate problem instances. The format of instances can be found in the Instance Specification page.

    4. a correctness checker, that is, a script or a program (preferably written in ASP whenever possible) that is able to decide whether, given an instance of the problem at hand, the output of a system actually represents a solution to the instance. If the problem has been included into the System Competition benchmark suite, the ASP program or the script must check whether the predicates occurring in some answer set provided by the system form a solution to the instance. In the case of an optimization problem, the program should also be able to compute the "quality" of the answer (see Checker Specification).

    5. a "demonstration", in which the author convincingly shows that the benchmark problem can be effectively handled/solved and yet is not trivial.

Note that the problem statement must be unambiguous and include the description of input and output according to the problem description format described above. We would really appreciate it if you could provide us with a declarative encoding and some sample instances right away: the submission of problem encodings often helps in disambiguating blurred specifications, so its provision is encouraged.
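As a concrete illustration of item 4, here is a minimal correctness-checker sketch in Python for a Graph Colouring style problem (the sample package provided elsewhere on this wiki is GraphColouring). The predicate names node/1, edge/2 and chosenColour/2, and the flat fact syntax, are assumptions made for this sketch only; an actual checker must follow the I/O specification of the submitted problem.

```python
import re

def parse_facts(text, name):
    """Extract the argument tuples of all facts name(a1,...,ak) in text."""
    pattern = re.compile(r'\b%s\(([^)]*)\)' % re.escape(name))
    return {tuple(a.strip() for a in m.group(1).split(','))
            for m in pattern.finditer(text)}

def check_colouring(instance_text, answer_text):
    """Return True iff the answer facts form a valid colouring of the instance."""
    nodes = {t[0] for t in parse_facts(instance_text, 'node')}
    edges = parse_facts(instance_text, 'edge')
    colouring = {}  # node -> colour, from chosenColour/2 facts in the answer
    for n, c in parse_facts(answer_text, 'chosenColour'):
        if n in colouring:          # each node must get exactly one colour
            return False
        colouring[n] = c
    if set(colouring) != nodes:     # every node must be coloured
        return False
    # adjacent nodes must get different colours
    return all(colouring[a] != colouring[b] for a, b in edges)
```

Invoked on an instance and on the facts extracted from a system's answer set, this returns True only for valid colourings; for an optimization problem a checker would additionally compute and report the solution cost.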

Problem Discussion and Validation

Benchmarks will be published in a specific section of the competition website.

Join the Discussion Group

All researchers interested in discussing/improving problem proposals can join the Benchmark Discussion mailing list by visiting the group page or by sending a message to Benchmark Discussion Subscription Request. The whole community is encouraged to participate in this crucial stage, in which problems are submitted and evaluated.

Validation and final submission stage

Benchmark problems submitted and discussed in the first stage are evaluated by the competition organizing committee. As for paper submissions in a conference, authors will be notified regarding the final acceptance decision.

In this phase the contribution of benchmark authors is fundamental; submissions that are still incomplete at the end of this stage are unsuitable for the competition and will be rejected (see Section Final acceptance of problems). The organizing committee reserves the right to exclude benchmark problems whose provided instance family turns out to be blatantly too easy or too difficult in terms of expected evaluation time.

Final acceptance of problems

The final inclusion of problems in the competition is subject to the conclusive approval of the competition organizing committee. The committee reserves the right to discard problems which are not compliant with one or more of the following criteria:

  1. The instance set (or the instance generator output) does not comply with the competition input/output format (see Problem I/O and Instance Specification);
  2. The set of provided/generated instances contains fewer than 50 instances, or the provided generator is missing/failing;
  3. The provided/generated instances are trivial or not provably solvable on the competition hardware.
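Criterion 2 requires at least 50 instances or a working generator. A generator can be a short script; the Python sketch below emits random graph instances as ASP facts (the node/1 and edge/2 predicate names are illustrative assumptions, not an official format) and seeds each instance so the family is reproducible.

```python
import random

def generate_instance(num_nodes, edge_prob, seed):
    """Emit one instance as ASP facts over node/1 and edge/2."""
    rng = random.Random(seed)  # fixed seed => reproducible instance
    facts = ['node(%d).' % i for i in range(1, num_nodes + 1)]
    for a in range(1, num_nodes + 1):
        for b in range(a + 1, num_nodes + 1):
            if rng.random() < edge_prob:
                facts.append('edge(%d,%d).' % (a, b))
    return '\n'.join(facts)

def generate_family(count=50, num_nodes=20, edge_prob=0.3):
    """Produce the required minimum of 50 instances, one per seed."""
    return [generate_instance(num_nodes, edge_prob, seed)
            for seed in range(count)]
```

In practice each generated instance would be written to its own file and bundled into the submission package.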

ASP Competition 2013: BenchmarkSubmission (last edited 2012-10-05 06:55:24 by GiovambattistaIanni)