Differences between revisions 1 and 2
Revision 1 as of 2010-11-18 17:47:30
Size: 51
Comment:
Revision 2 as of 2010-11-19 19:14:43
Size: 13376
Comment:
== Important Dates ==

=== Tentative Schedule. ===

  * ''Problem Proposal and Discussion''
     * '''November 22nd, 2010''' - EasyChair submission site is open
     * '''December 25th, 2010''' - Problem Proposal and Discussion is closed
  * ''Problem Validation and Final Submission''
     * '''January 3rd, 2011''' - Notification of acceptance, list of candidate problems is out
     * '''January 3rd, 2011''' - Problem finalization starts
     * '''January 10th, 2011''' - Final Acceptance, problems selection is out

== Call for Benchmark Problems ==

The first phase of the Third Answer Set Programming Competition will consist of the collection of benchmarks. Participants and researchers in the field are encouraged to help by proposing and/or devising new challenging benchmark problems, as well as by providing support for them. In particular, the submission of problems arising from applications with practical impact is strongly encouraged; problems used in the former ASP Competitions, or variants thereof, can be re-submitted.

Benchmark problems will be collected, selected and (possibly) refined with the help of the ASP community, by means of a mostly informal, two-stage submission process.
Benchmark problems and instances will eventually be published on the competition web site.

=== Problem submission procedure ===

The benchmark problem submission procedure is organized in the following two stages:

  1. Proposal and discussion;

  2. Validation and final submission.

At the first stage, problem descriptions are submitted and made publicly available to the ASP community, which is invited to discuss and (possibly) improve the proposals.
At the second stage, a selection of proposed benchmarks is validated by the organizing committee (see Section~\ref{sec:acceptance}), and finalized by the contributors.

==== Proposal and discussion stage ====

The selection of benchmark problems is handled via the {\em EasyChair} system. Instructions for submitting a benchmark description and participating in the discussion follow.

===== Problem submission. =====
To submit a new problem, log in to the EasyChair system at:

\url{http://www.easychair.org/conferences/?conf=aspcomp2011}

A problem submission is handled just like a paper submission.
The system requires you to fill in a form in which
''title'' (conventional problem name), ''abstract'' (problem specification),
and ''problem classification'' (see Section~\ref{sec:problem-type}) must be provided
by checking the appropriate buttons. EasyChair also requires the keyword fields to be filled in;
please provide some.

===== Proposal Submission. =====
A problem specification can be uploaded in EasyChair (as a paper submission) enclosed in a single compressed package (zip, rar, and tar.gz are the allowed formats), containing:

  1. a textual problem description (same as the abstract) where both names and arguments of input and output predicates are clearly specified;

  2. a problem encoding;

  3. some sample instances, that is, instances of the problem which comply with the input specification (see Section~\ref{sec:input-output}), added in order to help evaluate the specification (sample instances should not be used for the competition);

  4. \label{item:checker} a correctness checker, that is, a program or a script able to decide whether the output predicates occurring in some answer set form a solution to a given instance of the problem at hand (in the case of an optimization problem, the program should also be able to compute the ``quality'' of the answer).
The checker reads from standard input the output of a system/call script (see the {\em Language Specification} document for details) and writes to standard output a single line of text containing the string ``OK'' if the answer is correct, and the string ``FAIL'' otherwise. In the case of optimization problems, the string ``OK'' must be followed by an integer representing the witness cost.
An exit code different from zero indicates a problem within the checker itself.
Note that the source code of the correctness checker has to be included in the package; moreover, the provided software must be able to (build and) run on the \linux operating system.
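For illustration only, the following Python sketch shows the shape such a checker could take for a hypothetical graph 3-coloring problem; the predicate name col/2, the regular-expression parsing, and the simplification of not re-reading the instance are all assumptions, not part of the specification:

```python
import re
import sys

def check(witness: str) -> bool:
    """Decide whether a hypothetical 3-coloring witness is a solution:
    every node must receive exactly one color, and at least one atom
    must be present. (A full checker would also read the problem
    instance and verify the edge constraints.)"""
    colors = {}
    for node, color in re.findall(r"col\((\w+),(\w+)\)", witness):
        if node in colors and colors[node] != color:
            return False            # same node assigned two different colors
        colors[node] = color
    return len(colors) > 0

def main() -> int:
    # Protocol from the call: read the solver output from standard input,
    # write a single line "OK" or "FAIL" to standard output; a nonzero
    # exit code signals a problem inside the checker itself.
    try:
        witness = sys.stdin.read()
    except OSError:
        return 1
    print("OK" if check(witness) else "FAIL")
    return 0

# A real checker would end with: sys.exit(main())
```

For an optimization problem, the `print` line would instead emit ``OK'' followed by the computed witness cost.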

In more detail, a problem specification has to be packaged in a single compressed archive named ''benchmark\_name-contributor\_name.zip'' (rar/tar.gz) containing:

  1. a ''benchmark\_name.txt'' file containing the textual problem description (a template is available, see below);

  2. a ''benchmark\_encoding.asp'' file containing a proposed encoding for the problem (which must match the provided language classification);

  3. a folder named "''checker''" containing the sources of a correctness checker together with a README.txt file containing the instructions for (building and) running it;

  4. a folder named "''samples''" containing some instances (one instance per text file).

In this stage, the submission of correctness checkers, as well as of problem encodings and sample instances, is optional and left up to participants; nonetheless, we are grateful if you can already provide us with some. In particular, the submission of a problem encoding often helps in disambiguating unclear specifications, so
its provision is greatly appreciated. The formats for specifying both the input/output of problems and the instances are reported in Section~\ref{sec:input-output}.
A template package for benchmark submission can be downloaded at:

\hspace{-0.5cm}\url{http://www.mat.unical.it/aspcomp2011/files/myproblem-aspteam-proposal.zip}

Step-by-step archive creation is described below:
(description to be added)
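Until the official step-by-step description is available, the archive layout above can be assembled, for instance, with the following Python sketch; the function name and the assumption that all files already sit under one root directory are ours, not the organizers':

```python
import zipfile
from pathlib import Path

def build_package(benchmark: str, contributor: str, root: Path) -> Path:
    """Pack the proposal layout described above into one zip archive:
    benchmark_name.txt, benchmark_encoding.asp, checker/, samples/."""
    archive = root / f"{benchmark}-{contributor}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(root / f"{benchmark}.txt", f"{benchmark}.txt")
        zf.write(root / "benchmark_encoding.asp", "benchmark_encoding.asp")
        for folder in ("checker", "samples"):   # both optional in this stage
            d = root / folder
            if d.is_dir():
                for f in sorted(d.rglob("*")):
                    if f.is_file():
                        # store paths relative to the archive root
                        zf.write(f, f.relative_to(root).as_posix())
    return archive
```

Calling `build_package("myproblem", "aspteam", Path("myproblem"))` would produce `myproblem-aspteam.zip`, matching the naming convention above.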

We encourage contributors to provide a complete submission from the beginning, although abstract-only submissions are accepted in the proposal stage. For abstract-only submissions, check the corresponding box at the end of the submission page, and just provide the problem statement in the ''abstract'' text field.
The problem statement must in any case be unambiguous and include the description of input and output predicates according to the problem description format described above.

==== Problem discussion. ====

Benchmarks can be discussed by connecting to:

\url{http://www.easychair.org/conferences/?conf=aspcomp2011}

The usual commenting tools provided by EasyChair will be used to post comments and discuss the submissions.
Unlike the usual conference "submit, then discuss" procedure, the discussion will be ongoing while problems are being submitted.

Researchers interested in discussing/improving problem proposals will get the
program committee member role in the EasyChair system. In particular, authors will be added
as participants to the discussion by default; people interested only in discussing problems
can send an e-mail to {\em aspcompetion2011@mat.unical.it} with subject:

\[ DISCUSSION: First Name - Second Name - Email\]

\noindent for obtaining access to the discussions. %with the role of \quo{virtual} program committee member.
%
%The usual paper reviewing tools provided by EasyChair
%will be used to post comments and evaluate the submissions. Each participant to the discussion
%can both opt for submitting a short review with analytic values, or just leave an open comment.
%
The organizing committee will take comments into account when selecting
problems in the second stage.

\begin{center}
{\em The community is strongly encouraged to participate in this important moment in which
problems are submitted and evaluated.}
\end{center}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Validation and final submission stage}

Benchmark problems submitted and discussed in the first stage are
evaluated by the competition organizing committee.
%
As with paper submissions to a conference, EasyChair will notify the authors
of the final acceptance decision.
Benchmark problems that are accepted by the program committee
(see Section~\ref{sec:acceptance}) will then be finalized by the authors.

%The instructions for finalizing a submission are the following.
Finalized submissions shall be uploaded in EasyChair enclosed in a single compressed package
(zip, rar, and tar.gz are allowed formats),
containing:
$(i)$ the textual problem description (same as the abstract) where both names and arguments of input and output predicates are clearly specified;
$(ii)$ the problem encoding;
$(iii)$ a correctness checker, that is, a program or a script able to decide whether the output predicates occurring in some answer set form a solution to a given instance of the problem at hand (in the case of an optimization problem, the program should also be able to compute the ``quality'' of the answer).
The checker reads from standard input the output of a system/call script
(see the {\em Language Specification} document for details) and writes to standard output
a single line of text containing the string ``OK'' (followed by an integer representing the witness cost in the case of optimization problems) if the answer is correct, and the string ``FAIL'' otherwise. An exit code different from zero
indicates a problem within the checker itself.
$(iv)$ either an instance generator or at least 30 {\em hard} ground instances of the problem (using only input predicates);
$(v)$ a \quo{demonstration} that the benchmark problem can be effectively solved on the class of instances provided, e.g.
a report about tests carried out on a system of choice.
%
The source code of the correctness checker and of the (optionally provided) instance generator has to be included in the package; as before, the provided software must be able to (build and) run on the \linux operating system.

In more detail, a problem specification has to be packaged in a single compressed archive
named {\em benchmark\_name-contributor\_name.zip} (rar/tar.gz) containing:
\begin{enumerate}
\item a {\em benchmark\_name.txt} file containing the textual problem description (a template is available, see below);
\item a {\em benchmark\_encoding.asp} file containing a proposed encoding for the problem (that must match the provided language classification);
\item a folder named ``{\em checker}'' containing the sources of a correctness checker together with a README.txt file containing suitable instructions for (building and) running it;
\item a folder named ``{\em instances}'' containing at least 50 numbered instances (one per file) named XX-benchmark\_name-MAXINT-MAXLEVEL.asp
where: XX is the instance number, MAXINT is the maximum integer (0 if not relevant),
and the maximum function nesting level is MAXLEVEL (0 if not relevant), see Section~\ref{sec:MAXINT-MAXLEVEL};
\item a folder named ``{\em generator}'' containing the sources of an instance generator together with a README.txt file containing suitable instructions for (building and) running it; and,
\item a {\em demonstration.txt} file in which the author provides sufficient arguments
demonstrating that the benchmark problem can be effectively solved.
\end{enumerate}
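As a sanity check before packaging, the instance naming convention above can be validated programmatically. The following Python helper is an illustration on our part, not an official tool:

```python
import re

# Pattern for the convention XX-benchmark_name-MAXINT-MAXLEVEL.asp, where
# XX is the instance number and the last two fields are non-negative integers.
INSTANCE_NAME = re.compile(r"^(\d+)-(.+)-(\d+)-(\d+)\.asp$")

def parse_instance_name(filename: str):
    """Return (number, benchmark, maxint, maxlevel), or None if the
    filename does not follow the convention."""
    m = INSTANCE_NAME.match(filename)
    if m is None:
        return None
    xx, name, maxint, maxlevel = m.groups()
    return int(xx), name, int(maxint), int(maxlevel)
```

For example, `parse_instance_name("01-myproblem-100-0.asp")` yields instance number 1 for benchmark `myproblem`, with a maximum integer of 100 and no relevant function nesting.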

In this phase the contribution of benchmark authors is fundamental:
submissions which are incomplete at the end of this stage are unsuitable for
use in the final competition and will be rejected
(see Section~\ref{sec:acceptance}).
The organizing committee reserves the right to exclude benchmark problems whose provided instance family
turns out to be blatantly too {\em easy} or too {\em difficult} in terms of expected evaluation time.

A template package for benchmark submission is available at:

\hspace{-0.5cm}\url{http://www.mat.unical.it/aspcomp2011/files/myproblem-aspteam-finalized.zip}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Problem I/O and Instance Specification}\label{sec:input-output}

The following specifies the allowed format for the
input and output of problems and for instances. Samples are available on the competition web site.

%%%
\paragraph{Problem Input and Output.}
Benchmark problem specifications have to clearly indicate the vocabulary of
input and output predicates,
i.e., a list of predicate names for the input and a list of predicate names for the output.

%%%
\paragraph{Instance Specification.}
%Input instances and corresponding outputs are basic set of facts; a detailed specification follows.
Input instances are sequences of {\em facts} (atoms followed by the dot ``.'' character)
using only predicates of the input vocabulary, possibly separated by spaces and line breaks,
entirely saved in a text file (only one instance per file is allowed).
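For illustration, a well-formed instance file for a hypothetical problem whose input vocabulary consists of node/1 and edge/2 could read as follows (the predicate names are our assumption, not part of the specification):

```
node(1). node(2). node(3).
edge(1,2). edge(2,3).
edge(1,3).
```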

%%%
\paragraph{Maximum integer and maximum function nesting level.}
These values (see Section~\ref{sec:MAXINT-MAXLEVEL}) have to be provided on a per-instance basis. In particular, they are specified
in the name of the file containing a given instance.
We recall that instances must be named XX-benchmark\_name-MAXINT-MAXLEVEL.asp
where XX is the instance number, MAXINT is the maximum integer (0 if not relevant),
and MAXLEVEL is the maximum function nesting level (0 if not relevant).

\medskip


%%%
\paragraph{Remark.}
Recall that each ASP system (or solver script) will read an input instance from standard input and produce an output on standard output according to the specifications for the input and output format of call scripts reported in the {\em Competition Rules} document.

ASP Competition 2011: BenchmarkSubmission (last edited 2011-01-11 16:23:42 by GiovambattistaIanni)