'''Solver Competition'''

The regulations of the Solver Competition are conceived taking into account the following considerations:

  1. Many families of formalisms, which can to a large extent be considered neighbors of the ASP community, have reached a significant level of language standardization, ranging from the Constraint Handling Rules (CHR) family \cite{fruh-2009-CHR} and the Satisfiability Modulo Theories SMT-LIB format \cite{barr-etal-2010-SMT-LIB}, to the Planning Domain Definition Language (PDDL) \cite{pddl-resources-web} and the TPTP format used in the Automated Theorem Proving System Competition (CASC) \cite{tptp-web}. These experiences show that the common ground of a standard language, possibly undergoing continuous refinement and extension, has usually boosted the availability of resources, the deployment of the technology at hand in practical applications, and the effectiveness of systems. Nonetheless, ASP still lacks a standard, high-level input language. We think, however, that the ASP community is mature enough to start developing a common standard input format: an ASP system can be roughly seen as composed of a front-end input language processor and a model generator. The first module is usually (but not necessarily) called a "grounder", since it produces a propositional program obtained from a higher-level specification of the problem at hand. Incidentally, currently developed ASP grounders have recently reached a good degree of maturity and, above all, a fairly large degree of overlap in their input formats. This paves the way for a first serious step towards a common input language for ASP solvers. It thus makes sense to run (part of) the Third ASP Competition on the grounds of a common draft input format, in order to promote the adoption of a newly devised standard and foster the birth of a new standardization working group. In order to meet the above goals, the competition input format should be large enough to embed all of the basic constructs of the language originally specified in \cite{gelf-lifs-91} (lifted to its non-ground version), yet conservative enough to allow all participants to adhere to the standard draft with little or no effort (a small illustrative non-ground encoding is sketched right after this list).
  2. The performance of a given ASP system $S$ might vary greatly on a problem $P$ if fine-tuning on $P$ is performed, either by improving the problem encoding or by tuning internal ad-hoc optimization techniques. Although, on the one hand, it is important to encourage system developers to fine-tune their systems and then compete on this basis, on the other hand it is similarly important to highlight how a solver performs with its default behavior: the user of an ASP system generally has little or no knowledge of the system internals, and might not be aware of which program rewritings and system optimization methods pay off in terms of performance. The Solver Competition should thus highlight the performance of a system when used as an "off-the-shelf black box" on a supposedly unknown problem specification. Rankings in the Solver Competition should give a fairly objective measure of what one can expect when switching from one system to another while keeping all other conditions fixed (problem encoding and default solver settings). The formula of the Solver Competition thus aims at measuring the performance of a given solver when used on a generic problem encoding, not when the encoding is fine-tuned for the specific problems selected for the current competition.
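
As a concrete illustration of the kind of non-ground specification discussed in point 1, the following sketch encodes graph 3-colorability using only plain rules with default negation and an integrity constraint. The syntax is generic ASP notation chosen for illustration, and the predicate names (node, edge, col) are invented for this example; it is not quoted from the competition language documents.

{{{
% Guess one color per node: plain rules with default negation,
% i.e. basic constructs of the original language, lifted to the
% non-ground case through the variable X.
col(X,red)   :- node(X), not col(X,green), not col(X,blue).
col(X,green) :- node(X), not col(X,red),   not col(X,blue).
col(X,blue)  :- node(X), not col(X,red),   not col(X,green).

% Check: adjacent nodes must not share a color (integrity constraint).
:- edge(X,Y), col(X,C), col(Y,C).
}}}

A grounder instantiates the variables X, Y and C over the facts of a concrete instance, producing the propositional program that is then handed to the model generator.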

Given the above considerations, the Solver Competition will be held on the basis of the following principles:

  1. The Solver Competition is open to general-purpose ASP systems able to parse a language in a fixed format (namely ASP-Core and ASP-RfC; see the File and Language Format document \cite{languageformat}).
  2. The competition is run over a selection of problems. For each problem, a corresponding fixed encoding in ASP-Core or ASP-RfC, together with a set of benchmark instances, is chosen by the organizers (see the Call for Problems document \cite{callforproblems}); a toy instance is sketched after this list.
  3. Each participant system will be launched with its default settings on each problem instance.
  4. ''Syntactic'' special-purpose solving techniques, specialized on a per-problem basis, are forbidden.
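
To make principle 2 concrete, a problem instance is simply a set of ground facts, kept separate from the fixed encoding. The toy instance below (matching the colorability sketch above) is hypothetical and only illustrates this separation; the actual instance format is specified in the competition documents.

{{{
% A hypothetical benchmark instance: ground facts only.
% The fixed encoding is supplied by the organizers and must not be
% modified; a participant system is run on encoding + instance with
% its default settings (principle 3).
node(1). node(2). node(3).
edge(1,2). edge(2,3). edge(1,3).
}}}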

Detailed rules can be found in the ''Competition Rules'' document \cite{participationrules}.
