This is the website for the classical (sequential, deterministic) track of IPC 2018. This is the 9th IPC containing classical tracks, making them the oldest part of the IPC. In addition to the classical tracks, the IPC also has temporal and probabilistic tracks; you can find information about them on ipc2018.bitbucket.io.
Mailing List: https://groups.google.com/forum/#!forum/ipc2018
Event | Date |
---|---|
Call for domains / expression of interest | June 22, 2017 |
Domain submission deadline | November 30, 2017 |
Demo problems provided | December 7, 2017 |
Initial planner submission | January 14, 2018 |
Feature stop (final planner submission) | March 6, 2018 |
Planner Abstract submission deadline | May 17, 2018 |
Contest run | April - May, 2018 |
Results announced | June 29, 2018 |
Result analysis deadline | July, 2018 |
There are four classical tracks: optimal, cost-bounded, satisficing, and agile. (The multi-core track was canceled due to a low number of participants.) The tracks differ in the amount of resources available and in the scoring method: the optimal and cost-bounded tracks focus on coverage, the satisficing track on plan quality, and the agile track on solving time. We encourage participants to enter their planner in all tracks.
IPC 2018 will use a subset of PDDL 3.1, as done in IPC 2011 and IPC 2014. As in previous classical tracks, planners must support the subset of the language comprising STRIPS, action costs, negative preconditions, and conditional effects (possibly in combination with forall, as in IPC 2014).
Despite our commitment not to require a larger subset of PDDL than previous IPCs did, some of the domains are more naturally expressed using more advanced PDDL features such as derived predicates or quantified preconditions. For planners that natively support such encodings, we will consider providing alternative encodings and only count the results obtained with the best encoding for each planner. This gives a small advantage to planners supporting larger subsets of PDDL.
Most planners in previous IPCs rely on a grounding procedure that instantiates the entire planning task before solving it. In most benchmarks from previous IPCs, grounding was easy compared to solving the problem. However, this is not the case in some planning applications, which involve predicates and action schemas of large arity even when the problem itself is easy to solve. Previous IPCs dealt with this by reformulating the problem into a representation more suitable for the participating planners. We will consider introducing some domains where grounding is challenging. As an example, you may consider this planning domain (note: this is a testing domain and it will not be used in the competition).
As in previous editions, competitors must submit the source code of their planners, which will be run by the organizers on the actual competition domains/problems, unknown to the competitors until that time. This way, no fine-tuning of the planners will be possible.
All competitors must submit an abstract (max. 300 words) and a 4-page paper describing their planners. After the competition we encourage the participants to analyze the results of their planner and submit an extended version of their abstract. An important requirement for IPC 2018 competitors is to give the organizers the right to post their paper and the source code of their planners on the official IPC 2018 web site.
We will use the container technology "Singularity" this year to promote reproducibility and help with compilation issues that have caused problems in the past. More details on Singularity can be found below.
To register your planner, create a repository (Mercurial and Git repositories are accepted) on Bitbucket and give read access to ipc2018-classical-bot. Then create one branch per track you want to participate in and name it according to the following list.
We will build all planners once a day and run them on a number of test cases. You can see the results for your planner on the build status page. Test your Singularity file locally (see below) and make sure it passes our automated tests.
A planner is officially registered in a track if it has a green box for that track on the build status page on January 14. You can still make any code changes you want until March 6. The build status on the website will update (once a day) when you push new changes to the registered branches.
We will fork your repository on March 6. If you find any bugs in your code afterwards (or if we detect any while running your code), you can create a pull request to our fork with a patch fixing the bug. Only bug fixes will be accepted after the deadline (in particular, we will not accept patches modifying behavior or tuning parameters).
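To summarize the registration workflow with a concrete sketch (assuming a Git repository that you have already created on Bitbucket and cloned locally; ipc-2018-seq-opt is the branch name used elsewhere on this page for the optimal track, and the file paths are placeholders):
git checkout -b ipc-2018-seq-opt              # one branch per track you enter (here: optimal)
git add Singularity src/                      # your Singularity file and planner sources
git commit -m "IPC 2018 optimal-track submission"
git push -u origin ipc-2018-seq-opt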
In an effort to increase reproducibility and reduce the effort of running future IPCs, we are using software containers that bundle the submitted planner and everything required to run it. We are using Singularity, which is an alternative to the well-known Docker. Singularity (in contrast to Docker) is specifically designed for scientific experiments on HPC clusters and has low overhead.
Singularity containers can be viewed as light-weight alternatives to virtual machines that carry a program and all parts of the OS necessary to run it. They can be based on any Docker image. We created an example submission (Singularity file) that uses the latest Ubuntu as a basis and uses apt-get to install the packages required for Fast Downward. It then builds the planner from the files next to the Singularity file in the repository.
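To give a rough idea of the structure of such a file, here is a sketch along the same lines; the package list, build command, and search configuration below are illustrative assumptions, not a copy of the linked example. The arguments received by the runscript (domain, problem, plan file) are explained in the FAQ below.
Bootstrap: docker
From: ubuntu:latest
%setup
cp -r . $SINGULARITY_ROOTFS/planner   # copy the planner sources that live next to this Singularity file
%post
apt-get update                        # install build dependencies (illustrative package list)
apt-get -y install cmake g++ make python
cd /planner && ./build.py             # placeholder build command
%runscript
/planner/fast-downward.py --plan-file "$3" "$1" "$2" --search "astar(lmcut())"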
In the following, we collect and answer frequently asked questions about Singularity. We'll update this section as we get more questions. If you run into problems using Singularity and your problem is not answered here, let us know.
To test your Singularity script, please install Singularity (see above) and run the following commands (replacing our demo submission with your repository and ipc-2018-seq-opt with the track you want to run):
wget https://bitbucket.org/ipc2018-classical/demo-submission/raw/ipc-2018-seq-opt/Singularity
sudo singularity build planner.img Singularity
mkdir rundir
cp path/to/domain.pddl rundir
cp path/to/problem.pddl rundir
RUNDIR="$(pwd)/rundir"
DOMAIN="$RUNDIR/domain.pddl"
PROBLEM="$RUNDIR/problem.pddl"
PLANFILE="$RUNDIR/sas_plan"
COSTBOUND=42 # only in cost-bounded track
ulimit -t 1800    # CPU time limit: 30 minutes
ulimit -v 8388608 # memory limit: 8 GiB (value in KiB)
singularity run -C -H $RUNDIR planner.img $DOMAIN $PROBLEM $PLANFILE $COSTBOUND
The last command also shows how we will call the container during the competition: the parameter "-H" mounts the directory containing the PDDL files into the file system of the container and uses it as the user's home directory. The parameter "-C" then isolates the container from the rest of the system. Only files written to the mounted directory will be stored permanently. Other created files (for example in /tmp) only persist for the current session and are cleaned up afterwards. When running the container on two instances at the same time, their run directories and sessions will be different, so the two runs cannot interact. The container itself is read-only after its creation.
We will also build your code about once per day and show the results for all planners on the build status page.
Your planner may write temporary files to /tmp, but files written there will be deleted after the run. See the question above for how to set up Singularity in this way for testing.
Please contact us if your license does not permit you to package the library into the container. If we can acquire a license, we will mount the installation files for the library while building the container. You can then copy the installation file into the container in the %setup step and install it in the %post step.
We currently have a license for CPLEX and make the installer for version 12.7.1 (64 bit) available during setup. You can see an example of the installation in the multi-core track of our demo submission. For all other libraries, please get in touch with us as soon as possible.
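As a rough sketch (the installer file name and installer flags below are assumptions, not necessarily the exact files we mount), the relevant parts of a Singularity file could look like this:
%setup
cp cplex_studio12.7.1.linux-x86-64.bin $SINGULARITY_ROOTFS/   # installer mounted by us during the build (placeholder name)
%post
chmod +x /cplex_studio12.7.1.linux-x86-64.bin
/cplex_studio12.7.1.linux-x86-64.bin -DLICENSE_ACCEPTED=TRUE -i silent   # silent install
rm /cplex_studio12.7.1.linux-x86-64.bin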
If you create your Singularity file on Windows, it may end up with Windows-style line endings: Bootstrap: docker is then parsed as Bootstrap: docker\r and not recognized. Using Linux-style line endings should fix the issue.
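For example, one way to convert the file in place (a standard sed invocation; dos2unix achieves the same):
sed -i 's/\r$//' Singularity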
Input: The runscript specified in the Singularity file should accept three parameters (four in the cost-bounded track):
./runscript <domain> <problem> <plan-output-file> [<cost-bound>]
Output: The planner must write the plan file(s) to the location specified in <plan-output-file>, except in the satisficing track, where plans may also be written to <plan-output-file>.1, <plan-output-file>.2, <plan-output-file>.3, etc.
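For illustration, a minimal runscript honoring these input and output conventions might look as follows; the planner binary and its command-line options are placeholders, not part of the required interface:
#!/bin/bash
DOMAIN="$1"                 # domain file
PROBLEM="$2"                # problem file
PLANFILE="$3"               # plan output file
COSTBOUND="$4"              # empty in all tracks except the cost-bounded one
/planner/my-planner "$DOMAIN" "$PROBLEM" --plan-file "$PLANFILE" ${COSTBOUND:+--cost-bound "$COSTBOUND"}   # placeholder invocation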
Random Seed: We won't provide any random seed as part of the input to the planners. However, for the sake of replicability of the experiments, we ask all participants to fix a random seed and make their planner's behaviour as deterministic as possible.
Optional Log Output: As part of the competition results, we would like to publish detailed data about the participating planners that helps to understand the competition results in more detail. For that, we would appreciate your collaboration in printing some statistics about your planner that can be parsed afterwards. This is completely optional and we will not rely on this data to determine the winner. We will parse the following information:
Please forward the following calls to all interested parties.
Eleven domains were used in the competition. A repository containing the PDDL files, the selected instances, reference plans, and our best known bounds for each instance is available on Bitbucket. The instances are also available through the API of planning.domains. We also have detailed descriptions of all domains, best known bounds, and generators for some domains on a separate page.
Including planner variants, 35 planners participated in the classical tracks. Since most planners participated in multiple tracks, the total number of entries was 73. Planner abstracts for each planner are available as a planner abstract booklet or as individual files linked below.
The source code of all entries is publicly available on Bitbucket. To build a planner, download its Singularity file and use Singularity to build its image. For example, this is how you would get the entry of team 1 in the satisficing track:
wget https://bitbucket.org/ipc2018-classical/team1/raw/ipc-2018-seq-sat/Singularity
sudo singularity build planner.img Singularity
Note that some planners require the LP solver CPLEX to compile. To build such a planner, acquire a CPLEX license and download the installation files to the directory where you build the image.
Since building a Singularity image requires root privileges, we recommend building the image inside a virtual machine. For example, you may use Vagrant with this Vagrantfile. It expects a directory ./input where the planner is located and an empty directory ./output where the image will be created.
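A rough sketch of the workflow, assuming the Vagrantfile builds the image when the machine is provisioned (the planner path is a placeholder):
mkdir input output
cp -r path/to/planner/. input/   # planner sources including the Singularity file
vagrant up                       # create the VM and build the image; it should appear in ./output
vagrant destroy -f               # remove the VM when done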
For instructions on how to run the generated image, please refer to the FAQ. Note that some planners expect that an external time limit is set with ulimit -t and will not work otherwise.
Some planners had minor bug fixes after the competition. The revision at the tip of each track's branch (e.g., ipc-2018-seq-opt) includes all fixes, and we recommend using this version in all experiments. To see the changes compared to the competition version, compare against the tag ipc-2018-seq-opt-competition (replace opt according to the branch). If this tag does not exist, there were no bug fixes in that track.
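For example, assuming the entry is a Git repository (teamX is a placeholder; Mercurial entries have equivalent hg commands), the post-competition changes in the optimal track can be inspected like this:
git clone -b ipc-2018-seq-opt https://bitbucket.org/ipc2018-classical/teamX.git
cd teamX
git diff ipc-2018-seq-opt-competition ipc-2018-seq-opt   # shows the bug fixes, if any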
Except for Fast Downward Stone Soup 2018, the planners participating in the agile track are the same as those in the satisficing track; to access their code, use the branch ipc-2018-seq-agl instead of ipc-2018-seq-sat. The LAMA 2011 baseline planner was adapted to stop after discovering the first solution.
The results were presented at the 28th International Conference on Automated Planning and Scheduling on June 29 in Delft. The presentation slides of this talk contain additional detail.
An overview of the scores is available online. Detailed results for all planners are available in two forms: a small repository per track contains an HTML table and a JSON file with the parsed values for all instances (optimal track, satisficing track, agile track, cost-bounded track). These JSON files are compatible with downward lab but can also be used without it. If you require more detail about individual planner runs, the raw logs of all runs, including all metadata generated by our scripts, are available as well:
Based on these results, we proudly present the following awards.
Contact us: ipc-2018-organizers@googlegroups.com