International Planning Competition 2018
Classical Tracks

This is the website for the classical (sequential, deterministic) tracks of IPC 2018. This is the 9th IPC with classical tracks, making them the oldest part of the IPC. In addition to the classical tracks, the IPC also has temporal and probabilistic tracks. You can find information about them on ipc2018.bitbucket.io.

Mailing List: https://groups.google.com/forum/#!forum/ipc2018

Schedule

Call for domains / expression of interest: June 22, 2017
Domain submission deadline: November 30, 2017
Demo problems provided: December 7, 2017
Initial planner submission: January 14, 2018
Feature stop (final planner submission): March 6, 2018
Planner abstract submission deadline: May 17, 2018
Contest run: April - May 2018
Results announced: June 29, 2018
Result analysis deadline: July 2018

Tracks

There are four classical tracks: optimal, cost-bounded, satisficing, and agile. (The multi-core track was canceled due to a low number of participants.) The tracks differ in the amount of resources available and in the scoring method: the optimal and cost-bounded tracks focus on coverage, the satisficing track on plan quality, and the agile track on solving time. We encourage participants to enter their planner in all tracks.

Optimal Track

  • single CPU core
  • 8 GB memory limit
  • 30 min time limit
  • Plans must be optimal.
  • The score of a planner is the number of solved tasks.
  • If a suboptimal or invalid plan is returned, all tasks in the domain are counted as unsolved.
  • If that happens in more than one domain, the entry is disqualified.

Cost-Bounded Track

  • single CPU core
  • 8 GB memory limit
  • 30 min time limit
  • Plans must have a cost not greater than a given bound.
  • The score of a planner is the number of solved tasks.
  • If an invalid plan or a plan exceeding the cost bound is returned, all tasks in the domain are counted as unsolved.
  • If that happens in more than one domain, the entry is disqualified.

Satisficing Track

  • single CPU core
  • 8 GB memory limit
  • 30 min time limit
  • Multiple plans can be returned; the one with the lowest cost is counted.
  • The score of a planner on a solved task is the ratio C*/C, where C is the cost of the cheapest returned plan and C* is the cost of a reference plan (see the worked example after this list). The score on an unsolved task is 0. The score of a planner is the sum of its scores over all tasks.
  • If an invalid plan is returned, all tasks in the domain are counted as unsolved.
  • If that happens in more than one domain, the entry is disqualified.
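
As a worked example with illustrative numbers: if the reference plan for a task costs 20 and the cheapest plan returned by a planner costs 25, the planner scores 20/25 = 0.8 on that task.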

Agile Track

  • single CPU core
  • 8 GB memory limit
  • 5 min time limit
  • The cost of the returned plan is ignored; only the CPU time to discover a plan is counted.
  • The score of a planner on a solved task is 1 if the task was solved within 1 second and 0 if it was not solved within the resource limits. If the task was solved in T seconds (1 ≤ T ≤ 300), its score is 1 - log(T)/log(300) (see the sketch after this list). The score of a planner is the sum of its scores over all tasks.
  • If an invalid plan is returned, all tasks in the domain are counted as unsolved.
  • If that happens in more than one domain, the entry is disqualified.
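
To make the formula above concrete, here is a small Python sketch of the agile score for a single task (illustrative only, not the official scoring script):

    import math

    def agile_score(solve_time, solved=True):
        """Agile-track score for one task, following the formula above."""
        if not solved:
            return 0.0  # not solved within the resource limits
        if solve_time <= 1.0:
            return 1.0  # solved within one second
        # For 1 <= T <= 300, the score decays logarithmically from 1 to 0.
        return 1.0 - math.log(solve_time) / math.log(300.0)

    assert agile_score(1.0) == 1.0          # instant solution: full score
    assert abs(agile_score(300.0)) < 1e-9   # solved at the time limit: no score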

PDDL Fragment

IPC 2018 will use a subset of PDDL 3.1, as done in IPC 2011 and IPC 2014. As in previous classical tracks, planners must support the subset of the language involving STRIPS, action costs, negative preconditions, and conditional effects (possibly in combination with forall, as done in IPC 2014).
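
For illustration, here is a schematic action that combines a conditional effect with forall; the predicates and types are invented for this example and do not come from a competition domain:

    (:action sweep
      :parameters (?r - robot)
      :precondition (active ?r)
      :effect (forall (?c - cell)
                (when (at ?r ?c) (clean ?c))))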

Although we committed to not requiring a larger subset of PDDL than previous IPCs did, some of the domains are more naturally expressed with more advanced PDDL features such as derived predicates or quantified preconditions. For planners natively supporting such encodings, we will consider providing alternative encodings and only count the results obtained with the best encoding for each planner. This gives a small advantage to planners supporting larger subsets of PDDL.

Most planners in previous IPCs rely on a grounding procedure that instantiates the entire planning task before solving it. In most benchmarks from previous IPCs, grounding was easy compared to solving the problem. However, this is not the case in some planning applications that involve predicates and action schemas of large arity, even if the problem itself is easy to solve. Previous IPCs dealt with this by reformulating the problem into a representation more suitable for the participating planners. We will consider introducing some domains where grounding is challenging. As an example, you may consider this planning domain (note: this is a testing domain and it won't be used in the competition).

Registration

As in previous editions, competitors must submit the source code of their planners, which the organizers will run on the actual competition domains and problems, unknown to the competitors until that time. This way, no fine-tuning of the planners is possible.

All competitors must submit an abstract (max. 300 words) and a 4-page paper describing their planners. After the competition we encourage the participants to analyze the results of their planner and submit an extended version of their abstract. An important requirement for IPC 2018 competitors is to give the organizers the right to post their paper and the source code of their planners on the official IPC 2018 web site.

Registration Process

We will use the container technology "Singularity" this year to promote reproducibility and help with compilation issues that have caused problems in the past. More details on Singularity can be found below.

To register your planner, create a repository (Mercurial and Git repositories are accepted) on Bitbucket and give read access to ipc2018-classical-bot. Then create one branch per track you want to participate in and name it according to the following list.

  • ipc-2018-seq-opt (optimal track)
  • ipc-2018-seq-sat (satisficing track)
  • ipc-2018-seq-agl (agile track)
  • ipc-2018-seq-cbo (cost-bounded track)

Up to two versions of the same planner are allowed to participate. To submit two different versions of the same planner, simply create two different repositories. In each branch, add a file called Singularity to the root directory of your repository. This file is used to bootstrap a Singularity container and to run the planner (an example can be found in our demo submission; for more details on Singularity, see below).

We will build all planners once a day and run them on a number of test cases. You can see the results for your planner on the build status page. Test your Singularity file locally (see below) and make sure it passes our automated tests.

A planner is officially registered in a track if it has a green box for that track on the build status page on January 14. You can still make any code changes you want until March 6. The build status on the website updates (once a day) when you push new changes to the registered branches.

Bug fixing policy

We will fork your repository on March 6. If you find any bugs in your code afterwards (or if we detect any while running your code), you can create a pull request to our fork with a patch fixing the bug. Only bug fixes will be accepted after the deadline (in particular, we will not accept patches modifying behavior or tuning parameters).

Details on Singularity

In an effort to increase reproducibility and reduce the effort of running future IPCs, we are using software containers that contain the submitted planner and everything required to run it. We are using Singularity, an alternative to the well-known Docker. In contrast to Docker, Singularity is specifically designed for scientific experiments on HPC clusters and has low overhead.

Singularity containers can be viewed as lightweight alternatives to virtual machines that carry a program and all parts of the OS that are necessary to run it. They can be based on any Docker image. We created an example submission (Singularity file) that uses the latest Ubuntu as a basis and uses apt-get to install the packages required by Fast Downward. It then builds the planner from the files that are next to the Singularity file in the repository.
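
Schematically, such a file might look like the sketch below. The package list, build command, and planner invocation are illustrative placeholders; refer to the actual demo submission for an authoritative version.

    Bootstrap: docker
    From: ubuntu:latest

    %setup
        # Runs on the host: copy the planner sources (which live next to
        # this Singularity file in the repository) into the container.
        cp -r . $SINGULARITY_ROOTFS/planner

    %post
        # Runs inside the container: install build dependencies and compile.
        apt-get update
        apt-get -y install cmake g++ make python
        cd /planner
        ./build.py

    %runscript
        # Arguments: domain file, problem file, plan output file.
        /planner/fast-downward.py --plan-file "$3" "$1" "$2" --search "astar(lmcut())"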

In the following, we collect and answer frequently asked questions about Singularity. We'll update this section as we get more questions. If you run into problems using Singularity and your problem is not answered here, let us know.

Which version of Singularity should I use?
We used version 2.4 for the competition. In April 2020, we updated the planner Singularity files to be compatible with version 3.5 as well. We recommend installing the current version of Singularity following their installation guide.

How do I test my Singularity file locally?
To test your Singularity script, please install Singularity (see above) and run the following commands (replacing our demo submission with your repository and ipc-2018-seq-opt with the track you want to run):


    wget https://bitbucket.org/ipc2018-classical/demo-submission/raw/ipc-2018-seq-opt/Singularity
    sudo singularity build planner.img Singularity
    mkdir rundir
    cp path/to/domain.pddl rundir
    cp path/to/problem.pddl rundir
    RUNDIR="$(pwd)/rundir"
    DOMAIN="$RUNDIR/domain.pddl"
    PROBLEM="$RUNDIR/problem.pddl"
    PLANFILE="$RUNDIR/sas_plan"
    COSTBOUND=42 # only in cost-bounded track
    ulimit -t 1800    # 30 min CPU time limit
    ulimit -v 8388608 # 8 GB memory limit (value in KiB)
    singularity run -C -H $RUNDIR planner.img $DOMAIN $PROBLEM $PLANFILE $COSTBOUND

The last command also shows how we will call the container during the competition: the parameter "-H" mounts the directory containing the PDDL files into the file system of the container and uses it as the user's home directory. The parameter "-C" then isolates the container from the rest of the system. Only files written to the mounted directory will be stored permanently. Other created files (for example, in /tmp) only persist for the current session and are cleaned up afterwards. When the container is run on two instances at the same time, their run directories and sessions are different, so the two runs cannot interact. The container itself is read-only after its creation.

How do I know whether my planner compiles on your machines?
We build your code about once per day and show the results for all planners on the build status page.

Does my planner have write access to the file system?
Yes, but only to certain directories. The runscript of your container is started from the home directory of the container, which is also the directory that contains the input files. You have write access to this directory, and files written there will be persistent. However, the home directory will be different in each run, so each run starts from the same clean container. You also have write access to the directory /tmp, but files written there will be deleted after the run. See the question above for how to set up Singularity in this way for testing.

Can my repository stay private?
Yes; only the Bitbucket user ipc2018-classical-bot needs access. However, the build status page will show logs for all registered planners. We consider this information public.

What if my planner depends on a proprietary library?
Please contact us if your license does not permit you to package the library into the container.

If we can acquire a license, we will mount the installation files for the library while building the container. You can then copy the installation file into the container in the %setup step and install it in the %post step.

We currently have a license for CPLEX and make the installer for version 12.7.1 (64 bit) available during setup. You can see an example of the installation in the multi-core track of our demo submission. For all other libraries, please get in touch with us as soon as possible.

Can my Singularity file download a precompiled version of my planner?
This is technically possible, but please don't do it. Your submission has to include your source code and should be built from that code. For increased reproducibility, please make your repository as self-contained as possible. During the competition, Singularity files used an environment variable to copy the source code into the container. Unfortunately, this is no longer possible, and we have updated the Singularity files to clone the code from Bitbucket instead.

Do I have to notify you after registering my planner?
No, we will get an automated notification about this and will add you to the list of teams. Your planner should show up on the build status page after one or two days. If your planner doesn't show up on the build status page, or if you have any other questions or problems, you are of course welcome to contact us. Please add the list of authors, the name and a description of the planner, and other metadata to your Singularity file according to our example Singularity file.

Which operating systems can I use inside the container?
Singularity images can be based on any Docker image. We used a basic Ubuntu image as the basis of our demo, but you are welcome to use other images. The Singularity image must run on CentOS, but most Unix-based images will work. Windows and OS X are not supported. If you are trying to generate a small image, a lightweight OS such as Alpine Linux might be an option. However, be aware that Alpine Linux uses musl instead of glibc.

Do I have to minimize the image size?
It is not necessary to reduce the image size as much as possible, but we appreciate any effort to keep the images small. After you compile your planner, you may remove the planner source code and packages that are only required for building it. Our demo submission shows this in the multi-core track.

Why is my Bootstrap line not recognized?
This can be caused by Windows line endings in the Singularity file. The line "Bootstrap: docker" is then parsed as "Bootstrap: docker\r" and not recognized. Using Linux-style line endings should fix the issue.

Input/Output Rules

Input: The runscript specified in the Singularity file should accept 3 parameters (4 in the cost-bounded track):


    ./runscript <domain> <problem> <plan-output-file> [<cost-bound>]
            

Output: The planner must write the plan file(s) to the location specified by <plan-output-file>, except in the satisficing track, where plans can also be written to files <plan-output-file>.1, <plan-output-file>.2, <plan-output-file>.3, etc.
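
For illustration, a minimal runscript honoring this interface might look like the following Python sketch; the planner binary "./my-planner" and its flags are hypothetical placeholders:

    #!/usr/bin/env python3
    # Minimal sketch of an IPC 2018 runscript; adapt it to your planner.
    import subprocess
    import sys

    def main():
        domain, problem, plan_file = sys.argv[1:4]
        # The cost bound is only passed in the cost-bounded track.
        cost_bound = sys.argv[4] if len(sys.argv) > 4 else None

        cmd = ["./my-planner", domain, problem, "--plan-file", plan_file]
        if cost_bound is not None:
            cmd += ["--cost-bound", cost_bound]
        sys.exit(subprocess.run(cmd).returncode)

    if __name__ == "__main__":
        main()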

Random Seed: We won't provide any random seed as part of the input to the planners. However, for the sake of replicability of the experiments, we ask all participants to fix a random seed and make their planner's behaviour as deterministic as possible.

Optional Log Output: As part of the competition results, we would like to publish detailed data about the participating planners that helps to understand the competition results in more detail. For that, we would appreciate your collaboration in printing some statistics about your planner that can be parsed afterwards. This is completely optional, and we will not rely on this data to determine the winner. We will parse the following information:

  • Time format: all times in the following output are optional and can be given in different formats.
    • Absolute (time passed since the planner started): "[732.736s]"
    • Relative (relative to last printed time): "[+32.736s]"
    • Absolute reset (all absolute times afterwards are relative to this time): "[=+732.736s]"
  • Plan cost bounds: print a line as soon as a new lower/upper bound is found
    • "proven lower bound: 124 TIME"
    • "proven upper bound: 250 TIME"
  • Portfolio planner techniques: print when a technique is started/ended (can also be used for preprocess/translate) and which technique solved the problem
    • "starting technique: NAME TIME"
    • "technique finished: NAME TIME"
    • "most useful technique: NAME"
  • Task size: print statistics on the problem (potentially after preprocessing it)
    • "Translator variables: 62"
    • "Translator facts: 133"
    • "Translator goal facts: 3"
    • "Translator operators: 358"
  • Search statistics:
    • "Expanded 85 state(s)."
    • "Reopened 0 state(s)."
    • "Evaluated 108 state(s)."
    • "Evaluations: 108"
    • "Generated 214 state(s)."
    • "Dead ends: 0 state(s)."
    • "Expanded until last jump: 77 state(s)."
    • "Reopened until last jump: 0 state(s)."
    • "Evaluated until last jump: 101 state(s)."
    • "Generated until last jump: 200 state(s)."
  • Others: Every type of planner has different relevant data to report, and it is impossible for us to give a complete list of all relevant attributes. The list above is based on the two most common kinds of planners: portfolios and heuristic search planners (based on Fast Downward). However, this list is not exhaustive. If your planner reports other useful statistics, or reports them in a different format, we would like to parse them as well. Please write us an email with a description of the information that your planner provides in the logs and, if possible, a Python regular expression that parses it.
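
For example, a pattern along the following lines (an illustrative Python sketch, not one of our actual parsers) would extract the expanded-state count from the search statistics above:

    import re

    # Matches lines such as "Expanded 85 state(s)." from the list above.
    EXPANDED_RE = re.compile(r"Expanded (\d+) state\(s\)\.")

    def parse_expanded(line):
        """Return the expanded-state count, or None if the line does not match."""
        match = EXPANDED_RE.search(line)
        return int(match.group(1)) if match else None

    assert parse_expanded("Expanded 85 state(s).") == 85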

Calls for Participation and Domains

Please forward the following calls to all interested parties.

Domains

Eleven domains were used in the competition. A repository containing the PDDL files, the selected instances, reference plans, and our best known bounds for each instance is available on Bitbucket. The instances are also available through the API of planning.domains. We also have detailed descriptions of all domains, best known bounds, and generators for some domains on a separate page.

Planners

Including planner variants, 35 planners participated in the classical tracks. Since most planners participated in multiple tracks, the total number of entries was 73. Planner abstracts for each planner are available as a planner abstract booklet or as individual files linked below.

The source code of all entries is publicly available on Bitbucket. To build a planner, download its Singularity file and use Singularity to build its image. For example, this is how you'd get the entry of team 1 in the satisficing track:

wget https://bitbucket.org/ipc2018-classical/team1/raw/ipc-2018-seq-sat/Singularity
sudo singularity build planner.img Singularity
Note that some planners require the LP solver CPLEX to compile. To build such a planner, acquire a CPLEX license and download the installation files to the directory where you build the image. Since building a Singularity image requires root privileges, we recommend building the image inside a virtual machine. For example, you may use Vagrant with this Vagrantfile. It expects a directory ./input where the planner is located and an empty directory ./output where the image will be created. For instructions on how to run the generated image, please refer to the FAQ. Note that some planners expect an external time limit set with ulimit -t and will not work otherwise.

Some planners had minor bug fixes after the competition. The revision at the tip of the track's branch (e.g., ipc-2018-seq-opt) includes all fixes. We recommend using this version in all experiments. To see the changes relative to the competition, compare against the tag ipc-2018-seq-opt-competition (replace "opt" according to the branch). If this tag does not exist, there were no bug fixes in that track.

Optimal Track

Satisficing Track

Agile Track

Except for Fast Downward Stone Soup 2018, the planners participating in the agile track are the same as those participating in the satisficing track. To access the code, use the branch ipc-2018-seq-agl instead of ipc-2018-seq-sat. The LAMA 2011 baseline planner was adapted to stop after discovering the first solution.

Cost-Bounded Track

Multi-core Track (canceled)

  • IBaCoP-2018 (code) and IBaCoP2-2018 (code)
    by Isabel Cenamor, Tomas de la Rosa and Fernando Fernandez
  • ArvandHerd 1 (code) and ArvandHerd 2 (code)
    by Richard Valenzano, Hootan Nakhost, Martin Müller, Jonathan Schaeffer, and Nathan Sturtevant

Results

The results were presented at the 28th International Conference on Automated Planning and Scheduling on June 29 in Delft. The presentation slides of this talk contain additional detail.

An overview of the scores is available online. Detailed results for all planners are available in two forms: a small repository per track contains an HTML table and a JSON file with the parsed values for all instances (optimal track, satisficing track, agile track, cost-bounded track). These JSON files are compatible with downward lab but can also be used without it. If you require more detail about individual planner runs, the raw logs of all runs, including all metadata generated by our scripts, are available as well.

Even more detailed data, including files generated by the planners at runtime, is available on request.

Based on these results, we proudly present the following awards.

Optimal Track

  • Winner: Delfi 1
    by Michael Katz, Shirin Sohrabi, Horst Samulowitz, and Silvan Sievers
  • Runner-Up: Complementary
    by Santiago Franco, Levi H. S. Lelis, Mike W. Barley, Stefan Edelkamp, Moisés Martínez, and Ionut Moraru

Satisficing Track

  • Winner: Fast Downward Stone Soup 2018
    by Jendrik Seipp and Gabriele Röger
    and Fast Downward Remix
    by Jendrik Seipp
  • Runner-Up: LAPKT-DUAL-BFWS
    by Nir Lipovetzky, Miquel Ramírez, Guillem Francès, and Hector Geffner

Agile Track

  • Winner: LAPKT-BFWS-Preference
    by Nir Lipovetzky, Miquel Ramírez, Guillem Francès, and Hector Geffner
  • Runner-Up: Saarplan
    by Maximilian Fickert, Daniel Gnad, Patrick Speicher, and Jörg Hoffmann

Cost-Bounded Track

  • Winner: Fast Downward Stone Soup 2018
    by Jendrik Seipp and Gabriele Röger
    and Fast Downward Remix
    by Jendrik Seipp
  • Runner-Up: Saarplan
    by Maximilian Fickert, Daniel Gnad, Patrick Speicher, and Jörg Hoffmann

Outstanding Domain Submission Award

  • Winner: Organic Synthesis
    by Hadi Qovaizi, Arman Masoumi, Anne Johnson, Russell Viirre, Andrew McWilliams, and Mikhail Soutchanski (Faculty of Science, Ryerson University, Toronto, Canada)

Organizers

Contact us: ipc-2018-organizers@googlegroups.com