The ACRV Picking Benchmark (APB)


Benchmark Description

We propose a physical benchmark for robotic picking, covering the overall design, the objects, the configuration, and guidance on appropriate technologies to solve it. Challenges are an important way to drive progress, but they occur only occasionally and their test conditions are difficult to replicate outside the challenge itself. This benchmark is motivated by our experience in the recent Amazon Picking Challenge and consists of a commonly available shelf, 42 objects, a set of stencils, and standardised task setups.
A major focus throughout the design of this benchmark was to maximise reproducibility: we define a number of carefully chosen scenarios with precise instructions on how to place, orient, and align objects with the help of printable stencils. To make the benchmark as accessible as possible to the research community, a white IKEA shelf is used for all picking tasks. Furthermore, we carefully curated the set of 42 objects to ensure global availability and to reduce the chance of import restrictions.

Download: the paper, the baseline code, the object dataset!

How to get started

Have a look at the manual, then at our baseline system!

All that is needed to get your picking robot ranked with the ACRV Picking Benchmark is an IKEA Kallax shelf (available from your local IKEA) and the object dataset consisting of 42 items (see the shopping list here).
Once you are finished setting up, choose a task, run it 3 consecutive times, and report the success rate!

Check the FAQs and feel free to contact us if you have questions!

Benchmark Details

This page is continuously updated and is intended as a point of discussion for the wider robotics research community. Feel free to contact us!

Shelf

Our benchmark uses the commonly available white IKEA Kallax shelf with eight 33 x 33 x 39 cm bins (w x h x d). The shelf can be ordered from IKEA and is conveniently available worldwide, unlike the proprietary Kiva shelf used during the APC.
As the shelf is required to be moved before each scored run (each corner within a 2 cm square), the robot needs to localise the shelf. A 3D model of the shelf is available to facilitate this (STL file, collected PCD).
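Since the shelf pose changes between runs, one simple way to use the provided model is point-cloud registration. Below is a minimal sketch using Open3D's ICP; the file names and the initial pose guess are placeholders, not part of the benchmark.

```python
# A minimal sketch of localising the shelf against the provided 3D model
# with Open3D's ICP. The file names and the initial pose guess are
# placeholders; the scene cloud would come from the robot's depth camera.
import numpy as np
import open3d as o3d

model = o3d.io.read_point_cloud("kallax_shelf.pcd")       # hypothetical path
scene = o3d.io.read_point_cloud("depth_camera_scan.pcd")  # hypothetical path

init = np.eye(4)  # rough prior, e.g. the shelf pose from the previous run
result = o3d.pipelines.registration.registration_icp(
    model, scene,
    max_correspondence_distance=0.05,  # 5 cm; the shelf only moves ~2 cm
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("Estimated shelf pose in the camera frame:\n", result.transformation)
```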




Setups

The baseline system was tested on four setups of increasing complexity. For the easy task, our system's results were quite consistent, while for the more complicated tasks our depth sensor and perception system were unable to produce object segments for the reflective or black objects, resulting in a poor score. In addition to the system success rate over three runs, we encourage teams to report the quickest time-to-first-pick (to foster research into faster robot systems).
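For illustration, here is a minimal sketch of how the two reported numbers could be computed from per-run logs; the log fields and the exact success-rate definition (items picked over items requested, pooled across runs) are our assumptions, not a prescribed format.

```python
# A minimal sketch of computing the two reported metrics from per-run logs.
# Field names and the example numbers are purely illustrative.
runs = [
    {"picked": 3, "requested": 4, "time_to_first_pick_s": 42.0},
    {"picked": 4, "requested": 4, "time_to_first_pick_s": 38.5},
    {"picked": 2, "requested": 4, "time_to_first_pick_s": 51.2},
]

success_rate = sum(r["picked"] for r in runs) / sum(r["requested"] for r in runs)
fastest_first_pick = min(r["time_to_first_pick_s"] for r in runs)

print(f"Success rate over {len(runs)} runs: {success_rate:.0%}")
print(f"Quickest time-to-first-pick: {fastest_first_pick:.1f} s")
```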






Objects

This benchmark consists of 42 unique objects (Object List [PDF], Shopping List (Google Sheet)). The items chosen present various challenges for both perception and manipulation. Some items, like the tissue box, are both easy to manipulate and relatively easy to detect. Other items, like the nail, are both difficult to detect (small, reflective) and difficult to manipulate. The set includes items similar in appearance to the background, items similar in appearance to each other, and items invisible to low-cost depth sensors. Transparent, reflective, deformable, and odd-shaped items are also included.
Labelled Image Dataset
Images for the APB Dataset are available for download: the raw APB Dataset (42 GB), bounding boxes (.mat files), and labelled image proposals (extracted bounding boxes).
The dataset contains images collected on the IKEA shelf with the objects in various configurations plus labels (bounding boxes) and the segments for each object:
(Figures: example dataset image and corresponding object segment)
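As a starting point for working with the labels, the sketch below loads one of the .mat files with SciPy; the variable names stored inside the files are assumptions, so inspect the keys of the loaded dict first.

```python
# A minimal sketch of loading one bounding-box label file with SciPy. The
# variable names inside the .mat files are assumptions; inspect the keys
# of the loaded dict to find the actual ones.
from scipy.io import loadmat

mat = loadmat("labels/example_image.mat")  # hypothetical path
print([k for k in mat if not k.startswith("__")])  # actual variable names

# e.g. if the file stores an N x 4 array of [x, y, width, height] boxes:
# for x, y, w, h in mat["boxes"]:
#     print(f"box at ({x}, {y}), size {w} x {h}")
```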



Stencils

To precisely control the placement of objects in the shelf bins, we created a set of stencils. Stencils can be printed on A4 paper and feature light-grey, numbered markers (1 - 9). Objects are placed on these markers according to a layout specification (a hypothetical machine-readable example is sketched below).
Four stencils are available so far to help place the objects:
(Figures: the Crux, Cassiopeia, Dipper, and Orion stencils)
Placing objects with the help of a stencil: (left) the Crux stencil is placed in the bin, then (right) the object is placed on the markers.
(Figures: stencil in the shelf; box placed with the help of a stencil)
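For illustration, a layout specification could be represented as follows; this structure, together with the bin, stencil, and marker assignments shown, is hypothetical and only mirrors the task descriptions below.

```python
# A hypothetical machine-readable form of a layout specification: each bin
# is assigned a stencil, and objects are mapped to its numbered markers
# (1-9). The assignments shown are illustrative, not an official task.
layout = {
    "bin_A": {"stencil": "crux",
              "markers": {1: "elmers_washable_no_run_school_glue"}},
    "bin_D": {"stencil": "orion",
              "markers": {3: "kleenex_tissue_box"}},
}

for bin_name, spec in layout.items():
    print(f"{bin_name}: place the '{spec['stencil']}' stencil")
    for marker, obj in sorted(spec["markers"].items()):
        print(f"  marker {marker}: {obj}")
```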
We encourage the community to design further stencils and scenarios. If you have an idea or a stencil you want to share, please contact us!



Evaluation Guidelines

The robot must not be touched or tele-operated during the entire setup and scoring. Each run begins with the setup phase. The following guidelines are proposed for fair comparison and ranking of submissions.
To submit your score, please read the following carefully!

1. The robot shall be placed within the 2 x 2 m workspace in front of the shelf. Any starting position within the workspace can be chosen by the team. The tote into which the objects will be placed can also be positioned manually anywhere within that space, but it is not allowed to touch the shelf or be rigidly fixed to the ground.
2. The shelf is moved slightly (each corner's position can change within a 2 cm square). By moving the shelf, we hope to foster the development of more generic and robust perception solutions (and to limit scripted solutions). No part of the robot is allowed to be closer than 0.5 m to any part of the shelf during setup.
3. The stencils required for the task are placed in the shelf bins.
4. The objects are placed in the specified bins, aligned with the correct markers with the help of the stencils placed in the previous step. An image is provided with the task description to verify each object's rotation and placement.
5. After the objects are placed, the robot is switched to autonomous operation and the clock is started. As there is no pre-defined order, the system can choose in which order to pick.
6. A run is over when the work order is fulfilled, that is, all specified items from the work order are picked, or when the maximum time has elapsed (usually 15 minutes; a timing sketch follows this list). In addition, the run is stopped if human intervention is necessary (either through the e-stop or by sending commands).
7. The environment is then reset (go back to step 1). The results of 3 consecutive runs shall be reported.
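As an illustration of the run rules, the sketch below times a single autonomous run against the usual 15-minute limit; pick_next_item() is a placeholder for your robot's picking routine, not part of the benchmark.

```python
# A minimal sketch of timing one autonomous run under the rules above.
# pick_next_item() is a placeholder for the robot's own picking routine.
import time

MAX_RUN_TIME_S = 15 * 60  # usual time limit


def pick_next_item(remaining):
    # Placeholder: attempt one pick and return the picked item (or None on
    # failure). Here every attempt "succeeds" so the sketch terminates.
    return next(iter(remaining))


def run_once(work_order):
    start = time.monotonic()
    remaining = set(work_order)
    # The run ends when the work order is fulfilled or time runs out;
    # a human intervention (e-stop) would also end it.
    while remaining and time.monotonic() - start < MAX_RUN_TIME_S:
        item = pick_next_item(remaining)  # the system chooses its own order
        if item is not None:
            remaining.discard(item)
    return len(work_order) - len(remaining), time.monotonic() - start


picked, elapsed = run_once(["kleenex_tissue_box", "elmers_washable_no_run_school_glue"])
print(f"picked {picked} item(s) in {elapsed:.1f} s")
```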



Video Submission

A video showing the robot in operation is required to verify the robot's score. The recording allows the verification of results and the retrospective addition of scoring metrics. In addition, teams can provide links to system descriptions, publications, and code.

Task Descriptions

The following four benchmark tasks were proposed in the original APB paper [Leitner et al., 2016].
Each setup consists of a work order (and the state of the shelf) defined in a JSON file, a description of how to place the objects in the shelf with the help of stencils, and an image for reference (to check that your shelf is set up correctly). The robot should update the JSON file during execution so that it represents the correct state of the shelf after the robot finishes operation.
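For illustration, the sketch below reads a task's JSON file and writes the updated shelf state back; it assumes an APC-style layout with "bin_contents" and "work_order" keys, so treat the actual task files downloaded below as the reference.

```python
# A minimal sketch of reading a work order and writing the shelf state back.
# It assumes an APC-style JSON layout with "bin_contents" and "work_order"
# keys; treat the actual task files as the reference.
import json

with open("apb_task_1.json") as f:  # hypothetical file name
    task = json.load(f)

# e.g. {"bin": "bin_A", "item": "elmers_washable_no_run_school_glue"}
for order in task.get("work_order", []):
    print(f"pick {order['item']} from {order['bin']}")

# After a successful pick, mirror it in the shelf state, e.g.:
# task["bin_contents"]["bin_A"].remove("elmers_washable_no_run_school_glue")

# Save the file so it reflects the true shelf state when the robot finishes.
with open("apb_task_1.json", "w") as f:
    json.dump(task, f, indent=2)
```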

Task 1 (easy)

This ‘easy’ setup is meant as the entry-level task. Most state-of-the-art robotic systems should be capable of achieving a high, or even perfect, score on this setup (even though there were teams that scored rather low or did not finish during the APC 2016 competition). The easy level is also included to emphasise the second metric we use, the quickest pick.
Download work order (zip)
(JSON file, setup description, and reference image)

Task 2

The setups are ordered by complexity. Task 2 adds complexity by introducing deformable objects to the list of picks (cherokee_easy_tee_shirt from Bin C) and occlusions (platinum_pets_dog_bowl from Bin B and kleenex_tissue_box from Bin D). In Bin A, the elmers_washable_no_run_school_glue is to be picked.
Download work order (zip)
(JSON file, setup description, and reference image)

Task 3

TBA
Download work order (zip)
(JSON file, setup description, and reference image)

Task 4 (tough)

TBA
Download work order (zip)
(JSON file, setup description, and reference image)

To submit your score, just run one experiment! Follow the rules in the manual and send a video!
Interested? Download the paper and become part of our growing community!






Ranking

Task 1


Task 2


Task 3


Task 4



FAQs

How do I set up and run an experiment?

See the Evaluation Guidelines above: place the robot and the tote in the workspace, move the shelf slightly, place the stencils and objects, then start the autonomous run and report 3 consecutive runs.

How does the scoring work?

Report the success rate over 3 consecutive runs of your chosen task. As a secondary metric, we encourage reporting the quickest time-to-first-pick.

What is the requirement on the shelf movement between scored runs?

The shelf is required to be moved randomly, each corner within a 2 cm square. This leads to translation AND rotation, requiring the robot to re-localise itself with respect to the shelf.
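If you want to randomise the shelf pose systematically (for example in simulation), the sketch below rejection-samples a planar shift and rotation after which no corner leaves its 2 cm square; the square is interpreted as +/- 1 cm per axis around each corner's start position, and the shelf footprint dimensions are assumptions.

```python
# A minimal sketch of sampling a random shelf perturbation consistent with
# the rule above. The Kallax footprint dimensions are assumptions.
import numpy as np

W, D = 0.77, 0.39  # assumed shelf footprint in metres (width x depth)
corners = np.array([[0, 0], [W, 0], [W, D], [0, D]]) - [W / 2, D / 2]

rng = np.random.default_rng()
while True:
    dx, dy = rng.uniform(-0.01, 0.01, size=2)  # candidate translation (m)
    theta = rng.uniform(-0.02, 0.02)           # candidate rotation (rad)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    moved = corners @ rot.T + [dx, dy]
    if np.all(np.abs(moved - corners) <= 0.01):  # every corner in its square
        break

print(f"shift ({dx * 100:.1f}, {dy * 100:.1f}) cm, "
      f"rotation {np.degrees(theta):.2f} deg")
```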

Can I modify the shelf or the objects? Can I glue the tote to the floor?

No. The benchmark relies on standardised equipment, and the tote is not allowed to touch the shelf or be rigidly fixed to the ground (see the Evaluation Guidelines).

Why did you choose those objects?

The 42 objects were curated for global availability and to cover a range of perception and manipulation challenges: from items that are easy to detect and grasp (the tissue box) to small, reflective, transparent, or deformable items that are hard for both perception and manipulation.

How to get started?

This is simple :) go and buy an IKEA shelf, pick a specific task, and get (a subset of) the objects required. Then make your robot pick! If you have a Baxter and feel a bit overwhelmed, you can start from our baseline: download the baseline code; the hardware is described here!


A follow-up of our experience with the

Amazon Picking Challenge

Leipzig, Germany | July 3-4, 2016

Contact