Among all of the teams, a total of 36 correct items were
picked, seven incorrect items were picked, and four items were
dropped. About half of the teams scored zero points, including
two that set up their robots but could not get them working well
enough to attempt the trials. Teams underperformed for a variety
of reasons. For example, Team
A.R. looked very promising in warm-ups, but the particular
product arrangement they drew for the trial had the glue bottle
alone in the lower left bin. Their system’s planner computed
a grasp plan that involved rotating the end-effector in such
a way that the vacuum hose wound around the arm. They
had not adequately modeled the hose behavior, and this one
product in this particular bin exposed a corner case they had
not seen during development and testing. Other teams failed
because of last-minute software changes, or because they had not
modeled the lip of the shelf, which left the gripper unable to find a
way into the bins. Lighting in the convention hall also proved
to be a problem for some teams. For example, the Duke team
resorted to taping an umbrella to the top of their robot to block
overhead light.
With so few products picked overall, it is perhaps too early
to draw meaningful conclusions. But we will offer up some