Robot Table Grape Harvesting

The Situation Motivates

The growing demand for high-value crops and the worsening labour shortage in the agriculture sector are driving the development of autonomous robots for harvesting high-value crops. Despite past and ongoing development of crop-harvesting robots, a commercially viable solution is not yet available, as existing solutions have ended up as non-scalable, single-purpose devices, i.e. a device that can be used only for picking a single variety of one fruit grown in a specific way.

Selective harvesting of high-value crops is increasingly considered a routine step in delivering standard quality, size and weight on supermarket shelves and in removing fruits with pests, mould or other defects from the packages. Although this may increase production cost, selective harvesting can maximise shelf life, assures the quality standard required by supermarkets, and reduces waste and post-harvest labour cost. However, it imposes a huge challenge on the agriculture sector in the EU, which already faces a labour shortage.

This project, part of the “VINUM” agri-food robotics project funded by the IIT-Unicatt Joint Lab, addresses this shortcoming and develops a scalable robotic harvesting system consisting of picking, perception, manipulation, simulation and mobility modules, each of which is reusable for other, non-harvesting purposes.

The Robot Works

The developed robot is equipped with a mobile platform, a Panda arm, a customised clipper, and various sensors for perception and navigation, building on our previous experience in developing mobile manipulation systems [1][2].

A simple but effective clipper has been developed and tested. It can harvest many kinds of fruits and vegetables by cutting the stem while preventing the object from dropping, building on our previous experience in customising end-effectors [3].

The Simulation Helps

Simulated environments enable faster and more efficient development of robotic technology. Despite the large initial effort required to develop a table grape harvesting simulation, the long-term return justifies it. For efficient development and deployment of the technology, a high-fidelity simulation environment that provides reliable interaction between the robot and the fruits and leaves has been developed, thanks to the “Learn-Real” project the APRIL lab is running. Advanced rendering and physics modelling from modern game engines, e.g. Unity3D, Unreal Engine, Nvidia Isaac, and Blender, together with robot modelling tools in ROS, have been used for this purpose. This allows the robot's perception and manipulation skills to be trained by easily varying the environment parameters, e.g. fruit colour, texture, wind, and light. Two simulated environments were created for two typical table grape vineyards.
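The randomisation of environment parameters can be sketched as below. This is a minimal illustrative example, not the project's actual simulation code: the parameter names, ranges, and the two layout labels are assumptions standing in for whatever the Unity3D/ROS pipeline actually exposes.

```python
import random

def sample_environment(rng: random.Random) -> dict:
    """Sample one randomised vineyard configuration for a training episode.

    All parameter names and ranges here are hypothetical placeholders for
    the colour, texture, wind and light variations mentioned in the text.
    """
    return {
        "grape_hue_shift": rng.uniform(-0.1, 0.1),     # fruit colour variation
        "leaf_texture_id": rng.randrange(8),           # texture variation
        "wind_speed_mps": rng.uniform(0.0, 4.0),       # wind disturbance
        "sun_elevation_deg": rng.uniform(10.0, 80.0),  # lighting condition
        "vineyard_layout": rng.choice(["layout_a", "layout_b"]),  # two vineyards
    }

# Each training episode gets a freshly sampled environment, so the
# perception network never sees the exact same vineyard twice.
rng = random.Random(0)
episodes = [sample_environment(rng) for _ in range(100)]
```

Sampling a new configuration per episode is the standard domain-randomisation recipe: the wider the parameter ranges covered in simulation, the more likely the trained perception transfers to the real vineyard.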

The Perception Guides

A deep neural network has been trained to detect the grape clusters and their stems, so that the clipper can be guided to the correct cutting point.
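One step such a detector implies is associating each detected cluster with the stem it hangs from, since the clipper cuts the stem, not the cluster. The sketch below shows one plausible pairing heuristic; the box format and the heuristic itself are assumptions for illustration, not the project's actual post-processing.

```python
# Boxes are (x1, y1, x2, y2) in image coordinates, where y grows downward.
# This pairing rule (stem centred above the cluster, closest wins) is a
# hypothetical example, not the trained network's actual output logic.

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pair_cluster_with_stem(cluster_box, stem_boxes):
    """Return the detected stem box nearest above the cluster, or None."""
    cx, cy = box_center(cluster_box)
    candidates = []
    for stem in stem_boxes:
        sx, sy = box_center(stem)
        if sy < cy:  # the stem must sit above the hanging cluster
            score = abs(sx - cx) + (cy - sy)  # horizontal offset + height gap
            candidates.append((score, stem))
    return min(candidates)[1] if candidates else None

cluster = (40, 60, 80, 120)
stems = [(55, 30, 65, 50), (150, 30, 160, 50), (55, 130, 65, 150)]
target = pair_cluster_with_stem(cluster, stems)  # picks the stem overhead
```

Feeding the paired stem box (rather than the cluster box) to the motion planner gives the clipper a cutting target while the cluster itself is held to prevent dropping.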

The Demonstration Proves

Harvesting table grapes is seasonal work. We train the robot's perception in the simulator by varying the environmental parameters, e.g. weather, light, and wind. We also built a mock-up environment with artificial grapes and leaves to test the robot system.

This system demonstrates the use of a mobile manipulator for table grape harvesting. However, much work remains to improve its effectiveness, for example reducing the flying time of the robot arm, either by adding an on-board mechanism to hold the cluster or by better motion planning of the arm at high velocity.


Dr. Fei Chen, Prof. Darwin Caldwell, Active Perception and Robot Interactive Learning Laboratory, ADVR, IIT

Dr. Claudio Semini, Dynamic Legged Systems, IIT

Prof. Stefano Poni, Prof. Matteo Gatti, Department of Sustainable Crop Production, UNIVERSITÀ CATTOLICA DEL SACRO CUORE, Italy

Note: Dr. Semini and Prof. Gatti are also coordinating the IIT-Unicatt Joint Lab Agri-Food Robotics Projects.


[1] Fei Chen, Mario Selvaggio, Darwin Caldwell, “Dexterous grasping by manipulability selection for mobile manipulator with visual guidance”, IEEE Transactions on Industrial Informatics 15 (2), 1202-1210, 2018.

[2] Mario Selvaggio, Gennaro Notomista, Fei Chen, Boyang Gao, Francesco Trapani, Darwin Caldwell, “Enhancing bilateral teleoperation using camera-based online virtual fixtures generation”, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1483-1488, 2016.

[3] Fei Chen, Boyang Gao, Mario Selvaggio, Zhijun Li, Darwin Caldwell, Keith Kershaw, Alessandro Masi, Mario Di Castro, Roberto Losito, “A framework of teleoperated and stereo vision guided mobile manipulation for industrial automation”, in IEEE International Conference on Mechatronics and Automation, 1641-1648, 2016.


fei.chen -at-