**3.1 Robot with position-based visual feedback (RPBVF)**

RPBVF was developed to act on and observe the crop in NFT hydroponic systems. The focus is on the implementation of a position-based visual feedback (PBVF) algorithm in combination with a Microsoft Kinect. The AmHydro 612 NFT production unit measured 1.8 m × 3.65 m × 0.9 m, held 144 plants and 144 seedlings, and used a closed-loop water system. Artificial lights were placed above the NFT system to improve lettuce growth. The gullies lay on a table inclined at an angle θ, so that water flowed passively to the end of the gullies, where it was collected and directed to the water reservoir; from there a pump propelled the water back to the top of the gullies. To manipulate the plants, the robot (**Figure 24**) was designed as a gantry with four v-grooved wheels running on two inverted angle-iron tracks (x-axis). On top of the gantry was a carriage that moved back and forth across the gantry (y-axis), perpendicular to the x-axis. The carriage held a mechanism that moved an arm up and down (z-axis), down being the negative direction. At the end of the arm was a two-degree-of-freedom gripper that opened, closed and rotated around the y-axis.

The structure was made primarily from aluminum, which allowed the robot to be adjusted to accommodate NFT hydroponic systems of different sizes. The x-axis was driven by a stepper motor and a chain, and a timing belt transmitted power from a stepper motor to the carriage on the gantry. The arm on the carriage was balanced by a counterweight and driven by a stepper motor and a chain. Two linear actuators opened and closed the gripper, and a third rotated it around the y-axis. All three linear actuators were driven by a 12 V DC relay board that communicated with a Phidgets interface board, which was connected to the main computer running Ubuntu Server 11.04 x64. The Kinect vision system was mounted on the carriage so that its optical axis pointed along the negative z-axis. All software was programmed in C++, and every hardware component communicated through its own ROS (Robot Operating System) node. The main hardware nodes were the stepper motor node, gripper node, interface board node, position node and Kinect node. The position node kept track of the x, y and z-position of the robot, and a graphical user interface provided low-level control of the system.
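As a minimal sketch of what one of these hardware nodes might look like, the following roscpp node publishes the gantry position at a fixed rate. The topic name `gantry/position`, the message type and the update rate are illustrative assumptions, not details reported in the source.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

// Sketch of a "position node": it would accumulate the x, y and z-position
// of the robot (e.g., from stepper step counts) and publish it so the other
// nodes and the GUI can track the gantry.
int main(int argc, char** argv)
{
  ros::init(argc, argv, "position_node");
  ros::NodeHandle nh;
  // Topic name and message type are assumptions for illustration.
  ros::Publisher pub =
      nh.advertise<geometry_msgs::PointStamped>("gantry/position", 10);

  ros::Rate rate(10);  // 10 Hz update rate (assumed)
  geometry_msgs::PointStamped p;
  p.header.frame_id = "gantry";

  while (ros::ok())
  {
    p.header.stamp = ros::Time::now();
    // In the real node, p.point.x/y/z would be updated from the
    // stepper motor feedback before publishing.
    pub.publish(p);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```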

**Figure 24.** *Robot manipulator arm.*

A Microsoft Kinect camera was added to the system; it produced two kinds of images, a 640 × 480-pixel RGB color image and a 640 × 480-pixel 11-bit (0–2047) gray-scale depth image provided by an infrared (IR) sensor. Extracting the plants required combining classical 2D image analysis techniques with IR-based depth measurement to obtain the 3D position. The software was written in C++ using the Open Computer Vision (OpenCV) library. The Kinect was placed on the carriage facing downwards (negative z-axis) so that the plants in its field of view were at most 1.5 m away, because the accuracy of the Kinect decreases quadratically with distance. Up to 1.5 m, the accuracy of the Kinect was 10 mm and its precision 1 mm, and the field of view was 0.8 m × 1.15 m in the x and y-directions. The RPBVF algorithm was used to detect plants on the hydroponic system and to position the robot to manipulate them (**Figure 25**).
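As a hedged illustration, the two Kinect streams can be grabbed in C++ through OpenCV's OpenNI backend; this assumes an OpenCV build with OpenNI support, since the chapter states only that OpenCV and the OpenNI driver were used, not which capture API.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
  // Open the Kinect through OpenCV's OpenNI backend (assumed setup).
  cv::VideoCapture capture(CV_CAP_OPENNI);
  if (!capture.isOpened())
    return 1;

  cv::Mat depthMap, bgrImage;
  if (capture.grab())
  {
    // Note: this backend converts the raw 11-bit IR values and delivers
    // the depth map as CV_16UC1 in millimeters.
    capture.retrieve(depthMap, CV_CAP_OPENNI_DEPTH_MAP);
    capture.retrieve(bgrImage, CV_CAP_OPENNI_BGR_IMAGE);  // 640x480 color
  }
  return 0;
}
```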

In the algorithm, the gullies were detected first, because the plants are located only on them. All gullies are straight and oriented along the x-axis, so a probabilistic Hough transform was used for straight-line detection. By filtering the detected lines, the edges of the gullies were identified; the lines were then grouped, resulting in a segmentation of the gullies. After filtering, the coordinates of the plants in the image frame were known, with the origin of the image frame defined at the top left corner of the image. The depth information was extracted from the depth image by reading the value at each plant's pixel coordinates; the OpenNI driver transforms the raw IR sensor values into distances in meters using a fitting function. To reduce noise, multiple consecutive frames were averaged when calculating the plant coordinates.

The plant coordinates form the control input for the robot, and the output depends only on the current state and the control input; an open-loop control algorithm was used. To pick up a plant, the image frame coordinates had to be transformed into gantry coordinates, using a modified version of the transformation of Garstka and Peters. Because the Kinect is not located on the gripper, all coordinates have to be offset, and these offsets depend on the position of the Kinect relative to the gripper. The z-coordinate has to be offset by an extra value, because the NFT table is inclined at an angle of 2.2°. In this transformation, the principal point (in pixels) of the depth sensor and the focal lengths (in pixels) were quantified by calibrating the Kinect, and the position bias was removed by a linear scaling of the x and y-coordinates.
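A minimal sketch of the central steps, gully-edge detection with a probabilistic Hough transform, depth averaging, and the pixel-to-gantry transformation, is given below. The intrinsics, offsets and Hough parameters are placeholder values standing in for the Kinect calibration and robot geometry described above; the exact Garstka and Peters transformation is not reproduced here, so a standard pinhole model is used, and modeling the table correction as x·tan(2.2°) is an assumption.

```cpp
#include <cmath>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Placeholder calibration values -- the real numbers come from the
// Kinect calibration described in the text.
const double fx = 580.0, fy = 580.0;   // focal lengths [px] (assumed)
const double cx = 320.0, cy = 240.0;   // principal point [px] (assumed)
// Offset of the gripper relative to the Kinect on the carriage [m] (assumed).
const double offX = 0.10, offY = 0.05, offZ = 0.30;
const double tableAngle = 2.2 * M_PI / 180.0;  // NFT table inclination

// Step 1: detect candidate gully edges as straight lines. The gullies run
// along the x-axis, so the detected lines are filtered and grouped later.
std::vector<cv::Vec4i> detectGullyEdges(const cv::Mat& gray)
{
  cv::Mat edges;
  cv::Canny(gray, edges, 50, 150);            // thresholds are tuning values
  std::vector<cv::Vec4i> lines;
  // rho = 1 px, theta = 1 deg; threshold / min length / max gap assumed.
  cv::HoughLinesP(edges, lines, 1, CV_PI / 180.0, 80, 100, 10);
  return lines;
}

// Step 2: average the depth at a plant pixel over consecutive frames to
// reduce noise (the OpenNI driver already delivers meters).
double averageDepth(const std::vector<cv::Mat>& frames, int u, int v)
{
  double sum = 0.0;
  for (size_t i = 0; i < frames.size(); ++i)
    sum += frames[i].at<float>(v, u);
  return sum / frames.size();
}

// Step 3: transform pixel coordinates (u, v) with averaged depth z [m] into
// gantry coordinates via the pinhole model, then apply the Kinect-to-gripper
// offsets and the extra z-offset caused by the inclined table. Using the
// gantry x-position times tan(angle) for that correction is an assumption.
cv::Point3d pixelToGantry(double u, double v, double z, double gantryX)
{
  double X = (u - cx) * z / fx;
  double Y = (v - cy) * z / fy;
  double zTable = gantryX * std::tan(tableAngle);
  return cv::Point3d(X + offX, Y + offY, z + offZ + zTable);
}
```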


To evaluate the performance of the positioning and control algorithm, the x, y and z-position errors between the final position of the gripper and the plant coordinates were measured. The final position of the gripper was defined as 20 mm above the center of the cup, on top of which a cross-hair was drawn to mark the center; the initial position of the gripper was defined as the midpoint between the points of the gripper, so that the x, y and z-position errors could be measured. Each image was analyzed to detect the plants, and with these coordinates the robot moved to the plant. The position error was measured with a ruler at the final stopping position, and the robot then returned to the same starting position. The gripper must be within ±15 mm in the x-direction, ±20 mm in the y-direction and ±10 mm in the z-direction of the center of the cup for the robot to pick up the plant. From the images with detected plants, the gantry coordinates of the plants were calculated and used as input for the positioning algorithm so that the robot could be positioned to pick up the plants. In total, 25 samples were evaluated; the performance of the system was within the requirements, and the plants could be manipulated on the NFT system [22].
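Expressed in code, the pickup tolerance test reduces to a simple per-axis comparison (a sketch; the function name and the millimeter error inputs are illustrative):

```cpp
#include <cmath>

// Tolerances from the text: +/-15 mm (x), +/-20 mm (y), +/-10 mm (z)
// between the gripper and the center of the cup.
bool withinPickupTolerance(double exMm, double eyMm, double ezMm)
{
  return std::fabs(exMm) <= 15.0 &&
         std::fabs(eyMm) <= 20.0 &&
         std::fabs(ezMm) <= 10.0;
}
```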

**Figure 25.** *Hardware layout of the robot (a) and software layout of the robot (b).*

**4. Conclusions**

It is expected that future developers will be able to detect the acidity (pH) of the solution, its viscosity, oxygen levels and other variables. Future work will focus on collecting environmental data obtained from sensors and on implementing artificial intelligence in robots and in hydroponic systems.

It is also expected that future research will make hydroponic systems and robots able to share an information panel with other operating systems, so that they can be used as a standard system.

**Acknowledgements**

AILM was supported by PAICYT (CT696-19) and JMMJ was supported by PAICYT (CT571-18). We thank the Universidad Autónoma de Nuevo León, the Mexican Ministry of Education, and the Mexican Council for Science and Technology for their support.

