**3. Control of physical interaction**

During physical interaction between the robot's gripper and objects, tactile sensors provide contact force and pressure information. Forces acting on the hand can be observed by placing a force sensor in the robot's wrist [8], which provides continuous force feedback whenever forces are sensed at any part of the hand. Force feedback is a vital observation in physical interaction, as it allows object contact at the end effector and the motion of the hand to be detected simultaneously. **Figure 1** shows the generic control scheme for physical interaction using force and tactile sensors for feedback. For improved manipulation, vision-based sensors can be combined with tactile sensors.

### **3.1 Grasp and force control for object positioning and handling**

To impose desired dynamic behaviors on the robot within its workspace, its contact with objects can be controlled using an impedance control method coupled with active stiffness. The position of the gripper with respect to the contact force is governed by a configurable mechanical impedance in the workspace, which can be derived from a second-order transfer function equivalent to a mass-spring-damper system. **Figure 2** shows how the desired impedance can be used to modify the position by generating appropriate position control signals. The impedance transfer function generates a position correction from the difference between the reference force and the force feedback from the robot. The position controller uses the error computed from the reference position and the current Cartesian position of the robot to generate the control signal that drives the gripper actuators.
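As a sketch of this idea, the loop below integrates a single-axis mass-spring-damper impedance model to turn a force error into a position correction. The gains `M`, `B`, `K`, the time step, and the function names are illustrative assumptions, not values from this chapter.

```python
# Minimal 1-DOF impedance-control sketch (illustrative only). The desired
# impedance M*x_dd + B*x_d + K*x = f_err maps the force error into a
# position correction that would shift the reference of the inner
# position loop.

def impedance_update(x, x_dot, f_err, M=1.0, B=20.0, K=100.0, dt=0.001):
    """One Euler step of the mass-spring-damper impedance model."""
    x_ddot = (f_err - B * x_dot - K * x) / M
    x_dot += x_ddot * dt
    x += x_dot * dt
    return x, x_dot

# Example: a constant 5 N force error settles at x = f/K = 0.05 m.
x, x_dot = 0.0, 0.0
for _ in range(20000):
    x, x_dot = impedance_update(x, x_dot, f_err=5.0)
```

With these gains the model is critically damped, so a constant force error produces a smooth position offset of `f/K` without overshoot; stiffer contact behavior is obtained simply by raising `K`.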

**Figure 1.** *The general force control mechanism for physical interaction of the robots.*

*Physical Interaction and Control of Robotic Systems Using Hardware-in-the-Loop Simulation DOI: http://dx.doi.org/10.5772/intechopen.85251*

**Figure 2.** *General control approach of the position based on impedance.*

Interaction forces can be controlled indirectly, by controlling the contact forces explicitly, or by implementing hybrid techniques. Hybrid approaches combine force control and position control along complementary directions. **Figure 3** shows the generic hybrid position-force control method, including position and force error filters. A frame fixed to the task-handling part of the end effector can be specified using a selection matrix, in which a value of 1 represents an axis under force control and a value of 0 indicates an axis under position control. This approach allows a very natural implementation of control laws for physical interaction tasks. With precise knowledge of the frame and the environment, it ensures that no disturbance appears between the position and force directions [9]. If the physical interaction takes place in an unstructured environment, however, this kind of control becomes quite challenging. The approach uses a position error filter and a force error filter, which remove unwanted error components before driving the position and force controllers. The resulting hybrid signal from the control law drives the gripper actuator for task handling.
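A minimal sketch of the selection-matrix idea follows; the gains, the axis assignment, and all names are hypothetical, chosen only to make the mechanism concrete.

```python
import numpy as np

# Sketch of hybrid position-force control with a selection matrix
# (illustrative values): a 1 on the diagonal marks a force-controlled
# axis, a 0 marks a position-controlled axis.
S = np.diag([1, 0, 0])          # x: force control; y, z: position control

def hybrid_control(f_ref, f_meas, x_ref, x_meas, kf=0.01, kp=5.0):
    """Combine force and position errors on complementary axes."""
    u_force = kf * (f_ref - f_meas)      # force-error term
    u_pos = kp * (x_ref - x_meas)        # position-error term
    return S @ u_force + (np.eye(3) - S) @ u_pos

# Push with 5 N along x while holding a position offset in y and z.
u = hybrid_control(np.array([5.0, 0.0, 0.0]), np.zeros(3),
                   np.array([0.0, 0.1, 0.2]), np.zeros(3))
```

Because `S` and `I - S` project onto disjoint axes, the two control laws never fight each other along the same direction, which is the property the selection matrix exists to guarantee.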

**Figure 3.** *General approach of the hybrid position force control.*

Simultaneous control of position and force along the same direction can be achieved with hybrid external position-force control in physical interaction tasks. In this architecture, the inner position control loop is driven by an outer force control loop, as shown in **Figure 4**. A new reference position is computed from the force control signal, which is obtained from the error between the reference force and the feedback force; this force-error term is added to the existing position reference [10]. The architecture is therefore simple and can be implemented on top of the robot's existing position controller. The hybrid control law is ultimately based on the position control signal generated from the computed position error [11].
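The outer-loop computation can be sketched in a few lines; the gains and numeric values below are illustrative assumptions, not the chapter's parameters.

```python
# Hybrid external position-force control sketch (illustrative): the outer
# force loop shifts the existing position reference by a scaled force
# error, and the unmodified inner position loop does the rest.

def external_force_loop(x_ref, f_ref, f_meas, kf=0.002):
    """Outer loop: add the scaled force error to the position reference."""
    return x_ref + kf * (f_ref - f_meas)

def position_loop(x_ref_new, x_meas, kp=10.0):
    """Existing inner position controller (simple proportional law)."""
    return kp * (x_ref_new - x_meas)

# Contact force is 1 N but 4 N is desired: the reference moves forward.
x_new = external_force_loop(x_ref=0.10, f_ref=4.0, f_meas=1.0)
u = position_loop(x_new, x_meas=0.10)
```

The appeal of this scheme is visible in the code: `position_loop` is untouched, so the force behavior is retrofitted onto whatever position controller the robot already ships with.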

### **3.2 Visual control for object positioning and handling**

Vision-based sensors play a vital role during the physical interaction phase. When the robot is not in contact with the environment, no information is available from force and tactile sensors. In such circumstances, vision sensor data can be acquired to locate the target object in the environment and guide the robot's hand toward it in the workspace. For grasping objects in the workspace, pose estimation methods can provide an approximate position and orientation of the target object. The adopted algorithms run in polynomial time, with accuracy that varies with the chosen optimization technique [12]. Appearance-based pose estimation methods do not require knowledge of the object's 3D model. Model-based methods achieve better pose accuracy, but they usually need a suitable initial estimate in addition to the object model. The task programmer has to choose the most suitable approach depending on the available prior knowledge.
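As a much-simplified illustration of the model-based idea, the sketch below recovers the planar rotation and translation that best align known model points with observed points, using the SVD-based least-squares (Kabsch/Procrustes) solution. A real system would estimate a full 6-DOF pose from camera images; every name and number here is hypothetical.

```python
import numpy as np

def estimate_pose_2d(model_pts, observed_pts):
    """Least-squares rigid transform (R, t) mapping model to observation."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = oc - R @ mc
    return R, t

# Example: model points rotated by 90 degrees and shifted by (1, 2).
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])
obs = model @ R_true.T + np.array([1.0, 2.0])
R, t = estimate_pose_2d(model, obs)
```

Since the observations are an exact rigid transform of the model, the solver recovers `R_true` and the translation exactly; with noisy image measurements it returns the least-squares best fit instead.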

During the physical interaction phase, vision-based pose estimation algorithms can be used to investigate the effects of the interaction on the object. When the robotic system executes the task motion, vital measures can be detected, such as the gripper opening angle, the reachability of the object, and grasp failures. The chief impact of vision sensors during physical interaction tasks, however, is that they make it possible to track the motion of the frames fixed in the environment. This can be accomplished by direct observation of the hand and the object in the workspace, which avoids the need for force-based object tracking. Grasping of objects can be detected directly with the aid of vision sensors, by detecting a specific visual sequence, without formulating a predefined path for the arm and gripper. Sensor fusion of the visual signal with force feedback nevertheless remains mandatory, in order to manage unforeseen forces caused by vision sensor calibration and preprocessing errors.

**Figure 4.** *General approach of the hybrid external position and force control.*

**Figure 5.** *The general vision and force control scheme for the physical interaction.*

**Figure 5** outlines the main impact of vision sensors combined with tactile and force sensors for grasping objects during physical interaction. Together they can track the objects in the physical environment, enabling accurate reaching and alignment between the object and the robotic hand before and after physical contact. After contact, the alignment task needs the support of the force controller, in order to reliably handle and correct any residual misalignment between object and hand left by the vision sensors.
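One minimal way to sketch this combination is a switch from visual guidance to force regulation once contact is sensed. The threshold, gains, and names are invented for illustration; practical fusion schemes blend the two signals rather than hard-switching.

```python
# Vision/force switching sketch (illustrative): vision guides the reach
# while the hand is in free space; once the force sensor reports contact,
# the force controller takes over to regulate the contact force.

CONTACT_THRESHOLD = 0.5   # N, assumed contact-detection threshold

def select_command(x_obj_vision, x_hand, f_meas, f_ref=2.0,
                   kv=1.0, kf=0.01):
    """Velocity command: visual servoing pre-contact, force control after."""
    if abs(f_meas) < CONTACT_THRESHOLD:          # free motion
        return kv * (x_obj_vision - x_hand)      # move toward seen object
    return kf * (f_ref - f_meas)                 # regulate contact force

v_free = select_command(x_obj_vision=0.3, x_hand=0.1, f_meas=0.0)
v_contact = select_command(x_obj_vision=0.3, x_hand=0.3, f_meas=1.0)
```

Before contact the command is driven purely by the vision-estimated object position; after contact it depends only on the force error, which is exactly the hand-over described above.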

## **4. Hardware-in-the-loop simulation**

Certain physical systems have complex designs, are unsafe to test live, must operate under real-time constraints, and are expensive to prototype. Robots belong to this category of physical systems. For such robotic systems, the development and testing of embedded control systems are made feasible by the HIL simulation technique [1].

Since robotic system design requires multidisciplinary expertise, partitioning the design tasks into various subsystems simplifies their analysis and synthesis. Using real hardware modules in the loop of a real-time simulation enables detailed analysis of sensor noise and actuator limitations of robotic systems [2]. This can be accomplished with the HIL technique, where the control algorithms are implemented on the actual hardware, which controls a simulated model of the robotic system.

In the adopted HIL methodology, the complexity of the robots to be controlled is modeled by including all the related dynamics through equivalent mathematical representations of the systems under test and development. The embedded target runs the control algorithm for the joint actuators, and this algorithm interacts with the simulated model of the robot. **Figure 6** shows the HIL simulation setup of the robotic systems [3]. The control algorithms run on the embedded target board, which is connected via an input/output interface to a PC running the model of the robotic system. Certain modeling tools support simulation of the complex mechanisms in robotic systems.

**Figure 6.** *Hardware-in-the-loop simulation setup of the robotic systems.*
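The closed loop between an embedded control law and a simulated robot model can be sketched as follows. The first-order joint model, the gains, and the names are deliberate simplifications for illustration; in a real HIL setup the controller would run on the target board and exchange these signals over the I/O interface.

```python
# HIL-style loop sketch (illustrative): the "embedded" controller runs
# against a simulated single-joint robot model instead of real hardware.

class SimulatedJoint:
    """First-order model of one joint: velocity proportional to torque."""
    def __init__(self, dt=0.01):
        self.angle = 0.0
        self.dt = dt

    def step(self, torque):
        self.angle += torque * self.dt   # simplified joint dynamics
        return self.angle                # plays the role of a sensor reading

def controller(angle_ref, angle_meas, kp=2.0):
    """Control law that would run on the embedded target board."""
    return kp * (angle_ref - angle_meas)

joint, meas = SimulatedJoint(), 0.0
for _ in range(1000):                    # closed simulation loop
    torque = controller(angle_ref=1.0, angle_meas=meas)
    meas = joint.step(torque)
```

Swapping `SimulatedJoint` for a driver that talks to the real actuator is the only change needed to move from the HIL platform to the physical robot, which is precisely the development workflow HIL is meant to enable.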

The use of HIL simulation techniques increases safety in operating robots and enhances product quality, since performance can be analyzed on the HIL platform before switching to the actual physical robotic system. Another striking feature of HIL is that it saves considerable time and money.
