#### *3.3.2 x86 PC*

The required image-processing software, namely OpenCV and TensorFlow, is installed and used to process the captured images; the language used here is Python. Similarly, the Arduino IDE is used to program the Arduino board, which drives the wheels and the robotic arm, as shown in **Figure 10**.

#### *3.3.3 Robotic base*

The robotic base consists of an acrylic sheet carrying two pairs of wheels, driven by DC geared motors. The base also carries a robotic arm.
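As a rough illustration of how such a two-motor base can be driven, the mapping from a drive command to the two DC geared motors can be sketched as follows; the command names and direction encoding are assumptions for the sketch, not the authors' firmware:

```python
# Hypothetical mapping from a drive command to the two DC geared motors
# on the base: 1 = forward, -1 = reverse, 0 = stop. The command names and
# the spin-in-place convention are illustrative assumptions.
def drive(command: str) -> tuple:
    """Return (left_motor, right_motor) directions for a drive command."""
    table = {
        "forward": (1, 1),
        "reverse": (-1, -1),
        "left": (-1, 1),   # spin left: left wheels reverse, right wheels forward
        "right": (1, -1),  # spin right: the mirror of "left"
        "stop": (0, 0),
    }
    return table[command]
```

In practice these direction pairs would be written to the motor-driver pins by the Arduino sketch rather than by Python, but the decision table is the same.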

#### *3.3.4 Robotic arm*

The arm consists of a gripper connected to an actuator that moves it up and down. Another actuator, connected to the first, moves it back and forth; both movements are driven by DC geared motors. The flow diagram in **Figure 10** shows the workflow of the system.
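The core of that workflow — detect an object's class, compare it with the selected mode, then pick up or ignore the object — can be sketched as plain Python decision logic. The class names and mode groupings below are illustrative assumptions, not the exact labels used by the system:

```python
# Sketch of the mode-based pick/ignore decision described in the text.
# Mode names follow the text (plastic, glass, metal, degradable,
# nondegradable); the class-to-mode groupings are assumptions.
MODE_CLASSES = {
    "plastic": {"plastic"},
    "glass": {"glass"},
    "metal": {"metal"},
    "degradable": {"paper", "food"},
    "nondegradable": {"plastic", "glass", "metal"},
}

def should_pick(detected_class: str, selected_mode: str) -> bool:
    """Return True if the detected class belongs to the selected mode."""
    return detected_class in MODE_CLASSES.get(selected_mode, set())

def act(detected_class: str, selected_mode: str) -> str:
    """Translate the decision into a command for the Arduino-driven arm."""
    return "PICK" if should_pick(detected_class, selected_mode) else "IGNORE"
```

In the full system the detected class would come from the object detector and the resulting command would be sent to the pre-programmed Arduino board, which actuates the gripper.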

**Figure 10.**
*Flow diagram of the system.*

Classification of objects is done using image processing and TensorFlow. A camera captures the image of the object present on the surface, which is then matched against the pre-trained dataset. Object detection is achieved using the YOLO object detector, and each detected object is assigned to its respective class. Several modes are made available, such as plastic, glass, metal, degradable, and nondegradable. Based on the selected mode, the object is either picked up or ignored. The object is picked up by a robotic arm controlled by the pre-programmed Arduino board. Once the desired object has been picked up in accordance with the selected mode, object classification is complete. The proposed method advances the existing method in the following ways:

• Some areas in the design of the robot in the existing method have limitations; these limitations may be overcome by improving upon the design.

• We have therefore improved the design of the robot by making it slightly smaller, adding a lightweight webcam, and making the wheel base strong and stable.

• We have also removed the bins mounted on top of the robot: the previous robot classifies between only two classes, whereas our robot classifies between more than three classes.

• The robot in the existing method relies on manual pickup of the classified waste, but we have added a robotic arm gripper that automatically picks up the classified object.

• The Recyclebot is controlled manually over a Zigbee-powered joystick application, whereas our robot is fully automated and uses an Arduino to achieve this.

• On the software side, we have made major improvements: we have used OpenCV and TensorFlow for processing the captured image.

• We have also used the YOLO algorithm for object detection and classification, which provides better accuracy than the previous method.

• Since the robot moves by itself, the amount of hardware and software is reduced; there is no longer any need for Zigbee or the joystick application.

• When it comes to recognizing objects, the Recyclebot recognizes only a small set of images, around 1000 to 2000.

• Our robot can recognize more than 5000 images, a larger image set than the previous one.

• The processing speed is also improved, and the accuracy of our robot is far better for an automated robot.

• Thus our robot is a clear improvement over its predecessor.

*Convolutional Neural Network Demystified for a Comprehensive Learning with Industrial… DOI: http://dx.doi.org/10.5772/intechopen.92091*

#### *3.3.5 Skin cancer classification using convolutional neural network*

Skin diseases are becoming among the most common health issues worldwide. In this paper we propose a method that detects four types of skin disease using computer vision [10, 11]. The proposed approach involves convolutional neural networks with a specific focus on skin disease. The convolutional neural network used in this paper has around 11 layers, namely convolution layers, pooling layers, activation layers, a fully connected layer, and a softmax classifier. Images from the DermNet database are used to validate the architecture. The database comprises all types of skin diseases, of which we have considered four: acne, keratosis, eczema herpeticum, and urticaria, with each class containing around 30–60 samples. The challenges in automating the process include the variation of skin tones, the location of the disease, the specifications of the image acquisition system, etc.

*Dynamic Data Assimilation - Beating the Uncertainties*
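As a minimal sketch of the arithmetic behind such a conv/pool/softmax stack — assuming 3×3 convolutions, 2×2 max pooling, and a 128×128 input, none of which are specified in the text — the spatial size at each stage and the final softmax stage can be traced in plain Python:

```python
# Illustrative CNN shape arithmetic: 3x3 convs, 2x2 max pooling, 128x128
# input are assumptions, not the paper's exact hyperparameters.
import math

def conv_out(n: int, k: int = 3, s: int = 1, p: int = 0) -> int:
    """Spatial size after a k x k convolution with stride s and padding p."""
    return (n + 2 * p - k) // s + 1

def pool_out(n: int, k: int = 2, s: int = 2) -> int:
    """Spatial size after k x k max pooling with stride s."""
    return (n - k) // s + 1

def softmax(logits):
    """Softmax classifier stage: map class logits to probabilities."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def trace_size(size: int, blocks: int = 3) -> int:
    """Trace the spatial size through repeated conv + pool blocks."""
    for _ in range(blocks):
        size = pool_out(conv_out(size))
    return size
```

Under these assumptions a 128×128 input shrinks to 14×14 after three conv+pool blocks; the resulting feature map is then flattened, passed through the fully connected layer, and scored by the four-way softmax classifier.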

**References**

[1] Sainath TN, Mohamed AR, Kingsbury B, Ramabhadran B. Deep convolutional neural networks for LVCSR. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE; 2013. pp. 8614-8618. Available from: http://www.cs.toronto.edu/~asamir/papers/icassp13_cnn.pdf

[2] Simard PY, Steinkraus D, Platt JC. Best practices for convolutional neural networks. In: International Conference on Document Analysis and Recognition (ICDAR). 2003. p. 958

[3] Lawrence S, Giles CL, Tsoi AC, Back AD. Face recognition: A convolutional neural-network approach. IEEE Transactions on Neural Networks. 1997;**8**(1):98-113

[4] Liu L, Shen C, van den Hengel A. The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. pp. 4749-4757

[5] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. 2012. pp. 1097-1105

[6] Graham B. Fractional max-pooling. 2014. arXiv preprint arXiv:1412.6071

[7] Anand R, Shanthi T, Sabeenian RS, Veni S. Real time noisy dataset implementation of optical character identification using CNN. International Journal of Intelligent Enterprise. 2020;**7**(1-3):67-80

[8] Anand R, Kalkeseetharaman PK, Naveen Kumar S. Automatic facial expressions and identification of different face reactions using convolutional neural network

[9] Anand R, Shanthi T, Nithish MS, Lakshman S. Face recognition and classification using GoogleNET architecture. In: Soft Computing for Problem Solving. Singapore: Springer; 2020. pp. 261-269

[10] Sabeenian RS, Paramasivam ME, Anand R, Dinesh PM. Palm-leaf manuscript character recognition and classification using convolutional neural networks. In: Computing and Network Sustainability. Singapore: Springer; 2019. pp. 397-404

[11] Shanthi T, Sabeenian RS, Anand R. Automatic diagnosis of skin diseases using convolution neural network. Microprocessors and Microsystems. 2020;**76**:103074
