### **3. Method**

### **3.1 Building a deep learning algorithm for unsupervised robotics in the construction industry**

A deep learning algorithm for unsupervised robotics in the construction industry can be described as a particular instance of a fairly simple recipe: combine a specification of a dataset, a cost function, an optimization procedure, and a model. For example, the linear regression algorithm combines a dataset consisting of **x** and *y*, the cost function

$$J(\mathbf{w}, b) = -\mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{\text{data}}} \log p_{\text{model}}\left(y \mid \mathbf{x}\right), \tag{3}$$

the model specification *p*(*y*|**x**) = N(*y*; **x**⊤**w** + *b*, 1), and, in most cases, the optimization algorithm defined by solving for where the gradient of the cost is zero. By recognizing that these components can be replaced independently of the others, a broad range of algorithms can be obtained. The cost function typically includes at least one term that causes the learning process to perform statistical estimation. The most common cost function is the negative log-likelihood, so that minimizing the cost function causes maximum likelihood estimation. The cost function may also include additional terms, such as regularization terms. For example, we can add weight decay to the linear regression cost function to obtain
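As a concrete sketch of this step (the data and parameter values below are assumptions for illustration, not from the chapter): for the Gaussian model *p*(*y*|**x**) = N(*y*; **x**⊤**w** + *b*, 1), minimizing the negative log-likelihood of Eq. (3) reduces to least squares, so setting the gradient of the cost to zero gives a linear system (the normal equations) that can be solved directly.

```python
import numpy as np

# Synthetic dataset (assumed for illustration): y = x^T w + b + noise.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
y = X @ true_w + true_b + rng.normal(scale=0.1, size=n)

# Absorb the bias b by appending a column of ones to X.
Xb = np.hstack([X, np.ones((n, 1))])

# Setting the gradient of Eq. (3) to zero gives the normal equations:
# solve (X^T X) w = X^T y in closed form.
w_hat = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# The weight-decay term of Eq. (4) only shifts the linear system to
# (X^T X + lam * I) w = X^T y, so the problem stays closed form.
# (The bias entry is left unpenalized, as is conventional.)
lam = 0.1
I_pen = np.eye(d + 1)
I_pen[-1, -1] = 0.0
w_ridge = np.linalg.solve(Xb.T @ Xb + lam * I_pen, Xb.T @ y)
```

With enough data, `w_hat` recovers the generating weights and bias up to noise, and `w_ridge` is a slightly shrunken version of the same solution.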

$$J(\mathbf{w}, b) = \lambda \lVert \mathbf{w} \rVert_2^2 - \mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{\text{data}}} \log p_{\text{model}}\left(y \mid \mathbf{x}\right). \tag{4}$$

This still allows closed-form optimization. However, if we change the model to be nonlinear, then most cost functions can no longer be optimized in closed form. This requires us to choose an iterative numerical optimization procedure, such as gradient descent. The recipe for constructing a learning algorithm by combining models, costs, and optimization algorithms supports both supervised and unsupervised learning. The linear regression example shows how to support supervised learning. Unsupervised learning can be supported by defining a dataset that contains only *x* and providing an appropriate unsupervised cost and model. For example, we can obtain the first PCA vector by specifying that our loss function is

$$J(\mathbf{w}) = \mathbb{E}_{\mathbf{x} \sim \hat{p}_{\text{data}}} \lVert \mathbf{x} - r(\mathbf{x}; \mathbf{w}) \rVert_2^2, \tag{5}$$

while our model is defined to have **w** with unit norm and reconstruction function *r*(**x**; **w**) = (**w**⊤**x**)**w**. In some cases, the cost function may be a function that we cannot actually evaluate for computational reasons. In these cases, we can still approximately minimize it using iterative numerical optimization, so long as we have some way of approximating its gradients. Most machine learning algorithms make use of this recipe, though it may not be immediately obvious. If a machine learning algorithm seems especially unique or hand-designed, it can usually be understood as using a special-case optimizer. Some models, such as decision trees or k-means, require special-case optimizers because their cost functions have flat regions that make them inappropriate for minimization by gradient-based optimizers. Recognizing that most machine learning algorithms can be described using this recipe helps us to see the different algorithms as part of a taxonomy of methods for doing related tasks that work for similar reasons, rather than as a long list of algorithms with separate justifications.
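The PCA cost above can be sketched concretely (the data below are an assumption for illustration, not the chapter's experiment). With ‖**w**‖ = 1, the reconstruction error of Eq. (5) satisfies ‖**x** − (**w**⊤**x**)**w**‖² = ‖**x**‖² − (**w**⊤**x**)², so minimizing it is equivalent to maximizing the projected variance **w**⊤C**w** over the unit sphere, where C is the sample covariance. Power iteration on C is a simple special-case optimizer for this cost, and its answer can be cross-checked against an SVD.

```python
import numpy as np

# Synthetic 2-D data (assumed): most variance lies along one rotated axis.
rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])  # anisotropic spread
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
X = Z @ R.T
X = X - X.mean(axis=0)  # PCA assumes centered data

C = X.T @ X / len(X)  # sample covariance matrix

# Power iteration: repeatedly apply C, then project back onto the unit
# sphere; this maximizes w^T C w, i.e. minimizes the Eq. (5) cost.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w = C @ w
    w /= np.linalg.norm(w)

# Cross-check against the top right-singular vector of X (sign-invariant).
w_svd = np.linalg.svd(X, full_matrices=False)[2][0]
alignment = abs(w @ w_svd)  # close to 1.0 when both find the first PC
```

The flat regions mentioned above do not arise here, so a gradient-based or power-iteration optimizer converges quickly; for nonlinear reconstruction functions, the same cost would instead be minimized by iterative gradient descent.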

### **4. Discussion**

These feature learning approaches are one of the major strengths of modern deep learning methods. Because these algorithms can learn good features from data, they are much less sensitive to input representations than conventional learning algorithms such as support vector machines and Gaussian processes. Deep learning algorithms can learn good representations and solve problems even from basic representations such as raw pixels. This avoids the need to hand-design features, as other learning algorithms require, and saves significant engineering effort for many of the complex problems encountered in robotics, where features can be unintuitive and hard to design.

### **5. Conclusion**

Robotics is the branch of computer science that is now permeating the construction sector. Automating construction sites is crucial so that robots can take over tasks that are risky for workers in dangerous environments such as high altitudes, deep water, high-radiation zones, and inclement weather. Automation also helps avoid the disruptive effects of strikes, problems with administration and motivation, safety and health regulations, and a lack of skilled labor, while handling repetitive, dirty, and dangerous work and completing projects with quality control, on schedule, and economically. Although this technology is already helping the construction industry, much research is still needed in sensing and control, human factors, task flexibility, and the software support required to integrate robots into broader construction management. Our feature learning approach is one of the major strengths for achieving this. Because the algorithm can learn good features from data, it is much less sensitive to input representations than conventional learning algorithms such as support vector machines and Gaussian processes. Our deep learning algorithm can learn good representations and solve problems even from basic representations such as raw pixels, avoiding the need to hand-design features and saving significant engineering effort for many of the complex problems encountered in robotics, where features can be unintuitive and hard to design.

*Deep Neural Networks for Unsupervised Robotics in Building Constructions: A Priority Area… DOI: http://dx.doi.org/10.5772/intechopen.111466*
