**3. Challenges affecting adoption of existing approaches**

Several code analysis and vulnerability detection surveys have categorized the tools in the literature [7, 21–23]. While surveys are essential to advancing research, many of them do not cover tools found on websites, and the average programmer does not look for tools in research papers. To that end, we conducted a Google search and found several popular websites that present various tools programmers may use to scan their code for vulnerabilities. **Figure 1** shows a bar chart of the number of tools found on these websites. As shown in the figure, GitHub and Wikipedia list the most tools and, owing to their popularity, are often the top websites returned in search results. We further grouped the most popular static analysis tools found on these websites by language, as shown in **Figure 2**. As can be seen, this non-exhaustive list could overwhelm many programmers trying to determine the best tools for their projects.

**Figure 1.** *The large number of code analysis tools found on popular websites.*

**Figure 2.** *Static analysis tools categorized by programming language.*

*Conversational Code Analysis: The Future of Secure Coding DOI: http://dx.doi.org/10.5772/intechopen.98362*

In addition, the ability to combine code analysis approaches, coupled with the number of programming languages in existence, results in a large number of tools from which coders can choose to analyze their code. This makes it onerous for a programmer or organization to decide on a particular code analysis tool. Further, tools often require special configuration, which may take time to fine-tune for best results. Many tools also suffer from usability issues, lengthy vulnerability reports, and false positives, leading programmers to avoid them altogether [24–26].

Another challenge affecting adoption of code analysis tools is monopolization of the market by certain companies. For-profit companies usually have the resources to improve tools by adding state-of-the-art capabilities such as cloud-based scanning, IAST support, and report generation. While these developments often advance the field of code analysis, they sometimes discourage small organizations and individuals from investing the effort and resources required to procure state-of-the-art tools. Thus, a streamlined, modern, cost-effective approach is needed to encourage programmers to produce more secure code.

### **4. The future of code analysis**

We believe that the future of code analysis lies in hybrid systems that combine several approaches to achieve useful analyses and actionable reports that will encourage programmers to produce more secure software. Based on current trends in machine learning, especially deep learning, and natural language processing (NLP) (e.g., virtual assistants), it is safe to say that future code analysis will rely heavily on AI, ontologies, NLP, and machine learning. For example, when discussing the trends and challenges of machine learning, the authors in [27] "envision a fruitful marriage between classic logical approaches (ontologies) with statistical approaches which may lead to context-adaptive systems (stochastic ontologies) that might work similar to the human brain".

Our projection is that code analysis frameworks will facilitate plug-and-play (PnP) models. **Figure 3** illustrates a generalized PnP model that uses virtual assistants to manage the analysis process. Using this model, programmers may select the code analyzer that best fits their project based on factors such as project type, project size, speed, efficiency, and security. This is similar to the current landscape of virtual assistants and recommender systems. Today, a person may use a virtual assistant such as the Google Assistant to navigate a list of restaurants based on price, location, menu, reviews, etc. The virtual assistant may update the user's preferences based on selections over time. The same concept can apply to code analysis, where the scanner chosen in the PnP model could be based on past scans or popularity.

#### **Figure 3.**

*A suggested model showing code analysis as part of a plug-and-play paradigm that facilitates the inclusion of any analysis tool and the use of a virtual assistant to manage the analysis process.*
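To make the PnP idea concrete, the sketch below shows one minimal way such a registry might work. All names here (`Analyzer`, `PnPRegistry`, the `speed` and `past_uses` attributes) are illustrative assumptions, not part of any real framework; selection by popularity with speed as a tiebreaker is just one plausible policy.

```python
from dataclasses import dataclass, field

# Hypothetical descriptor for a pluggable analyzer; the attribute names
# (languages, speed, past_uses) are illustrative assumptions.
@dataclass
class Analyzer:
    name: str
    languages: set
    speed: int          # relative speed score; higher is faster
    past_uses: int = 0  # simple popularity signal from prior scans

@dataclass
class PnPRegistry:
    analyzers: list = field(default_factory=list)

    def register(self, analyzer: Analyzer) -> None:
        # Any tool can be "plugged in" by registering its descriptor.
        self.analyzers.append(analyzer)

    def select(self, language: str) -> Analyzer:
        # Filter to analyzers supporting the project's language, then prefer
        # the most-used tool (popularity), breaking ties by speed.
        candidates = [a for a in self.analyzers if language in a.languages]
        if not candidates:
            raise LookupError(f"no analyzer registered for {language}")
        return max(candidates, key=lambda a: (a.past_uses, a.speed))

registry = PnPRegistry()
registry.register(Analyzer("scanner_a", {"python", "java"}, speed=3, past_uses=5))
registry.register(Analyzer("scanner_b", {"python"}, speed=9, past_uses=5))

print(registry.select("python").name)  # ties on popularity, so the faster tool wins
```

A virtual assistant could sit in front of `select`, incrementing `past_uses` after each scan so that the ranking adapts to the user's choices over time.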

The code analyzer featured in the model in **Figure 3** may use any combination of approaches, including SAST, DAST, and IAST, which could be cloud-based or localized to the user's computer. These approaches could be backed by any algorithm that yields significant performance gains. It has been shown in the literature that deep learning and other ensemble methods perform very well in a large number of contexts, including infected-host detection [28], intrusion detection systems [29, 30], and malware analysis [31, 32], to name a few. Interestingly, many of these approaches can be used to create or improve code analyzers in an effort to help programmers produce more secure software.
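One simple way to combine multiple approaches, sketched below under assumed inputs, is a majority-style ensemble over findings: keep only issues corroborated by at least two tools, which also trims false positives. The tool names and the `(file, line, rule)` finding format are hypothetical.

```python
from collections import Counter

def ensemble_findings(reports: dict, min_votes: int = 2) -> list:
    """Keep findings reported by at least `min_votes` distinct tools."""
    votes = Counter()
    for findings in reports.values():
        # Each finding is a (file, line, rule) tuple; dedupe per tool
        # so one tool cannot vote twice for the same issue.
        votes.update(set(findings))
    return sorted(f for f, n in votes.items() if n >= min_votes)

# Hypothetical reports from a SAST, DAST, and IAST scan of the same project.
reports = {
    "sast_tool": [("app.py", 42, "sql-injection"), ("app.py", 7, "hardcoded-secret")],
    "dast_tool": [("app.py", 42, "sql-injection")],
    "iast_tool": [("app.py", 7, "hardcoded-secret"), ("app.py", 42, "sql-injection")],
}

print(ensemble_findings(reports))
```

Here both findings survive because each is confirmed by two or more tools; raising `min_votes` trades recall for precision, mirroring the ensemble voting used in the detection systems cited above.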

Another feature of future code analyzers is a deep reliance on data analytics, visualizations, and state-of-the-art interfaces. As discussed in the literature [8, 33], the interface of a code analyzer can have a negative or positive impact on its use and adoption. Therefore, for a system to be adopted in any project or organization, users must be able to gain insights from the way it presents its results. **Figure 4** shows a mockup of what we believe the interfaces of future code analyzers will look like. These interfaces will take the form of dashboards rather than the customary lengthy bug reports displayed in a console.
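A dashboard of this kind is driven by aggregation rather than raw listings. The sketch below, using a hypothetical finding format, reduces a scan to the headline numbers a dashboard widget might chart: total findings, a severity breakdown, and the file with the most issues.

```python
from collections import Counter

def dashboard_summary(findings: list) -> dict:
    """Reduce raw findings to the summary figures a dashboard would chart."""
    by_severity = Counter(f["severity"] for f in findings)
    by_file = Counter(f["file"] for f in findings)
    return {
        "total": len(findings),
        "by_severity": dict(by_severity),
        "worst_file": by_file.most_common(1)[0][0] if findings else None,
    }

# Hypothetical findings from a single scan.
findings = [
    {"file": "auth.py", "severity": "high"},
    {"file": "auth.py", "severity": "medium"},
    {"file": "views.py", "severity": "low"},
]
print(dashboard_summary(findings))
```

Charting these aggregates lets a user spot the riskiest files at a glance instead of scrolling through a console report.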
