**2. Related work**

This work falls in the area of hybrid program analysis; in this section, we summarize prior work in this area.

In 2006, Aggarwal and Jalote [10] combined static and dynamic analysis to detect buffer overflows in C programs. Both static and dynamic approaches have advantages and disadvantages. One disadvantage of dynamic analysis is that it requires a large number of test cases, which presents significant overhead. Some dynamic analysis tools use a feature known as generate-and-patch or generate-and-validate in an effort to auto-fix vulnerabilities. In 2015, the authors of [11] analyzed patches reported by several such tools, including GenProg, RSRepair, and AE, and found that the overwhelming majority of reported patches did not produce correct outputs. The authors attributed the poor performance of these tools to weak proxies (bad acceptance tests), poor search spaces that do not contain correct patches, and random genetic search over a space that lacks a smooth gradient for the search to traverse toward a solution [11].

In 2012, [12] proposed a hybrid approach that uses source code program slicing to reduce the size of C programs during analysis and test generation. The authors used a minimal slicing-induced cover and alarm dependencies to reduce the number of costly dynamic-analysis calls [13].
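To illustrate the idea behind slicing-based reduction, the following minimal Python sketch (our own toy illustration, not the Frama-C-based implementation of [12]) computes a backward slice over a straight-line program. Each statement is modeled as a variable definition plus the set of variables it reads; only statements that can influence the slicing criterion are kept, so the dynamic stage has less code to analyze.

```python
# Minimal backward-slicing sketch (illustrative only).
# Each statement is (target_var, set_of_vars_read).

def backward_slice(stmts, criterion):
    """Return indices of statements that can affect `criterion`."""
    relevant = {criterion}              # variables whose values matter
    kept = []
    for i in range(len(stmts) - 1, -1, -1):
        target, reads = stmts[i]
        if target in relevant:          # statement defines a relevant variable
            kept.append(i)
            relevant.discard(target)
            relevant |= reads           # its inputs become relevant
    return sorted(kept)

# x = input(); y = x + 1; z = 2; w = y * y  -- slicing on w drops z
program = [("x", set()), ("y", {"x"}), ("z", set()), ("w", {"y"})]
print(backward_slice(program, "w"))  # [0, 1, 3]; statement 2 is sliced away
```

Real slicers operate on control-flow and dependence graphs rather than flat statement lists, but the reduction principle is the same.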

<sup>1</sup> Google, Google Assistant, and Dialogflow are registered trademarks of Google, Inc. The use of these names or tools and their respective logos are for research purposes and does not connote endorsement of this research by Google, Inc. or any of its partners.

*Conversational Code Analysis: The Future of Secure Coding DOI: http://dx.doi.org/10.5772/intechopen.98362*

In 2014, [14] implemented a hybrid architecture as the JSA analysis tool, which is integrated into the IBM AppScan Standard Edition product. The authors augmented static analysis with (semi-)concrete information by applying partial evaluation to JavaScript functions according to dynamic data recorded by the Web crawler. The dynamic component rewrites the program per the enclosing HTML environment, and the static component then explores all possible behaviors of the partially evaluated program.

In 2015, [15] applied a program slicing technique similar to [12] to create a tool called *Flinder-SCA*. The authors also implemented their program on the *Frama-C* platform. The main difference between [12] and [15] is that [15] performs abstract interpretation and taint analysis via a fuzzing technique, whereas [12] performs neither taint analysis nor fuzzing.
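The taint-analysis idea can be sketched as follows; this is our own minimal Python illustration, not Flinder-SCA's implementation, and the source/sink names (`read_user_input`, `run_sql`) are hypothetical. Data from an untrusted source is marked tainted, taint propagates through assignments, and a report is raised when tainted data reaches a sensitive sink.

```python
# Toy forward taint propagation (illustrative only).
SOURCES = {"read_user_input"}   # hypothetical untrusted inputs
SINKS = {"run_sql"}             # hypothetical sensitive operations

def taint_flows(stmts):
    """stmts: list of (target_var, vars_read, called_fn or None).
    Returns indices of statements where tainted data reaches a sink."""
    tainted = set()
    flagged = []
    for i, (target, reads, call) in enumerate(stmts):
        if call in SOURCES:
            tainted.add(target)             # source introduces taint
        elif tainted & set(reads):
            tainted.add(target)             # taint propagates via data flow
        if call in SINKS and tainted & set(reads):
            flagged.append(i)               # tainted data reaches a sink
    return flagged

prog = [
    ("q", [], "read_user_input"),   # q = read_user_input()
    ("s", ["q"], None),             # s = "SELECT ..." + q
    ("_", ["s"], "run_sql"),        # run_sql(s)  <- flagged
]
print(taint_flows(prog))  # [2]
```

In [15], fuzzing supplies the concrete inputs that exercise such source-to-sink paths, rather than the hand-written program model used here.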

Also in 2015, [16] proposed a hybrid malicious code detection scheme built from an AutoEncoder and Deep Belief Networks (DBN). The AutoEncoder deep learning method was used to reduce the dimensionality of the data. The DBN was composed of multilayer Restricted Boltzmann Machines (RBMs) and a back-propagation (BP) neural network layer. The model was tested on the KDDCUP'99 dataset but not on actual program code.
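As a rough illustration of the dimensionality-reduction role the AutoEncoder plays in [16], the following self-contained Python sketch (our own toy example, not the authors' model) trains a tied-weight linear autoencoder that compresses 2-D points lying along a line down to a single number and reconstructs them, minimizing squared reconstruction error by gradient descent.

```python
import random

# Toy tied-weight linear autoencoder: encode h = w . x (2-D -> 1-D),
# decode x_hat = w * h (1-D -> 2-D), trained to minimize ||x - x_hat||^2.
random.seed(0)
data = [(t, t) for t in (-2, -1, -0.5, 0.5, 1, 2)]  # points on a line
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
lr = 0.01

def recon_error(w, data):
    err = 0.0
    for x in data:
        h = w[0] * x[0] + w[1] * x[1]       # encode
        err += (x[0] - w[0] * h) ** 2 + (x[1] - w[1] * h) ** 2
    return err

before = recon_error(w, data)
for _ in range(200):                        # plain gradient descent
    for x in data:
        h = w[0] * x[0] + w[1] * x[1]
        r = (x[0] - w[0] * h, x[1] - w[1] * h)          # residual
        dot = w[0] * r[0] + w[1] * r[1]
        for k in range(2):
            w[k] -= lr * (-2 * h * r[k] - 2 * x[k] * dot)
after = recon_error(w, data)
print(before, after)  # reconstruction error drops sharply after training
```

The scheme in [16] uses deep, nonlinear autoencoders over high-dimensional network-traffic features; the compressed representation then feeds the DBN classifier.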

In 2019, [17] proposed SapFix, a static and dynamic analysis tool that combines a mutation-based technique, augmented by patterns inferred from previous human fixes, with a reversion-as-last-resort strategy for fixing high-firing crashes. The tool is built upon Infer [18] and a localization infrastructure that helps developers review and fix errors rapidly. SapFix currently targets null pointer exception (NPE) crashes and has achieved considerable success at Facebook [18].

In a dissertation produced in 2021, [19] proposed a code generation technique for Synchronous Control Asynchronous Dataflow (SCAD) processors based on a hybrid control-flow/dataflow execution paradigm. The model is inspired by classical queue machines, which completely eliminate the use of registers. The author uses satisfiability (SAT) solvers to aid the code generation process [19].

To the best of our knowledge, our work is the first to employ modern virtual assistants to conversationally scan and fix vulnerabilities in program code. In [20], the authors established a voice user interface (VUI) for controlling laboratory devices and reading out specific device data. Their experiments produced benchmarks of the established infrastructure, showed a high mean accuracy (95% ± 3.62) of speech command recognition, and revealed high potential for future applications of a VUI within laboratories. In like manner, we propose the integration of personal assistants with code analysis systems to encourage programmers to produce more secure code.
