**4.2 Deployment models**

When assessing the suitability of a given cloud service, you will need to decide on the service model and deployment model you adopt. This is the essential first step towards determining whether the separation measures needed for your intended use are in place. For enterprises, cloud computing provides access to agile, robust and scalable solutions. These could be in the form of SaaS products, or IaaS products that allow enterprises to add or remove servers with ease as they are needed.


Different platforms suit different projects, so it is common for an organisation to choose to use more than one platform to get the benefits of each.

Having read and understood, to a certain degree, the impact of cloud computing on businesses and institutions, the remaining sections of this chapter are meant to contribute to a further understanding of the overarching lock-in parameters.

## **5. Understanding cloud lock-in**

In Refs. [14, 24], the author(s) have addressed several misconceptions regarding the cloud computing lock-in effect and its bearing on the widespread adoption of cloud services. Organisations must approach the cloud with the understanding that they may have to change providers in the future. It is advisable to undertake business continuity planning to help minimise the impact of a worst-case scenario. Various businesses will in the future suddenly find themselves with an urgent need to switch cloud providers for varying reasons. Companies seeking to adopt DevOps practices like continuous integration (CI) could face cloud lock-in due to the complexity of the required tools and the effort needed to integrate them into their workflows. Even those companies that have already transitioned to DevOps could encounter lock-in, as the environments and tools are changing fast and constantly [31]. In the study by Opara-Martins [16], the author believes vendor lock-in appears when software companies become dependent on the tools they are using; being unable to substitute them when needed is an inflexibility that is incompatible with DevOps. García-Grao and Carrera [32] concur that the DevOps paradigm is taking over software development systems, helping businesses increase efficiency, accelerate production and adapt quickly to market changes. A report by the UK Government (2019) states that there are generally two different types of lock-in. Many organisations have experience with commercial lock-in, where long and inflexible contracts with providers can prevent organisations from changing their technology strategy when circumstances change. The opposite is true for public cloud services, where providers frequently use rolling, pay-as-you-go agreements. Although technically speaking you are free to discontinue utilising their services at any moment, in practice this can be challenging, a situation referred to as technological lock-in.
The lack of comparable services from other providers, technical architecture that depends on doing things a certain way, excessive integration with provider-specific services or products, and a lack of technical architecture expertise within the organisation are the main causes of this.

The popularity and use of cloud computing have largely been driven by the reported benefits on firm performance [33]. In this chapter, the author confirms that cloud adoption decisions [16] are complex and therefore require creativity, seeing as managers are advised to consider mindfulness as a criterion for job selection [34]. While several initiatives have been taken to prevent vendor lock-in risks [16], federated access control policies and identity management are important features in designing and implementing inter-cloud security solutions [35]. In contrast to the aforementioned, Oulaaffart et al. [36] stipulate that stakeholders may be reluctant to share security-related information with each other. This shows that inter-cloud migration efforts have thus far faced several major challenges in terms of interoperability and security management.

In that context, moving resources across different cloud providers still frequently involves high prices, legal restrictions or even deliberate technological incompatibilities, which complicates effective management of cloud resources [16]. Yet the integration of these resources and the development of composite cloud services depend heavily on portability and interoperability qualities. The study by Opara-Martins [37], discussed by previous and recent researchers on the current state of the cloud adoption process and its effects on SMEs, indicates that SMEs are very much interested in the cloud because of the cost savings, flexibility and scalability of ICT that it provides. SMEs have taken advantage of technology to facilitate and improve business [38]. Recently, cloud computing and application environments have evolved from monolithic to microservice architectures and platform support [39]. Cloud lock-in, the difficulty a user faces in switching from one vendor to another, is regarded as one of the major concerns in the adoption of cloud by developers and SMEs [37]. Cloud computing offers good tools for organisations to conduct business efficiently.

Individuals and ICT organisations have begun to profit from cloud providers such as Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure and others based on their demand for IaaS, PaaS and SaaS resources with a pay-as-you-go pricing model. Cloud lock-in is now a well-known phenomenon [40, 41], even among big automation companies. The proper deployment of novel methods can greatly reduce vendor lock-in. Cloud provisioning also has a dark side: it too often prioritises economic gain over long-lasting sustainability [42]. Moreover, vendor lock-in is economically unsustainable for cloud users because it makes it difficult for them to react if a provider does not deliver the promised service, reduces their bargaining power and even puts their company assets at risk in the event of a data breach or cyberattack on the cloud provider's end [16]. To recall, the author reiterates here that cloud lock-in is characterised by a time-consuming procedure to migrate an application, data or service to another competing cloud, or to establish communication among distinct cloud entities. Several solutions have been proposed to overcome lock-in situations, and middleware platforms are one of them [43]. The main solution identified in the composition of services for supporting the lifecycle of digital products is less dependency on services, infrastructure, platform, programming language or third-party services [16].

From the dimension of services computing, the cloud provides techniques for the construction, operation and management of large-scale internet service systems. It represents the frontier development direction of software engineering and distributed computing [44]. Because cloud computing is built on many pre-existing technologies, understanding the complexities of cloud adoption requires considering various factors such as knowledge management, technology interoperability, business operations, system integration and ICT infrastructure updates [45]. Likewise, there is a body of research on the general factors that influence the adoption of cloud-based services [46, 47], but these factors do not specifically address complexity dimensions [16]. Standards are a critical topic in the field of cloud computing [48] as they allow customers to compare and evaluate cloud providers [49]. Proprietary technologies make cloud migration hard for end-users, and some providers note that standards to support interoperability between devices are needed [50]. As more applications make use of the cloud and more providers appear, vendor lock-in becomes an increasingly important factor.

The need for a common and interoperable standard is further augmented by the appearance of Fog computing [51]. Lack of understanding of cloud technology and lack of confidence in cloud security are the major risks of applying cloud solutions [52].

#### *Perspective Chapter: Cloud Lock-in Parameters – Service Adoption and Migration DOI: http://dx.doi.org/10.5772/intechopen.109601*

The quest for supremacy among major players enhances their unwillingness to settle for a universal standard, and they thus uphold their incompatible cloud standards and design configurations [53]. One of the main hurdles in the cloud adoption of data-intensive applications is the absence of mature data management solutions that address vendor lock-in [54]. The risk of vendor lock-in can occur in any public-private collaboration, yet ICT products trigger particularly strong lock-in effects, as a vendor can create a monopoly position by closing its technologies. Dependencies make the process of changing cloud providers, or even coordinating processes between different providers, a very difficult task. Cloud service providers may also offer non-compatible solutions with proprietary interfaces, complicating the cloud landscape [16].

The costs of migration, integration, interoperability and customisation are often attributed to a lack of skills in the effective implementation and management of a cloud solution. In Ref. [55], vendor lock-in is addressed by multi-cloud resource management (MCRM). To support MCRM and exhibit a suitable automation level, different cloud modelling languages (CMLs) have been identified in many research projects and prototypes. The adoption of cloud computing and its implementation depend upon a variety of technical and non-technical factors. Cloud computing and the services that cloud providers offer are expanding well beyond the bare minimum of computation, storage and networking. These larger capabilities include edge caches, workflow managers, function-as-a-service microservices, on-demand database services and a variety of additional capabilities located higher up in the system stack. A group of remote users may also share these features from several suppliers. At the software-as-a-service level, this may likewise be done for arbitrary, application-level services. When working with heterogeneous clouds, synchronisation of access, capabilities and resources is crucial. A multi-cloud strategy is possible when standard exchange mechanisms are accessible for services.

Interoperability and portability for data, systems and services are crucial factors facing consumers in cloud adoption. Consumers need confidence in moving their data and services across multiple cloud environments. A cloud system is a collection of network-accessible computing resources that customers (i.e. cloud consumers) can access over a network. The cloud system and its consumers employ the client-server model, which means that consumers (the clients) send messages over a network to server computers, which then perform work in response to the messages received. Cloud computing requires consumers to give up (to providers) two important capabilities: (1) control: the ability to decide, with high confidence, who and what is allowed to access consumer data and programs, and the ability to perform actions with high confidence that the actions have been taken and that no additional actions were taken that would subvert the consumer's intent; (2) visibility: the ability to monitor, with high confidence, the status of a consumer's data and programs and how consumer data and programs are being accessed by others. In order to guarantee proper security and privacy protection, new problems in cloud design, construction and operation must be overcome. The implementation of the required controls turns into a collaborative effort between providers and consumers.

The ownership of the computing resources within a cloud is determined by cloud business models. Cloud service offerings that rent traditional computing resources (such as VMs or disk storage, i.e. IaaS) are closely related to existing standards, and hence some usage scenarios illustrating portability can be expressed using existing standard terminology. Portability relies on standardised interfaces and data formats, while cloud computing relies on both consensus and *de facto* standards such as TCP/IP, XML, WSDL, IA-64, X.509, PEM, DNS, SSL/TLS, SOAP and REST. Moreover, most substantial applications use the Internet today regardless of whether cloud computing is employed. Therefore, the reader should not assume that by avoiding a cloud a user automatically avoids risks associated with Internet outages. Cloud systems have been conceptualised through a combination of software/hardware components and virtualisation technologies. Managing various sorts of access to the service components is necessary for the various service delivery models. These service delivery approaches may be seen as hierarchical. As a result, the same functional components in a higher service model can use the access-control guidance of functional components in a lower-level service model.

Cloud systems offer application services, data, storage, data management, networking and computing resource management to consumers over a network. Access control (AC) dictates which subjects (i.e. users and processes) can access which objects, based on defined AC policies, to protect sensitive data and critical computing objects in cloud systems. Cloud interoperability has emerged as a crucial business concern as businesses use cloud-based solutions at an increasing rate. ICT departments are aware of the need to be able to use cloud metadata to guarantee data protection and data portability, giving end users a safe way to remove their data from the cloud. This is especially useful in the event that you want to switch cloud service providers or if one of them goes out of business. ICT departments therefore expect providers to adhere to cloud data interoperation standards.
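
The import/export path that such interoperation standards enable can be sketched briefly. In this hedged example, JSON stands in for whatever neutral format a standard would prescribe, and the record layout and version field are hypothetical:

```python
import json

# Sketch of standardised export/import for data portability: assets are
# exported to a neutral, self-describing format that any target provider
# can re-import. The record layout and version field are hypothetical.

def export_assets(records):
    """Serialise assets into a provider-neutral interchange document."""
    return json.dumps({"version": 1, "records": records}, sort_keys=True)

def import_assets(blob):
    """Re-import a portable document, rejecting unknown format versions."""
    doc = json.loads(blob)
    assert doc["version"] == 1
    return doc["records"]

assets = [{"id": 1, "name": "invoice"}, {"id": 2, "name": "ledger"}]
blob = export_assets(assets)            # leave provider A
print(import_assets(blob) == assets)    # arrive intact at provider B -> True
```

The point of the version field is that both sides agree on the format up front, so the importing provider can refuse documents it cannot migrate rather than corrupt them silently.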

The academic literature pinpoints two issues as the most important determining factors in this respect, with security ranking first and vendor lock-in (specifically in the PaaS and SaaS context) second. The current business methods of cloud service providers obstruct innovation and a free and open market, which has an effect on how data is used throughout the economy. In particular, users are now prevented from migrating from one provider to another by porting their digital assets across, due to contractual, financial and technical barriers. Over the past 10 years, this vendor lock-in has grown significantly more severe. It is made worse by the current trend in which providers increasingly offer a variety of cloud services within an integrated cloud ecosystem, preventing customers from switching providers. Such ecosystems frequently devolve into 'data silos' that hinder the adoption of cutting-edge data-sharing tools and the market's openness for data processing. Achieving data portability will depend on the standardisation of data import and export functionality and on providers' adoption of 'data acts'. The next subsection describes some obstacles encountered during service migration.

#### **5.1 Cloud migration hurdles**

Advances in cloud computing have in recent years resulted in a growing interest in migration towards the cloud environment [16]. The transition to cloud computing frequently involves unforeseen, additional expenditure. While these costs are usually manageable and do not jeopardise the benefits of adopting the cloud, some activities may prove to be quite expensive, especially if they are not planned for in a timely manner. The frequent movement of data between the company and the cloud can also rack up costs, particularly in terms of bandwidth consumption where transfer times are lengthy. As things currently stand, lock-in is a perceived risk: although there is more flexibility available in the cloud, users can become dependent on the products and services of a particular provider, in which case switching from one technology or provider to another is difficult, time-consuming and disproportionately expensive. In the cloud, the benefits of lock-in frequently outweigh the drawbacks. Using a cloud provider, for instance, to handle your storage lifecycle may result in lock-in, but it will also require less work to manage your service. Consumers of cloud services should be able to unilaterally provision computing capabilities such as server time and network storage as needed, without requiring human interaction with service providers. 'Unless we implement new technologies while keeping an eye on fundamental goals and values, we will not be able to fulfil our consumers' increasing expectations, and we will not be ready for even more significant changes that are sure to occur as we deal with constantly increasing data quantities and a proliferation of devices and sensors', says [56].

When selecting cloud services, engineers must consider heterogeneous sets of criteria and complex dependencies between infrastructure services and software images. Cloud providers such as Amazon Web Services (AWS), Salesforce.com or Google App Engine (GAE) give users the option to deploy their applications over a network of near-infinite resource pools with low capital investment and very modest operating costs proportional to actual use. A migration strategy defines the migration procedure in terms of ordering and data transfer [57]. Five steps outline the migration of an organisation's web application to a cloud infrastructure service (IaaS), whereas the migration of a company's asset to a software application/applistructure or SaaS involves six holistic decision steps [16], and the steps of a migration to a Platform-as-a-Service (PaaS) offering differ in several regards [58, 59]. PaaS migration is the process of moving from one software operating and deployment environment to another. In order to develop (or adapt) software for cloud-based development and deployment, cloud-specific architecture and programming techniques need to be followed. Cloud migration can be categorised in terms of the cloud stack layers [60].

Cloud migration is the process of partially or completely deploying an organisation's digital assets, services, ICT resources or applications to the cloud. The cloud migration process may involve retaining some ICT infrastructure on-site. However, the migration process involves the risk of accidentally exposing sensitive business-critical information. Thus, cloud migration requires careful analysis, planning and execution to ensure the cloud solution's compatibility with organisational requirements, while maintaining the security and integrity of the organisation's ICT system. A cloud migration process involves many concept variants and several ways of instantiation. As with any software development project, migration projects should be planned carefully and follow a good methodology to guarantee successful execution. There is a need for live migration of virtual machines (VMs) at the IaaS layer, but the heterogeneity of the current cloud provider ecosystem hinders it [61]. In spite of this, with the combination of different paradigms, live migration can be conducted between edge servers, physical hosts in the local area network (LAN) and data centre sites through the wide area network (WAN). In the next subsection, security is presented as a bottleneck for service adoption.

#### **5.2 Security lock-in**

Cloud environments challenge many fundamental assumptions about application and data security. Cloud-based software applications require design rigour similar to applications residing in a classic DMZ. With cloud computing, application dependencies can be highly dynamic, even to the point where each dependency represents a discrete third-party service provider. The cloud security environment embodies shared security and joint responsibility, which produces a form of lock-in with cloud service providers [59]. However, this form of lock-in differs from using security and tamper-resistance to explicitly hinder users' ability to switch cloud service providers. This article's author has confirmed in the works of [16, 24] that such lock-in is anti-competitive and motivates consumers to adopt anti-lock-in solutions such as hybrid clouds, cloud management providers or brokers, and routine manual data exports [17]. In this respect, functional misalignment with business needs and technical limitations in areas including integration, security or extensibility are major inhibitors to data switchability from one vendor to another [14]. If cloud service providers and their customers are fully informed of human error as a major root cause of security risks encountered in the cloud, both parties can fully benefit from the advantages this model of computing offers [62].

Previously, various vendors have introduced different types of clouds with heterogeneous resources available in each, varying with respect to computation (CPU/GPU), memory and telecommunication network capabilities. Quite recently, cloud providers like IBM Cloud have offered multi-cloud capabilities to improve interoperability and facilitate data or computation portability between clouds [63]. The cloud industry has recently moved into a hybrid of cloud-edge computing, but this creates a whole set of new risks regarding interoperability and APIs, managing heterogeneous capacities, workload offloading, data integrity and privacy, storage decentralisation and application restructuring [64]. In the process of digital transformation, organisations respond to changes in the surrounding environment by exploiting digital technologies [65]. However, combining new Internet of Things (IoT) solutions and technology with legacy systems, aiming to exploit heterogeneous data from these different sources, can be challenging [66, 67]. Interoperability plays a prominent role in multi-vendor ICT platforms where various systems need to interact efficiently, making standardisation crucial for collaboration [68].

Cloud lock-in is lower when applications are developed in a platform-agnostic environment; for applications built on platforms such as Google App Engine (GAE) or Microsoft Azure, moving to a different platform incurs substantial cost. Moreover, both PaaS and SaaS platforms create network effects that enable the growth of users across both supply and demand sides. But at the SaaS layer, the hurdle of data integration requires a combination of technical and business processes to combine data from disparate sources into meaningful, valuable and reusable information [69]. Respectively, service developers making use of cloud services can use SaaS providers and APIs as building blocks to develop composite services by integrating data and composing functionality provided by different SaaS resources [70]. Security, usability and vendor characteristics are the three main areas of lock-in risk in the cloud computing environment when adopting enterprise-class software like cloud ERP systems. Effectively integrating cloud ERP into existing cloud computing infrastructure will allow suppliers to determine organisations' and business owners' expectations and implement appropriate tactics [71]. Orchestration of cloud services is important for companies and institutions that need to design complex cloud-native applications or migrate their existing services to the cloud. Tools such as Chef, Ansible and Puppet provide an infrastructure as code (IaC) language to automate the installation and configuration of cloud applications. Clarity about security tasks and responsibilities is a crucial consideration in the procurement process. In this regard, it should be stressed that the responsibility for security cannot be outsourced [72].
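
The declarative, idempotent style that IaC tools such as Chef, Ansible and Puppet share can be illustrated with a minimal sketch. The resource names and the in-memory "infrastructure" dictionary below are hypothetical stand-ins for real provider APIs:

```python
# Minimal sketch of the declarative, idempotent style used by IaC tools:
# you describe the desired state, and the engine converges actual state
# towards it. Resource names and state dicts are hypothetical.

desired_state = {
    "web-server": {"package": "nginx", "running": True},
    "db-server": {"package": "postgresql", "running": True},
}

def apply(desired, actual):
    """Converge actual state towards desired state, reporting each change."""
    changes = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actual[name] = dict(spec)   # idempotent: re-running is a no-op
            changes.append(name)
    return changes

infrastructure = {"web-server": {"package": "nginx", "running": False}}
print(apply(desired_state, infrastructure))  # first run converges both hosts
print(apply(desired_state, infrastructure))  # second run changes nothing: []
```

Idempotence is what distinguishes this style from imperative scripting: applying the same description twice leaves the system untouched the second time, which is also why IaC descriptions can safely live under version control and be re-applied on every release.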

Adoption of containerisation and serverless technologies is the most trending microservices research area for practitioners focused on cloud-related domains. Solutions that provide cloud computing with end-to-end security reduce vendor lock-in risks as they relate to stored data. The information must be protected in cloud storage and transmission to reduce this risk, so that only the data provider and the final consumer can access or modify it [73]. Regarding stored data, cloud service providers integrate cryptographic mechanisms based on encryption protocols such as the advanced encryption standard (AES) or Rivest-Shamir-Adleman (RSA). To a certain degree, cloud service providers practise security-induced lock-in when employing cryptography and tamper-resistance to limit the portability and interoperability of users' data and applications, says Satzger et al. [17]. This security-induced lock-in and users' anti-lock-in strategies intersect within the context of platform competition. Cloud service providers therefore favour security-induced lock-in over price leadership. The continued advancement of computing and digital technologies is transforming markets, economies and society. Migration to the cloud is strongly affecting corporate ICT strategies.
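
The end-to-end protection of stored data described above can be partially illustrated in a short sketch. AES and RSA are not available in the Python standard library, so this hedged example shows only the integrity half of the problem using an HMAC; the key and payload are hypothetical:

```python
import hashlib
import hmac

# Sketch of integrity protection for data at rest or in transit: only a
# holder of the shared key can produce a valid tag, so tampering by an
# intermediary (or a compromised provider) is detectable. Real deployments
# would add encryption (e.g. AES) on top; the key and payload here are
# hypothetical.

key = b"consumer-held-secret"       # held by data provider and consumer only
payload = b"business-critical record"

tag = hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key, payload, tag):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, payload, tag))             # True: record is intact
print(verify(key, b"tampered record", tag))  # False: modification detected
```

Because the key stays with the data provider and consumer, the cloud provider can store and transmit the record but cannot silently modify it, which is the property the paragraph above asks for.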

The security benefits of moving to a cloud-based system are often overlooked. A good cloud service will mitigate some existing risks and bring new benefits as well. The cloud provider is the vendor and operator of the cloud services. Cloud services vary substantially in size, from an entire e-business suite to a single component within a software development ecosystem (such as a storage or cryptographic key management service). Because cloud infrastructure is typically managed through APIs, it is simple to use a text template (i.e. IaC) that describes the desired configuration and contacts the APIs to make the necessary modifications. As a result, IaC enables you to track your configuration in text documents where changes can be examined and analysis can be carried out automatically. The use of automated processes to enforce security or policy requirements is known as guardrails and is an emerging technique in the cloud. A guardrail could stipulate that only a select group of authorised operating system images may be used for computing services, or that all bulk data storage encrypts data at rest. Cloud services have been designed with an array of security benefits for your organisation, and it is worth taking the time to find out what is available (and how to apply it to your specific needs); the more you understand the cloud services available, the bigger the benefits will be. When selecting a cloud service, make sure that it meets your needs and helps you to secure your data. The process of digital transformation involves adopting technologies that enhance operational and customer experiences. Evaluating cloud and business risk together provides a better understanding of their impact on an enterprise's overall risk maturity, including adopting a shared-fate partnership between cloud service provider and customers [74].
This chapter affirms that an organisation's best path to viable risk management involves ICT modernisation into the cloud or cloud-like on-premise infrastructure. The CSA [74] report further confirms that there is no consistency of data classification across the use of cloud platforms and services. Tripathi and Mishra [75] note that the cloud is becoming less of a risk to manage and more of a means to manage these risks and modernisation. The approach helps both businesses and providers to improve their cloud adoption. The next subsection presents DevSecOps as a philosophy to combat security lock-in issues in the cloud environment.
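
The guardrail technique described in this subsection can be sketched as an automated policy check run against an IaC-style resource description before it is applied; the resource schema and the approved-image list below are hypothetical:

```python
# Hedged sketch of a "guardrail": an automated policy check over an
# IaC-style description of cloud resources, run before the configuration
# is applied. The schema and approved-image list are hypothetical.

APPROVED_IMAGES = {"hardened-linux-v2", "hardened-linux-v3"}

def check_guardrails(resources):
    """Return a list of policy violations found in the declared resources."""
    violations = []
    for res in resources:
        if res["type"] == "compute" and res["image"] not in APPROVED_IMAGES:
            violations.append(f"{res['name']}: unapproved image {res['image']}")
        if res["type"] == "storage" and not res.get("encrypt_at_rest", False):
            violations.append(f"{res['name']}: bulk storage must encrypt at rest")
    return violations

declared = [
    {"name": "app-vm", "type": "compute", "image": "hardened-linux-v3"},
    {"name": "logs", "type": "storage", "encrypt_at_rest": False},
]
print(check_guardrails(declared))
# ['logs: bulk storage must encrypt at rest']
```

Because the configuration lives in text documents, such checks can run automatically on every proposed change, enforcing the two example policies named above (authorised images only, encryption at rest) without manual review.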

#### **5.3 DevSecOps mitigates lock-in**

A number of additional problems regarding the tools and services needed to develop and maintain running applications are brought on by cloud computing. These consist of program administration utilities, coupling to external services, development and testing tools, libraries and operating system dependencies, some of which may come from cloud providers. It takes a lot of work to design, integrate and deliver software in the modern software engineering process. Continuous integration (CI) is the process of automatically adding new code from several developers to the same version of the software while simultaneously checking it for bugs. Continuous delivery (CD) differs from traditional software deployment in the frequency with which new software reaches production: deployments can happen multiple times every day. DevSecOps is a software engineering culture that breaks down silos and unifies software development, security and operations. IaC evolved to solve a real-world problem referred to as environmental drift in the release pipeline. It is important to consider vendor lock-in versus product lock-in when selecting technology or IaC formats.
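
As a rough illustration of the fail-fast behaviour of a CI pipeline, the toy sketch below runs hypothetical build and test stages against a commit and blocks on the first failure; the stage names and checks are invented for illustration:

```python
# Toy sketch of a CI pipeline: each commit runs an ordered set of stages
# and fails fast, mirroring the build/test gate of CI servers. The stage
# names and the string-based "checks" are hypothetical.

def stage_build(commit):
    return "syntax-error" not in commit      # stands in for compilation

def stage_test(commit):
    return "failing-test" not in commit      # stands in for the unit tests

PIPELINE = [("build", stage_build), ("test", stage_test)]

def run_ci(commit):
    """Run every stage in order; stop and report at the first failure."""
    for name, stage in PIPELINE:
        if not stage(commit):
            return f"{name}: failed"         # fail fast, block the merge
    return "passed"

print(run_ci("feature-x"))                   # passed
print(run_ci("feature-y failing-test"))      # test: failed
```

The fail-fast ordering is what lets CI check every integration automatically: a commit that breaks the build never reaches the test stage, and a commit that breaks a test never reaches the shared version of the software.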

While the debate on cloud lock-in centres on heavy reliance on a single cloud provider, or the inability to use services of multiple vendors, closed proprietary software or systems purposely encourage technology lock-in, ensuring long-term customers and revenues while discouraging innovation. Abu-Libdeh et al. [76] strongly note that going all-in with a single cloud provider may allow organisations to simplify things and become more agile, potentially achieving better quality, as single-vendor solutions are often better integrated. Containerisation can help isolate software from its environment so that no application is platform-specific, while DevOps helps to maximise code portability and makes it easier to deploy to different environments. Applications built for the cloud have developed into a standardised architecture made up of many small, loosely linked parts known as microservices (implemented as containers), supported by a service mesh that connects them. A container orchestration and resource management platform such as Kubernetes is home to both of these components and is referred to as a reference platform. DevSecOps has been found to be a facilitating paradigm for these applications, with primitives such as continuous integration, continuous delivery and continuous deployment (CI/CD) pipelines providing continuous authority to operate (C-ATO) using risk-management tools and dashboard metrics [77]. DevSecOps puts security at the forefront of requirements to avoid the costly mistakes that come from treating security as an afterthought. Traditional security has been about exclusion, using the security policy to prevent people from disclosing secrets. DevSecOps is about inclusion and working as a team. Successful implementation of DevSecOps happens when the security team provides knowledge and tools and the DevOps team runs them.
There is, however, no reason for a security team to run tooling as a completely out-of-band management process. DevSecOps can thus be seen as a methodology or framework for agile application development, deployment and operations for cloud-native applications: DevOps uses a forward process with a delivery pipeline and a reverse process with a feedback loop, which together form a recursive workflow. The role of automation in these activities is to improve this workflow *via* tools for automation (e.g. Ansible [78] and Terraform [79]), the DevOps stack (e.g. Maven [80] and Jenkins [81]) and programming languages like Kotlin [82].

For example, Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information, and can share jars across several projects. In other words, it can be used to build and manage any Java-based project, and one of Maven's goals is to allow transparent migration to new features, as it has become the *de facto* build system for Java applications.

#### *Perspective Chapter: Cloud Lock-in Parameters – Service Adoption and Migration DOI: http://dx.doi.org/10.5772/intechopen.109601*

When migrating an application to the cloud and selecting a cloud service provider, cost weighs heavily on the mind of every ICT manager in the automation industry. As a result, an open-source system such as Jenkins, which can integrate with any cloud provider, is a pragmatic choice. Selecting tools on their strengths rather than on vendor merit helps avoid using one cloud provider for everything (which often leads to a single point of failure). Designing a solution using well-known patterns decouples its functional characteristics from the underlying cloud implementation, making it easier to avoid lock-in or go multi-cloud. By adopting standardisation, automation, cross-platform programming languages and containerisation, organisations remain flexible and adaptable. In the next subsection, SDN is presented as a means to surpass some of the incumbent networking challenges caused by technical lock-in parameters.
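The decoupling pattern mentioned above can be made concrete with a small sketch. The interface, class and method names below are invented for illustration; real provider SDKs (e.g. boto3 or google-cloud-storage) have different APIs, which is exactly why the application should depend only on its own neutral contract.

```python
# Hypothetical sketch of the decoupling pattern described above: the
# application depends on a provider-neutral interface, and each cloud
# provider is an interchangeable adapter behind it.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral contract the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    # Stand-in adapter; a real deployment would wrap S3, GCS, etc.
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application logic sees only the interface, so swapping providers
    # (going multi-cloud, or exiting one vendor) touches only the adapter.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q1.csv", b"revenue,10")
print(store.get("reports/q1.csv"))   # -> b'revenue,10'
```

Replacing `InMemoryStore` with an adapter for another provider leaves `archive_report` untouched, which is the lock-in-avoidance property the pattern buys.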

#### **5.4 Software-defined networks**

A more open, standards-driven approach to networking is necessary for the cloud and digital transformation era, as opposed to proprietary network architectures and application-specific integrated circuits (ASICs). Software-defined networking (SDN), built on the OpenFlow protocol, enables an organisation to virtualise its network, automate operations, enable efficient network configuration and integrate network functions across dozens of switches, creating a unified network architecture [83] that is programmable and dynamically definable. SDN, as an emerging paradigm, is set to logically centralise the network control plane and automate the configuration of individual network elements.

In cloud data centres, however, network and server resources are collocated and managed by a single administrative entity, yet disjoint control mechanisms are used for their respective management. While unified server-network resource management is ideal for such a converged ICT environment, machine virtualisation can have a negative effect on cloud systems, resulting in drastic changes in performance and cost that mostly relate to networking constraints rather than software limitations. For example, network congestion caused by consolidation itself, particularly at the core levels of data centre topologies, has a substantial impact on the infrastructure as a whole and becomes the primary bottleneck, impeding effective resource utilisation and, as a result, the provider's income. SDN runs on the principle of centralising control-plane intelligence while maintaining the separation of the data plane, in order to allow open, user-controlled administration of the forwarding hardware of a network component. The switching fabric (data plane) is retained by the network hardware devices, while the controller receives the intelligence (switching and routing functionalities).
Because the entire network is under centralised control, the administrator can configure the hardware directly from the controller, which gives the network a high degree of flexibility. SDN is monitored and implemented using a variety of tools and languages. Some SDN efforts have focused on Onix, a developing platform for deploying SDN controllers as a distributed system for flexible network management. Other studies have introduced Veriflow, a network debugging tool capable of finding flaws in SDN application rules and preventing them from impairing network performance. Additional initiatives created the routing architecture RouteFlow, which is based on SDN concepts and allows the performance of commercial hardware to interoperate with adaptable open-source routing stacks, making it possible to move from traditional IP deployments to SDN.
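The control/data-plane split described above can be illustrated with a toy model. This is not OpenFlow and no real controller API is used; switch names, rule formats and method names are all invented for the sketch.

```python
# Toy model (not OpenFlow) of the SDN principle described above: a
# centralised controller holds the topology view and installs
# match-action rules; switches only forward according to installed rules.

class Switch:
    """Data plane: forwards packets using rules installed by the controller."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.flow_table: dict[str, str] = {}  # destination -> next hop

    def forward(self, dst: str) -> str:
        # No local intelligence: unknown destinations are simply dropped.
        return self.flow_table.get(dst, "DROP")

class Controller:
    """Control plane: centralised rule computation and installation."""
    def __init__(self) -> None:
        self.switches: dict[str, Switch] = {}

    def register(self, sw: Switch) -> None:
        self.switches[sw.name] = sw

    def install_route(self, path: list[str], dst: str) -> None:
        # Push one forwarding rule per hop along the computed path.
        for here, nxt in zip(path, path[1:]):
            self.switches[here].flow_table[dst] = nxt

ctl = Controller()
s1, s2, s3 = Switch("s1"), Switch("s2"), Switch("s3")
for sw in (s1, s2, s3):
    ctl.register(sw)

ctl.install_route(["s1", "s2", "s3"], dst="10.0.0.3")
print(s1.forward("10.0.0.3"))   # -> s2
print(s1.forward("10.0.0.9"))   # -> DROP (no rule installed)
```

The switches retain only the switching fabric (the flow table), while all routing intelligence lives in the controller, mirroring the separation the text describes.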

By separating the control plane and forwarding plane, SDN provides centralised topology discovery and network management, which enables resource contentions to be managed at a finer granularity [84]. This gives academia and industry more options in a variety of network virtualisation-related areas, including novel LAN and WAN networking protocols, optimised virtualised data planes, traffic and flow management, and software function chaining *via* virtualised network functions. As a result, OpenFlow and the OF-CONFIG management and configuration protocol are accepted as the *de facto* standard SDN communication and control protocols. With SDN, policies, configuration and network resource management can be implemented quickly, and a single control protocol may handle a variety of tasks such as access control, routing and traffic engineering. The majority of open-source SDN controllers (Ryu, POX, Floodlight, OpenDaylight) expose APIs to manage firewalls, configure network components and obtain traffic counters, among other things. They have also been widely employed for other network-related applications, including QoS management, participatory networking, new management interfaces and complete network migration.

Minimising vendor lock-in has become important due to the degree and pace of network transformation required to keep up with business modernisation. To reduce hardware manufacturer lock-in, the network must be made programmable, and control and other functionality must be abstracted using a software-driven strategy. It is crucial for ICT experts to choose the appropriate network operating system when utilising such a strategy, in order to maximise cost-effectiveness and prevent problems with system integration and network availability [85]. The right approach to avoiding vendor lock-in is to counteract it strategically from the outset: instead of relying on one vendor, focus on several different ones.
Internal systems should be built with the goal that subcomponents can be replaced later. Where a technology or vendor seems a riskier choice in terms of vendor lock-in, an exit strategy should be defined: obtain cloud services from several providers rather than a single one, avoid proprietary solutions, APIs and formats, and reduce the cost to switch. The next subsection highlights the need for effective strategies to mitigate the concerns of vendor lock-in.
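The "avoid proprietary formats" element of such an exit strategy can be sketched briefly. The record fields and version scheme below are invented for illustration; the point is only that state exported as plain JSON can be re-ingested by any future provider, keeping switching costs low.

```python
# Illustrative sketch of the exit-strategy advice above: export state
# in an open, provider-neutral format (JSON) rather than a
# vendor-specific snapshot format. Field names are hypothetical.
import json

def export_inventory(records: list[dict]) -> str:
    """Serialise to plain JSON so any future provider can ingest it."""
    return json.dumps({"version": 1, "items": records}, sort_keys=True)

def import_inventory(blob: str) -> list[dict]:
    doc = json.loads(blob)
    if doc.get("version") != 1:
        raise ValueError("unknown export version")
    return doc["items"]

blob = export_inventory([{"id": 1, "name": "vm-frontend"}])
restored = import_inventory(blob)
print(restored[0]["name"])   # -> vm-frontend
```

A round trip through the open format, as above, is a cheap test that no proprietary representation has crept into the data path.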

#### **5.5 Strategies**

Organisations are under pressure to find and implement new strategic ideas at an ever faster pace to gain a competitive edge over rivals in the global market. Towards this goal, it is fair to highlight that the absence of standardisation may also bring disadvantages when migration, integration or exchanges of resources are required [3]. Strategies can be understood by referencing cloud lock-in taxonomies [24], which illustrate the various components from which a cloud environment can be composed. Combining components into a solution introduces boundaries between the various parts of a cloud system, such as operational boundaries and trust boundaries. During ordinary business processing, or as data and applications migrate to new providers or platforms, data and application processing commonly cross these boundaries. A crucial issue, which portability and interoperability can solve, is ensuring operational integrity across boundaries as processing demands migrate to the cloud. Moreover, it must be emphasised that customers may need to switch service providers due to unacceptably high contract renewal costs, a service provider ceasing business operations, partial cloud service closures without migration plans, unacceptably low service quality and
business disputes between cloud customers and providers, among other reasons. Again, as part of risk management and security assurance for any cloud initiative, portability and interoperability should be taken into account upfront. This is also the core strategy in the process of migrating towards cloud technologies, within both the public and private sectors. Companies will be responsible for evaluating their sourcing strategy so that cloud computing solutions are fully considered as viable options. An example of the policy measures to consider in this respect is presented in the next subsection.

#### **5.6 Policy measures**

According to Lewis [86], the cloud computing community typically uses the term interoperability interchangeably with portability. Herein, a clear distinction is made: the former refers to the ability to easily move workloads and data from one cloud service provider to another, or between private and public clouds, while the latter refers to the ability to move a system from one platform to another. While these two terms are pertinent to the policy measures outlined below, the reader's attention is also drawn to the role of open standards in the cloud, with an emphasis on mitigating potential areas of the lock-in effect across the cloud ecosystem (whether in domestic or international settings). Standards will be critical for the successful adoption and delivery of cloud computing, both within the public sector and more broadly. Standards encourage competition by making applications portable across providers, allowing governments (e.g. G-Cloud) to switch service providers in order to benefit from cost-saving measures or cutting-edge new product features. Furthermore, standards are essential to ensuring that cloud platforms are interoperable, so that services offered by various providers can coexist, regardless of whether they use public, private, community or hybrid delivery models [87].


While data protection attracts much attention and debate in the current literature, other contractual clauses between cloud service providers and their clients, including choice of law, intellectual property (IP) issues, terms of service and acceptable use, also impact the adoption of cloud computing and are discussed herein [11]. Gaining the benefits of this more elastic environment therefore requires appropriate planning, to avoid being 'locked' into a cloud solution that may not measure up to the goals for moving to the cloud in the first place. For additional and supplemental policy measures, please refer to the study by Opara-Martins et al. [13]. The next section concludes this chapter, which has sought to link academia and industry through cutting-edge research, creating new knowledge and innovation that converts ideas into wealth creation, jobs and human progress. Researchers lacking adequate knowledge, dexterity and self-transformation can be helpful neither to society nor to themselves.
