“Every cloud has its silver lining but it is sometimes a little difficult to get it to the mint”
To the cloud!
One of the first things you may be led to believe about migrating applications to the cloud is the potential for substantial savings. The idea of driving down IT costs by leveraging virtualization and third-party providers can be quite compelling for budget-stressed organizations.
The benefits of the cloud are quite real. However, migration must be done responsibly. By nature, applications operating in the cloud have a different architectural footprint. Without due planning, cloud-migrated applications can lack resiliency and potentially cost more than they did before migration.
Legacy applications are typically built with the assumption that they will run within the bounds of corporate data centers, protected by company firewalls and resourced by hardware sized to meet the application's peak loads. Under these circumstances, code and middleware operate together happily, knowing that they live in an infrastructure with costly resources pre-allocated for them. Connections to other applications and systems are made freely because access to the needed data is controlled and static.
Over time, additional applications become tightly coupled together. Dependencies between applications become constraints on agility and can prevent or complicate changes to application infrastructures.
One of the primary fundamentals that cloud-based applications leverage is the ability to scale elastically. As application load increases and decreases, the resources allocated to an application are dynamically adjusted by the addition or removal of virtualized servers. This allows application owners to pay only for resources as they are needed and to avoid paying for excess capacity that sits unused. To accomplish this, applications work optimally in cloud environments when they are designed to be loosely coupled with the servers they run on and with any dependent applications they use or are used by.
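The scaling decision described above can be sketched in a few lines. This is an illustrative model, not any specific cloud provider's API; the function name, capacity figure, and bounds are all assumptions chosen for the example.

```python
import math

def desired_server_count(requests_per_second: float,
                         capacity_per_server: float = 100.0,
                         min_servers: int = 1,
                         max_servers: int = 10) -> int:
    """Size the fleet to match current load, within configured bounds.

    A real autoscaler applies a rule like this continuously: as load
    rises, servers are added; as it falls, they are removed, so you
    pay only for capacity you are actually using.
    """
    needed = math.ceil(requests_per_second / capacity_per_server)
    return max(min_servers, min(max_servers, needed))

print(desired_server_count(40))    # light load -> 1 server
print(desired_server_count(450))   # heavier load -> 5 servers
print(desired_server_count(5000))  # spike, capped at max_servers -> 10
```

The bounds matter: the floor keeps the application available at zero load, and the ceiling caps spend during a runaway spike.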
All about dat base
When transactions within applications are not considered complete until they are committed and all data is up to date, they are referred to as ACID (Atomicity, Consistency, Isolation, and Durability). This is fine within the walls of highly controlled corporate data centers. However, when working with third-party cloud providers, these dependencies and assumptions call for a different paradigm.
For success with cloud environments, transactions should operate under the paradigm that services are Basically Available, Soft-state, and Eventually consistent (BASE). Application developers acknowledge in their code that the resources they use will periodically fail and that the transactions in their applications will eventually become consistent. Servers supporting the application can fail, come and go based on overall load, or be taken out temporarily for maintenance. Coding for BASE transactions encourages the resiliency of applications operating in cloud environments.
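One common way code acknowledges that failure is routine is retrying with exponential backoff. Here is a minimal sketch under BASE assumptions; the flaky dependency is simulated, and all names are illustrative.

```python
import random
import time

def call_with_retries(operation, attempts=5, base_delay=0.1):
    """Retry a flaky operation with exponential backoff and jitter.

    Under BASE assumptions the caller expects occasional failure and
    eventual success, rather than demanding every call commit the
    first time.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # Back off longer each time, with jitter so many clients
            # don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server temporarily unavailable")
    return "committed"

print(call_with_retries(flaky_write))  # -> committed (after 2 retries)
```

In a real system the retried write should also be idempotent, so that a retry after an ambiguous failure does not apply the change twice.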
Service Oriented Architecture
Another key enabler for cloud-based applications is designing them to call on functionality provided by pre-defined, reusable services. When data or functionality is used by multiple areas of an application or called upon by dependent applications, efficiency and resilience are gained by pre-defining the access methodology and leveraging an integration mechanism that assures the intended manner of use. SOA is often referred to as an application integration mechanism; however, Service Oriented Architecture at its heart describes a way to design applications that breaks the necessity of maintaining static connections between interdependent systems. SOA does this through an integration mechanism based on self-defining, standardized services.
With these concepts understood, most applications will benefit from some level of reconstruction to reap the advantages of the cloud. Application teams should be prepared for some redevelopment and re-architecting. The cloud also provides an opportunity to streamline the software development lifecycle (SDLC). Not only does the application code need consideration; now that infrastructure can be provisioned dynamically using code, it should also be included in the SDLC process.
DevOps, in this context, refers to the ability to provision servers and compute environments dynamically, leveraging the speed and repeatability that come from defining that provisioning in code. When brought into the SDLC, individual development, test, or staging environments may be instantiated as needed. Creating these environments is incredibly fast compared with building physical legacy ones, and cost is incurred only while the environments are actually in use. Once the tasks are completed, the environments can be destroyed rather than sit idle.
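The create-use-destroy lifecycle maps naturally onto a scoped resource. This sketch is purely illustrative: the `Environment` class stands in for a real provisioning tool such as Terraform or CloudFormation, and every name here is an assumption.

```python
import contextlib

class Environment:
    """Hypothetical stand-in for an environment defined in code."""
    def __init__(self, name: str, servers: int):
        self.name, self.servers, self.running = name, servers, False

    def provision(self):
        self.running = True   # stand-in for creating real servers
        return self

    def destroy(self):
        self.running = False  # no idle servers left running

@contextlib.contextmanager
def ephemeral_environment(name: str, servers: int = 2):
    """Create an environment for a task; tear it down when done."""
    env = Environment(name, servers).provision()
    try:
        yield env
    finally:
        env.destroy()  # cost stops the moment the task completes

with ephemeral_environment("integration-test") as env:
    assert env.running  # exists only for the duration of the task
print(env.running)  # -> False: destroyed rather than sitting idle
```

Because teardown lives in the `finally` block, the environment is destroyed even if the task inside it fails, which is exactly the discipline that keeps idle-resource cost out of the lifecycle.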
On-demand provisioning of environments is a major feature of the cloud that can pull cost out of the application lifecycle. It also gives development teams greater flexibility and agility, which can speed up cycle times during sprints.
Building on the strengths of DevOps, several methodologies become possible that further accelerate the development process. Automated testing and migration tools can be wrapped with workflow mechanisms. Elements of service management and traditional ITIL can be worked in to allow for continuous integration, deployment, and delivery of entire applications or of changes to existing applications. This frees developers to focus their efforts on code rather than process.
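The workflow mechanism described above boils down to ordered stages with a stop-on-failure rule. A minimal sketch, with illustrative stage names standing in for real build, test, and deploy tooling:

```python
def run_pipeline(stages):
    """Run (name, callable) stages in order; stop at the first failure.

    Each stage returns True on success. The stop-on-failure rule is
    what keeps a broken build from ever reaching the deploy stage.
    """
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            break  # never deploy past a failed stage
    return results

# Illustrative stages; real ones would invoke compilers, test
# runners, and deployment tooling.
stages = [
    ("build",  lambda: True),
    ("test",   lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))  # -> [('build', True), ('test', True), ('deploy', True)]
```

Real CI/CD servers add triggers, parallelism, and approvals on top, but the underlying contract, ordered stages gating one another, is this simple.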
Aside from its impact on the development lifecycle, other items must be taken into consideration when planning migrations. Keep in mind that not all applications are good candidates for the cloud. Depending on an organization's compliance requirements, many applications may not lend themselves to deployment on public clouds. For applications containing information that is sensitive because it includes personal data or intellectual property, teams must thoroughly analyze how security will be handled. Private clouds operated within the walls of a company may be more suitable.
Also, applications should be architected with the understanding that developers and application teams should not have OS-level access to cloud-provisioned servers. A logging strategy should be incorporated to facilitate troubleshooting and to get output to developers as well as system administrators. This ensures that provisioned servers remain in a state that allows them to be dropped and recreated without modification beyond the code-based deployment.
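One logging strategy that fits disposable servers is writing structured events to stdout, so a platform log collector (an assumption about your environment) can ship them off the box and nobody needs a shell on the server to troubleshoot. A minimal sketch:

```python
import json
import logging
import sys

# Log to stdout only: nothing accumulates on the server itself, so
# the server can be dropped and recreated at any time.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(message)s'))
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

def log_event(event: str, **fields) -> str:
    """Emit one JSON line per event; easy to parse, search, and ship."""
    line = json.dumps({"event": event, **fields})
    log.info(line)
    return line

log_event("order_placed", order_id=42, status="ok")
# prints a line like: {"event": "order_placed", "order_id": 42, "status": "ok"}
```

One JSON object per line is deliberate: log shippers and search tools can index the fields without any server-specific parsing rules.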
There are immense benefits to migrating legacy applications to cloud-based environments. However, the level of benefit depends heavily on the diligence applied in planning the migration. A straight “lift and shift” of many applications to cloud-based environments can be counter-productive and result in higher costs for the application provider. Although your initial impulse may be to get there fast, understand the potential pitfalls and plan your migration to ensure your long-term success.