A CIO Primer for Cloud Transformation Strategy

Almost every IT leader you speak with has a sense of pride when discussing the cloud strategy they have for their business. We listen to presentations that extol the advantages of OpEx over CapEx and read white papers and articles explaining the benefits of transforming the footprint of IT at the business level. This often leads to the assumption that any strategy of dumping the data center in favor of cloud services will be a substantial win. Although this can often be the case, greater benefit is realized by focusing not on being in the cloud, but on how you get there.

I have had a number of experiences with IT leaders looking to advance their own agenda by pushing cloud migration. That is the nature of people. Unfortunately, their goal is very often just to get there for the sake of the short-term win rather than to improve the overall value position of IT at an enterprise level. Don’t get me wrong…this works…but the net gain can be small compared to a true enterprise-level strategy.

Enterprise Context

IT product managers and application owners have a deep understanding of all of the parts needed for their application supply chain. This is at the heart of their responsibilities, and they are ultimately accountable to upper management for successful delivery. However, their vantage point is often limited to a granular part of overall IT. A single application, or even a subset of the application portfolio, cannot truly represent the complete enterprise. Enterprise-level IT decisions must be directed with the context of the wider business and the mission of IT as a whole. Tread carefully if cloud aspirations are initiated from a single IT component. Leadership and direction for cloud transformation is best served from a higher organizational perspective. In my experience, I have seen considerable rework and unnecessary expense result from well-intended application owners looking for a political “win”. This can be quite painful.

Another pitfall to be aware of is over-emphasizing efforts to eliminate infrastructure concerns. Any business that has an IT department, or that is concerned with delivering applications to users, is ALWAYS in the business of infrastructure at some level. Even after eliminating all business-funded data centers, the only way to obtain optimal benefit from the cloud is careful consideration of how the newly provided compute resources are architected, configured and provisioned. A better way to think of this is that it is advantageous to the business to minimize hardware investments. When you spend big on compute hardware and everything needed to keep it running, the reality is that you are paying a premium for resources that are seldom used to capacity. Properly managed, cloud transformation reduces the expense to only the resources that are actually used.

Additionally, how the new resources are provisioned to your solution architects will have an impact on the time-to-market for their solutions. The greatest asset a company has is its ability to differentiate from the competition. Maximizing the agility your teams have in creating new solutions provides competitive advantage.

Setting the Stage for the “Business” of IT with Enterprise Context

The Triple Constraint

An often-used pictorial model in project management is the “Triple Constraint”. It demonstrates the key attributes that must be handled effectively for successful completion and closure of any project. Usually, these are presented using the terms Time, Cost and Quality (or Fast, Cheap and Good). The same relationships between these attributes hold true when creating a strategy for IT. The terms I prefer vary slightly and are as follows:

  • Quality – the level of completeness that the strategy provides with respect to delivery of applications to the users and the level at which the application is of benefit to the business mission
  • Speed – the level at which any IT activity is able to design, build and deliver solutions which differentiate the business from competition
  • Cost Mitigation – the ability to minimize financial impact of resourcing IT solutions or the value that the business places on the return from IT


The triangle in the “Triple Constraint” is mostly immutable. Neither the sides nor the angles may change in overall proportion. The product of these attributes (the space in the center) represents IT’s ability to leverage control over delivery of solutions. It is the overall quantity of the three attributes that the IT enterprise has to work with.

I also add an outer circle representing the overall business advantage gained. The size of the outer circle should always be greater than that of the triangle. When an IT strategy does not provide a larger business advantage, IT becomes a cost center rather than a value-add.

I mentioned earlier that the triangle in the model is “mostly” immutable. The fixed ratio of the sides does not constrain their length, as long as each side remains equal to the others. This means you do have control over the overall size of the triangle. Optimizing that size is done through strategic management of the elements affecting time, cost and quality.

To add additional meaning, different areas of the “business advantage” circle are called out:

  • Market Position – This is the area of gain from the strategy around speed and cost mitigation. The benefit of solutions delivered more rapidly and at lower cost.
  • Reputation – The positive perception that IT service delivery is contributing. This comes from a combination of how much it meets their needs/wants and how fast it is delivered to them.
  • Agility & Strength – By the name you would think this would somehow be related to speed. Instead, it represents the business’s commitment to quality combined with the financial investment it is able to back that commitment with.

Applying this to Cloud Transformation

To begin developing a strategy for application migration to the cloud, the first step is determining the business’s commitment to the effort and its capability to commit. Basically, the amount of resources (time, cost, quality) it can provide, measured against the point of diminishing returns (when the circle becomes smaller than the triangle).

To determine an optimal path, the business should consider the applications within their portfolio that IT delivers. An exercise in “Application Rationalization” will help build a strategy for how much infrastructure ownership and control an IT organization can sacrifice in favor of the economies of the cloud.

There are three common options offered by cloud service providers:

  • Infrastructure as a Service (IaaS) – With this option, the cloud provider offers a virtual data center housed within their location. This option offers applications the greatest agility with regard to configuration of components providing compute services. Here the provider’s responsibility ends at the hardware level and does not include operating system or component configuration.
  • Platform as a Service (PaaS) – The cloud provider takes additional responsibility for server operating systems and application platforms. Flexibility of configuration for these components is limited; however, availability and scalability are assured.
  • Software as a Service (SaaS) – The entire tech stack as well as application logic is delivered. Control over the system is limited to functional configuration of the application alone.


The responsibility matrix for each of these service options shows the varying levels of control and responsibility that you can maintain. As responsibility and control are passed to the service provider, direct costs may decrease. However, technical agility is sacrificed, which can mean losing some of the financial benefits of opportunity.

There are several models floating around for rationalizing investment in the applications within a company’s portfolio. Evaluation is based on quantifying the impact that each application has on competitive advantage. Applications that differentiate you from competitors are ideal candidates for investing resources to maintain their advantage. Conversely, applications that provide more commoditized utility (like email or chat systems) do not yield any benefit from in-house development or infrastructure expense.

At the bottom of the responsibility matrix I have added labels to each option (P.I.R.O.). These labels correspond to categories used during the application rationalization effort and represent strategies for maintaining the applications as assets.

  • Protect – This option should only be used when loss of mission-critical IP requires complete internal control.
  • Invest – Applications that differentiate the business warrant extra management to optimize them.
  • Refactor – As active development of former differentiators decreases, look for ways to economize.
  • Outsource – Non-business critical applications can sacrifice control in favor of managed solutions.

Carefully evaluate each application in your company’s portfolio to determine how much differentiation it provides to your enterprise competitive advantage. Overlaying these classifications on the different cloud delivery models will offer a solid starting point in determining an optimal strategy.
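The overlay of P.I.R.O. categories onto cloud delivery models can be sketched as a simple decision rule. This is purely an illustrative assumption: the function name, the 0–10 differentiation score, and the thresholds are hypothetical placeholders, not part of any formal rationalization model.

```python
def rationalize(differentiation, critical_ip, active_development):
    """Return a (P.I.R.O. category, candidate delivery model) pair.

    `differentiation` is an illustrative 0-10 score of how much the
    application differentiates the business; thresholds are assumptions.
    """
    if critical_ip:
        # Loss of mission-critical IP requires complete internal control.
        return ("Protect", "in-house / private infrastructure")
    if differentiation >= 7 and active_development:
        # Differentiators warrant investment and maximum configuration agility.
        return ("Invest", "IaaS")
    if differentiation >= 4:
        # Former differentiators: economize by ceding platform responsibility.
        return ("Refactor", "PaaS")
    # Commoditized utility: sacrifice control for a managed solution.
    return ("Outsource", "SaaS")
```

A portfolio review would run each application through a rule like this as a starting point, then adjust for factors the score cannot capture.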


Data Sovereignty and Cloud Migration

The value proposition of cloud computing is highly compelling. The cloud offers rapid provisioning of low-cost compute resources, a decreased burden of capital expenditure and geographically dispersed workloads. Many global companies are eager to shift and often leap forward with migration projects, only to be blocked when jurisdictional compliance rules for data are encountered. Shifting cloud strategies to address data sovereignty requirements can result in unplanned additional investments.

Data sovereignty regulations can complicate the delivery model that has made cloud computing attractive, presenting new concerns for companies operating in multiple countries. It’s often assumed that some workloads cannot leverage the benefits of the cloud without being impacted by jurisdictional concerns, so it helps to understand how best to address the issues of jurisdiction and computing across geographical borders in a way that supports varied demands from different applications.

Data sovereignty rules generally focus around the idea that digital data is subject to the laws or legal jurisdiction of the country where it is stored. However, many countries have concerns extending beyond basic export control of data, and will also look at where creation/processing occurs on that data as well as where and how it is encrypted.

What about governmental bodies attempting to protect information while utilizing the cloud? Germany, Israel, South Korea and a growing list of other countries all have highly restrictive data sovereignty laws and practices. When planning cloud migrations, it is wise to consider the implications up front and, whenever possible, “bake in” steps for remediation to prevent unexpected rework and cost.

Step 1

Know your data – Awareness of what will be stored in the cloud can require considerable analysis. Companies best prepared to be agile with their compute environments are those having a firm grasp on the nature of their data before migration.

  • Data classification – Does your data include information that may have implications around personal privacy, sensitive financials, security, or company intellectual property?
  • Scope of each classification – What level or range of impact does the specific data to be migrated carry for each classification category?

Step 2

Determine your risk tolerance – Depending upon the potential impact if a breach were to occur, companies should be ready to make judgments as to what they are prepared to handle for each data classification.

  • Map classifications to include intended as well as potential use (or misuse)
  • Weigh data use against international views and rules on privacy

Step 3

Identify tolerable solutions for each classification – Develop and standardize acceptable methods of securing each type of data in a manner that will meet any sovereignty laws that may be applicable to you.

  • Understand the sovereignty rules as well as any activity within the specific geography that may impact these rules.
  • Architect to provide electronic evidence if breaches occur. In the event of an issue, preparedness for dealing with the problem can lessen its impact.
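The three steps above can be sketched as a small pipeline: classify each record, attach a risk tolerance per classification, and look up a standardized handling method. Everything here is an illustrative assumption — the field names, categories, tolerances and handling rules are hypothetical stand-ins for whatever your own review produces.

```python
def classify(record):
    """Step 1: assign a classification from the fields a record contains.

    The field names checked here are hypothetical examples.
    """
    if {"name", "email", "ssn"} & set(record):
        return "personal"
    if {"revenue", "forecast"} & set(record):
        return "financial"
    return "general"

# Step 2: illustrative maximum tolerated risk per classification.
MAX_TOLERATED = {"personal": "low", "financial": "medium", "general": "high"}

# Step 3: a standardized, tolerable handling method per classification.
HANDLING = {
    "personal": "store in-region, encrypt at rest, log access for evidence",
    "financial": "encrypt in transit and at rest, restrict by jurisdiction",
    "general": "default cloud storage",
}

def migration_plan(record):
    """Combine the three steps into one lookup for a record to migrate."""
    cls = classify(record)
    return cls, MAX_TOLERATED[cls], HANDLING[cls]
```

The value is not in the lookup itself, but in forcing the classification and handling decisions to be made explicitly before migration rather than after a jurisdictional issue surfaces.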

Technology advances at an ever-increasing rate. However, jurisdictional regulation will always need to play catch-up. Data is the lifeblood of many large organizations. Due diligence in understanding how it can be used and protected should not be an afterthought.


Technical Debt…A Fickle Mistress

It is not uncommon for IT leadership to lose awareness of the true cost of application ownership. Financial challenges and budget constraints make easy targets of any effort that is not delivering immediate results. As teams look for ways to become more resilient, it is important to consider what factors caused the shortcomings in resiliency to begin with.

When making investments in IT, leadership should be considering the lifecycle of the investment they are making. How long should it last? What needs to be done to maximize the viability of the investment? Just like a car needs regular oil changes and tune-ups, applications, hardware and other IT investments need care and feeding to assure they are able to continue delivering what they are intended to. Similarly, if these items are not given due consideration during the planning of the investment, fixing the resulting issues can be much more expensive.

Much like your credit card, if you neglect regular payments or only pay the minimum due, the interest on the remainder adds to your principal, and the ultimate cost of what was originally purchased grows…your debt increases. In IT this is referred to as “Technical Debt”.

At one point in my IT career, I had the opportunity to play a key role in the migration of a legacy financial system to a large-scale ERP solution. At the time the trade journals were full of stories about lawsuits concerning failed ERP implementations. I remember reading about a lot of finger pointing, most of which was only to cover poor estimation of effort on behalf of the contractors engaged to manage the transition.

Although our implementation was ultimately successful, my position as a technical lead allowed me to gain insight into some of the factors that contributed to our success, as well as others that limited our momentum and slowed progress.

This project was to implement a number of functional modules, each related to a corresponding area in our legacy system. Time and again the software vendor would explain the benefits of remaining as close to a vanilla, out-of-the-box implementation as possible. Although customization was facilitated by their design, they indicated that we were better off avoiding it wherever possible. This was to be our project norm going forward.

As you might guess, during requirements gathering, many areas previously identified as suitable for a vanilla installation would not meet the needs of our functional areas. The out-of-the-box software that initially looked like a good match for our functional processes did not quite meet the more granular expectations of our functional people. What we discovered was that, over time, non-standard use of the legacy system had gone undocumented but become accepted process. As business processes shifted, use of the system had been modified and expanded from its original conception. This use became ingrained in common work processes and was now necessary for everyday functionality. Our problem was that we first had to invest time and resources into catching up on all of the newly discovered requirements. This was a challenge because none of it was documented; it was just understood by the functional teams using it every day.

We also learned of many shadow systems that had been created to supplement the legacy system: integration points, data feeds to and from different systems. Over time, supplemental functionality had been pieced together with little, and sometimes no, resiliency or life-cycle management.

As an example…I recall discovering that in order to close our books each month, we needed to import data from an outdated system that one of our functional areas maintained. Although it was old, it worked and because it worked, several other functional areas began piggybacking off of this system as a means of getting their work done with minimal investment. Of course, this outdated system was absolutely not compatible with the new ERP. It was written using code libraries that were not considered secure by IT Risk and were not even supported by the operating systems on our new machines. Since it was working, nobody ever considered maintaining it with patches or upgrades. After all … the business unit owning it was only concerned with the viability that it provided for their small area. Even though its use had grown beyond their group, no investment was made to bring the system to a level of resilience or compliance suitable for its expanded role.

Had due attention been given to that system’s role in the enterprise, the organization would have seen that it received appropriate care. Instead, the system went unattended until an event forced modernization. Although playing catch-up with patching and upgrades was costly, it would have been even more expensive if the triggering event had been a failure that prevented the system from delivering its business value.

The hard fact is that this debt must eventually be paid. If regular payments (resiliency maintenance) are not planned or considered a standard part of doing the business of IT, then eventually a larger expense results. Unfortunately, this debt is very easily overlooked or considered optional when budgets are drafted or challenged. Payment of technical debt is too easily “leaned out”, but eventually it snowballs into greater expense down the line.

This can be a difficult lesson for application owners. Even worse is that often the lesson is learned at the expense of an application owner that was not part of the application implementation when initial budgets or resiliency models were planned.

Some IT management gurus look at technical debt as the result of two things: choice and level of prudence. Choice can be intentional or inadvertent. Prudence can be reckless or discerning. Here are some examples of how these play together:

Technical Debt

  • When you look at your IT budget, do you consider the future effect of your decisions with regard to resilience?
  • Is application life-cycle longevity and financial risk avoidance “baked” into your budgetary planning?
  • Are short-term savings or personal recognition more important than the exponentially larger expenses that can occur later or the overall financial well-being of the organization?
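The choice-by-prudence framing matches the widely cited technical-debt quadrant popularized by Martin Fowler; the quotes below are his illustrative examples of how the two dimensions play together, keyed here with the terms used above. The lookup itself is just a sketch for discussion.

```python
# The four technical-debt quadrants, each summarized by a characteristic
# quote (from Fowler's well-known technical-debt quadrant).
QUADRANT = {
    ("intentional", "reckless"):   "We don't have time for design.",
    ("intentional", "discerning"): "We must ship now and deal with the consequences.",
    ("inadvertent", "reckless"):   "What's layering?",
    ("inadvertent", "discerning"): "Now we know how we should have done it.",
}

def debt_example(choice, prudence):
    """Return the characteristic attitude for a (choice, prudence) pair."""
    return QUADRANT[(choice, prudence)]
```

Deliberate, discerning debt can be a legitimate trade-off; it is the reckless quadrants, and especially inadvertent reckless debt, that quietly accrue the interest described above.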

Responsible application ownership involves examination and planning for all phases of a system’s life-cycle. It is easy, and often tempting given the ever-present pressure to lower expenses, to put off application vitality and resilience. Just as in your personal finances, living off of credit adds exponential overhead to your bottom line.


IT 2.0 – Transformation Perspectives

BT Method

Not too long ago I wrote a blog post (I.T. 2.0 – The Death of Design>Build>Run) that discussed the value to business of moving away from operational IT mindsets and towards a newer IT Paradigm. Although generally well received, I was not surprised to find that there were some who recoiled at the idea of shifting away from the traditional stability of operating under what has been an institution of IT Methodology.

As much as we may wish to avoid change, evolution refuses to give in. Even more difficult…the rapidly changing IT landscape drives the need for constant reexamination and does not afford the luxury of passivity. Embracing change is key to the IT supply framework’s potential for delivering value.

Cloud computing has driven an upheaval in the strategic IT landscape. Where IT was once considered a necessary cost of doing business, it is now positioned to take its place as a value center.

Transformation is a forward-moving effort. Evolving the way IT is supplied to the business is not well served by framing change in terms of past delivery styles. Agility will struggle if constrained by compartmentalizing efforts into phases like Design, Build and Run.

A major component of cloud methodology is achieving a level of automation that will free the development life cycle for applications from being hindered by provisioning and deployment operations. This is DevOps.

There are a number of misconceptions about exactly what DevOps is.

DevOps is not:

  • A set of automation tools
  • DEVelopment taking on the OPerational function
  • OPerations taking responsibility for DEVeloping code
  • Architects taking responsibility for code migration

DevOps is merely a culture shift that reduces cycle time by automating redundant orchestration activity.

  • Architects still drive the blueprint for IT
  • Developers still code
  • Operations still execute and monitor

What was before considered “Design” now is focused on the blueprinting efforts of enterprise architects and conceptual efforts of application design teams.

“Build” becomes a development effort in which teams code both their application functionality and the commands to provision their infrastructure, while maintaining repositories of reusable code that reduce cycle times.

“Run” becomes the operational component that consumes the code by executing it and monitoring its level of successful operation.

IT shapes itself to promote an “Assemble-to-order” structure leveraging services and micro-services.
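The reframed Build/Run split can be sketched in a few lines: developers produce a declarative environment spec alongside their application code, and operations consume it by executing it. The spec format and `provision()` helper are illustrative assumptions, not any particular tool's syntax.

```python
# "Build" output: a declarative spec the development team codes and
# versions alongside the application (format is a hypothetical example).
ENVIRONMENT = {
    "web":    {"image": "storefront:2.1", "replicas": 3},
    "orders": {"image": "orders-svc:1.4", "replicas": 2},
}

def provision(spec):
    """"Run" side: expand the spec into the concrete instances to execute.

    A real orchestrator would also schedule, monitor and heal these.
    """
    return [f"{svc}-{n}"
            for svc, cfg in spec.items()
            for n in range(cfg["replicas"])]
```

Because the environment is itself code, it moves through review, versioning and reuse like any other artifact — which is the cycle-time reduction DevOps actually delivers.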

Another aspect of IT Transformation involves embracing a “Start-up” mentality. Over time, organizations tend to create monolithic icons of IT projects that are cumbersome to evolve as needed. Replacing large development efforts with smaller ones that can be combined to deliver more comprehensive functionality not only builds components that can be made available for reuse, but more importantly provides the ability to shift or pivot if priorities or business climate warrants. Additionally, it allows faster recovery from failure.

A “start-up” mentality takes the position that you only know what you know right now and need to ensure that you retain value when the unknowns of tomorrow present themselves.

Commitment to IT transformation is by nature diametrically opposed to traditional IT. Transformation cannot succeed by simply re-forming existing methodologies. With transformation comes risk of failure. Small failures lead to organic growth. Effectively managing scope during transformation will bring longer-term viability and success.


Considering DBaaS (Database as a Service)

When planning to use a DBaaS solution, customers will likely need help with tasks such as deployment, migration, support, off-site backup, system integration and disaster recovery. Then there are the applications that connect to the database and the databases themselves that need to be designed, developed, deployed, tuned and monitored. And what about organizations that are implementing hybrid systems or need help managing multiple cloud service providers?

DBaaS is an excellent solution for both DBAs and application architects to increase their level of agility, efficiency and cost effectiveness. Lean IT strategies benefit from the potential for lower infrastructure costs and faster delivery times for database management functions, allowing database professionals to focus on performance optimization and DB technical strategies.

From the perspective of application development and ownership, DBaaS addresses many of the limitations inherent to a traditional RDBMS solution. A major benefit for developers is the ability to rapidly self-provision databases for temporary use at minimal cost.

Database cloud services eliminate the need for organizations to dedicate many resources to on-site database storage. They don’t have to install, configure or maintain hardware, and basic administration of software is fully automated. In addition to the infrastructure footprint housing the database itself, DBaaS will often cover the physical and administrative requirements for high availability and disaster recovery. Options for replication, backup and load balancing are “baked-into” the service.

With DBaaS, database-related processes become standardized and repeatable, allowing teams to more easily procure services, deploy applications, plan capacity and manage resources. The DBaaS model can also help reduce data and database redundancy and improve overall Quality of Service.

Understand the Limitations

Sounds great…right?! Well, it is…as long as you understand how the service you are getting is defined and, more importantly, what the service does not allow for. There are big advantages to be leveraged with DBaaS; however, there are also some limitations.

Anything put in the cloud is subject to network performance issues outside of your direct control. If the Internet service provider, the cloud service supporting the database, or any point in between becomes clogged or goes down, you might experience data latency or application failure. At least when a problem occurs in-house, the infrastructure team can more easily troubleshoot its cause.

In addition, features available in the typical RDBMS are not always available in a DBaaS system. For example, Windows Azure SQL Database (formerly SQL Azure) is Microsoft’s DBaaS offering that provides a database platform similar to SQL Server. However, Windows Azure SQL Database doesn’t support features such as data compression and table partitions, and the Transact-SQL language elements available in SQL Database are only a subset of those available in SQL Server. Amazon’s Oracle offering via RDS restricts use of the following features:

  • Real Application Clusters (RAC)
  • Data Guard / Active Data Guard
  • Oracle Enterprise Manager (although “DB control” is OK)
  • Automated Storage Management
  • Data Pump
  • Streams

People looking to migrate databases to the cloud need to thoroughly assess which features may be needed but might not be available.
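That assessment can be as simple as intersecting the features an application requires with those the target service does not support. A minimal sketch, using the RDS-for-Oracle restrictions listed above as the example unsupported set (verify the current list against provider documentation before relying on it):

```python
# Unsupported features as listed above for Amazon RDS for Oracle;
# provider restrictions change over time, so treat this as a snapshot.
RDS_ORACLE_UNSUPPORTED = {
    "Real Application Clusters (RAC)",
    "Data Guard / Active Data Guard",
    "Oracle Enterprise Manager",
    "Automated Storage Management",
    "Data Pump",
    "Streams",
}

def feature_gaps(required, unsupported=RDS_ORACLE_UNSUPPORTED):
    """Return the required features the DBaaS offering cannot provide."""
    return sorted(set(required) & unsupported)
```

An empty result means no known blocker; a non-empty one means each gap needs a workaround, an architectural change, or a different delivery model before migration proceeds.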

What about Database Administrators?

In addition, DBaaS (like any cloud-based service) is limited by what the provider can automate. It does not provide the targeted, individual attention that may be needed to plan a configuration or tune its use. DBAs remain a valuable commodity, whether operating in the cloud or in-house. XaaS services delivered in the cloud do not accommodate non-standard requests or deep-dive tasks requiring human intervention or analysis.

Plan Your Work…Work Your Plan

Diligence is necessary when planning a move for database environments to DBaaS. The platform can offer considerable cost savings to an organization. For database admins DBaaS can allow them to increase their level of automation, simplify basic configuration tasks and focus on new technologies and opportunities for improving performance. Application developers benefit by the added agility and availability of development and test environments that can be provisioned rapidly. Application owners get more for less cost as environments supporting development are only paid for when they are in use and production environment resource costs are dynamically scaled with load.

The database tier is critical within the application tech stack. It is also one of the more costly components that business critical applications require. DBaaS solutions provide lower resource costs and much greater speed and flexibility during development. Customers have the opportunity to self-provision and self-administer for low criticality environments and DBAs can keep their attention on optimizing configuration and performance for systems where the level of importance to the business needs to be assured.


Migrating Applications to the Cloud

“Every cloud has its silver lining but it is sometimes a little difficult to get it to the mint”
-Don Marquis

To the cloud!

One of the first things you may be led to believe about migrating applications to the cloud is the potential for substantial savings. The idea of driving down IT costs by leveraging virtualization and third-party providers can be quite compelling for budget-stressed organizations.

The benefits of the cloud are quite real. However, migration must be done responsibly. By nature, applications operating in the cloud have a different architectural footprint. Without due planning, cloud migrated applications can lack resiliency and potentially have higher cost than they did before migration.

Conscious uncoupling

Legacy applications are typically built with the assumption that they will run within the bounds of corporate data centers, protected by company firewalls and resourced by hardware sized to meet the application’s peak loads. Under these circumstances, code and middleware operate together happily, knowing that they live in an infrastructure with costly resources pre-allocated for them. Connections to other applications and systems are made because they have controlled, static access to the data they need.

Over time, additional applications become tightly-coupled together. Dependencies between applications become constraints to agility and can prevent or complicate changes to application infrastructures.

One of the primary fundamentals that cloud based applications leverage is the ability to scale elastically. As application load increases and decreases, the resources allocated to them are dynamically adjusted by the addition or removal of virtualized servers. This allows application owners to pay only for resources as they are needed and save on excess resources that may not be in use. To accomplish this, applications that use cloud environments work optimally when they are designed to be more loosely-coupled with the servers that they are on and any dependent applications they use or are used by.
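The elastic-scaling idea reduces to a simple sizing rule: provision only enough instances to cover the current load, with a floor for availability. The function below is an illustrative sketch; real autoscalers add cooldowns, ceilings, and smoothing over load spikes.

```python
import math

def desired_instances(current_load, capacity_per_instance, minimum=1):
    """Illustrative autoscaling rule: enough instances to cover the
    current load, never fewer than the configured minimum."""
    if current_load <= 0:
        return minimum
    return max(minimum, math.ceil(current_load / capacity_per_instance))
```

Contrast this with the legacy model, where capacity is fixed at peak load: here, a quiet period shrinks the fleet (and the bill) instead of leaving pre-allocated hardware idle.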

All about dat base

When transactions within applications are not considered complete until they are committed and all data is up-to-date, they are referred to as being ACID (dependent on Atomicity, Consistency, Isolation and Durability). This is fine within the walls of highly controlled corporate data centers. However, when working with third party cloud providers, these dependencies or assumptions call for a different paradigm.

For success with cloud environments, transactions should operate under the paradigm that services are Basically Available, Soft-state and Eventually consistent (BASE). Application developers acknowledge through their code that the resources they use will periodically fail and that the transactions in their applications will eventually become consistent. Servers supporting the application can fail, come and go based on overall load, or be taken out temporarily for maintenance. Coding for BASE transactions encourages the resiliency of applications operating in cloud environments.
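What "acknowledging failure through code" looks like in practice is often as simple as treating errors as transient and retrying instead of demanding that every call succeed immediately. A minimal sketch (the exception class and retry policy are illustrative assumptions; production code would add backoff and idempotency checks):

```python
class TransientError(Exception):
    """Stands in for a temporarily unavailable replica or service."""

def call_with_retries(op, attempts=5):
    """BASE-style client code: assume failures are transient and retry,
    accepting that the system converges rather than confirms instantly."""
    last = None
    for _ in range(attempts):
        try:
            return op()
        except TransientError as err:
            last = err  # a replica dropped out; try again
    raise last  # exhausted attempts: escalate as a real failure
```

An ACID-minded client would treat the first `TransientError` as a failed transaction; a BASE-minded one expects servers to come and go and keeps the application resilient through it.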

Service Oriented Architecture

Another key enabler for cloud-based applications is designing them to call on functionality provided by pre-defined, reusable services. When data or functionality is used by multiple areas of an application, or called upon by dependent applications, efficiency and resilience are gained by pre-defining the access methodology and leveraging an integration mechanism that assures the intended manner of use. SOA is often referred to as an application integration mechanism; however, Service Oriented Architecture at its heart describes a way to design applications that breaks the necessity of maintaining static connections between interdependent systems. SOA does this through an integration mechanism based on self-defining, standardized services.
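The decoupling idea can be illustrated with a small sketch; the `CustomerService` contract and both providers are hypothetical names, not a real framework:

```python
# Minimal sketch of the SOA idea: consumers depend on a published service
# contract, not on a static connection to one specific system.
from abc import ABC, abstractmethod

class CustomerService(ABC):
    """The published contract every provider must honor."""
    @abstractmethod
    def lookup(self, customer_id: str) -> dict: ...

class LegacyDbCustomerService(CustomerService):
    def lookup(self, customer_id):
        return {"id": customer_id, "source": "legacy-db"}

class CloudApiCustomerService(CustomerService):
    def lookup(self, customer_id):
        return {"id": customer_id, "source": "cloud-api"}

def billing_report(service: CustomerService, customer_id: str) -> str:
    # The consumer never knows (or cares) which backend answered.
    return service.lookup(customer_id)["source"]

# Swapping the provider requires no change to the consumer.
print(billing_report(LegacyDbCustomerService(), "42"))  # legacy-db
print(billing_report(CloudApiCustomerService(), "42"))  # cloud-api
```

In practice the contract would be a network interface (SOAP, REST) rather than a class, but the migration benefit is the same: a backend can move to the cloud without its consumers noticing.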

With these concepts understood, most applications will benefit from some level of reconstruction to reap the advantages of the cloud. Application teams should be prepared to handle some redevelopment and re-architecting. The cloud can also streamline the software development lifecycle. Not only does the application code need consideration; now that infrastructure can be dynamically provisioned using code, it should also be included in the SDLC process.


DevOps

DevOps refers to the ability to provision servers and compute environments dynamically, leveraging the speed and repeatability that come from basing this provisioning on code. When brought into the SDLC, individual development, test, or staging environments may be instantiated as needed. Creating these environments is incredibly fast compared with building physical legacy ones, and cost is incurred only during the time the environments are actually in use. Once the tasks are completed, the environments can be destroyed rather than sit idle.

On-demand provisioning of environments is a major feature of the cloud that can pull cost out of the application lifecycle. It also gives development teams greater flexibility and agility, which can speed up cycle times during sprints.
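A toy illustration of the on-demand pattern, with an invented `environment` helper standing in for a real provisioning API: the environment exists only while the work runs, so nothing sits idle.

```python
# Sketch of ephemeral, code-driven environments (hypothetical helper,
# not a real provisioning API): create on demand, destroy when done.
from contextlib import contextmanager

provisioned = []  # stand-in for a cloud provider's inventory

@contextmanager
def environment(name: str):
    provisioned.append(name)          # fast, code-based creation
    try:
        yield name
    finally:
        provisioned.remove(name)      # torn down when the task completes

with environment("feature-test") as env:
    in_use = list(provisioned)        # environment exists only inside here

print(in_use, provisioned)  # ['feature-test'] []
```

The context-manager shape mirrors the billing model: the inventory is empty again, and billing stops, the moment the block exits.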

Accelerated Development

Building on the strengths of DevOps, several methodologies become possible that further accelerate the development process. Automated testing and migration tools can be wrapped with workflow mechanisms. Elements of service management and traditional ITIL can be worked in to allow for continuous integration, deployment and delivery of entire applications or of changes to existing applications. This frees developers to focus their efforts on code rather than process.
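One way to picture such a workflow is as a stop-on-failure chain of steps; the step names below are illustrative, not a specific CI product:

```python
# Sketch of a continuous-delivery chain: each stage must pass before the
# next runs, so a passing change flows to deployment without manual process.
def run_pipeline(change, steps):
    log = []
    for name, step in steps:
        if not step(change):
            log.append(f"{name}: failed")
            return log  # stop the line on the first failure
        log.append(f"{name}: ok")
    return log

steps = [
    ("build",  lambda c: True),
    ("test",   lambda c: c.get("tests_pass", False)),
    ("deploy", lambda c: True),
]

good = run_pipeline({"tests_pass": True}, steps)
bad = run_pipeline({"tests_pass": False}, steps)
print(good)  # ['build: ok', 'test: ok', 'deploy: ok']
print(bad)   # ['build: ok', 'test: failed']
```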

Other Considerations

Aside from the development lifecycle impact of the cloud, other items must be taken into consideration when planning migrations. Keep in mind that not all applications are good candidates for the cloud. Depending on an organization's compliance requirements, many applications may not lend themselves to deployment on public clouds. For applications containing information that is sensitive due to personal data or intellectual property, a thorough analysis of how security will be handled is required. Private clouds provided within the walls of a company may be more suitable.

Also, applications should be architected with the understanding that developers and application teams will not have OS-level access to cloud-provisioned servers. A logging strategy should be incorporated to facilitate troubleshooting and to provide output for developers as well as system administrators. This assures that provisioned servers remain in a state that allows them to be dropped and recreated without modification beyond the code-based deployment.
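As one possible shape for such a logging strategy (an assumption, not a prescription), everything can be written as structured JSON to stdout so the platform can ship logs off-box and no one needs shell access to the server:

```python
# Sketch of logging for disposable servers: one JSON object per line to
# stdout, to be collected off-box, so the server itself stays untouched
# and can be dropped and recreated at any time.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # One JSON object per line is easy for log collectors to parse.
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order processed")  # {"level": "INFO", "logger": "app", "message": "order processed"}
```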


There are immense benefits to migrating legacy applications to cloud-based environments. However, the level of benefit is highly dependent on the diligence used in planning the migration. A straight “lift and shift” of many applications can be counter-productive and result in higher costs for the application provider. Although the initial impulse is to get there fast, understand the potentials and plan your migration to ensure long-term success.

Posted in CIO, Cloud, Cloud Development, CTO, Information Technology

IT 2.0: The Death of Design>Build>Run


It is easy to see how rapidly IT is evolving. As was foretold, technology continues to change at an increasing pace. Emerging technologies which increase the amount and relevance of information are driving businesses to explore more agile and efficient methods of IT service delivery.

IT supply frameworks based on traditional “Design>Build>Run” phases now struggle to keep pace with the need for faster delivery. The FastWorks idioms of “failing fast” and “pivoting” become constrained by process-heavy SDLCs and deficiencies in the economy of reuse. Additionally, opportunities with virtualized infrastructures and cloud provisioning have changed the IT landscape to more closely resemble that of software development. The idea of “Infrastructure as Code” offers agile provisioning, but calls for the same diligence as application code.

Design>Build>Run worked well in the 90s, and in the early part of this century it managed to get us by as SOA became more widely adopted. Now, cloud infrastructures and services promise greater agility, yet many businesses remain constrained by their older IT delivery methods. The IT beast gets hungrier every day, and we must re-examine the sources of our constraints and adjust accordingly. Service orientation is becoming more granular; services and micro-services increase agility, but need to be managed.

Reduction in rework offers help. When a need arises, utilizing existing solutions (code and infrastructure patterns) reduces cycle times for builds. When services are smaller and more loosely coupled, they are adaptable to a wider variety of use cases, and repetitive tasks are more easily automated. Service definition allows adoption to happen more quickly and with fewer resources. For infrastructure code, this allows rapid provisioning of compute environments. Additionally, the cloud and virtualization provide compute resources that can have temporary lifespans at low cost.

The design element can be replaced by conceiving a solution from building blocks that are already available. Build becomes an exercise in combining or orchestrating these components. Run is now consuming the solutions for their intended purpose. Design>Build>Run evolves to Conceive>Combine>Consume: a newer paradigm that is lean and agile. IT throughput becomes more nimble, providing the business greater value.


Presenting smaller pieces of infrastructure code that are frequently composed in a similar manner further enables rapid deployment by allowing parameter-driven blueprints or templates. Complex environments may be requested and provisioned in an automated, on-demand fashion, avoiding the bottlenecks of redesign and rebuild. Likewise, integration between provisioned environments is simplified and easily automated because of their known configuration.
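A stripped-down sketch of a parameter-driven blueprint; the schema and `render` helper are invented for illustration:

```python
# Sketch of a parameter-driven blueprint: one template, many environments,
# each request filled in and provisioned automatically instead of being
# redesigned from scratch.
BLUEPRINT = {
    "web_servers": "{size}",
    "database": "postgres",
    "network": "{env_name}-vpc",
}

def render(blueprint: dict, **params) -> dict:
    """Fill the blueprint's placeholders from request parameters."""
    return {key: value.format(**params) for key, value in blueprint.items()}

staging = render(BLUEPRINT, size="2", env_name="staging")
print(staging)  # {'web_servers': '2', 'database': 'postgres', 'network': 'staging-vpc'}
```

Because every environment rendered from the same blueprint has a known shape, integrating or automating against it requires no per-environment rediscovery.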

For business, the goal will be for the new IT to empower stakeholders rather than control them. IT will move from the cost center it has been to the value center it was intended to be. Infrastructure as code will alter the role of the enterprise architect from solution designer to solution orchestrator. Business will be able to gain more using less.

Posted in CIO, Cloud, CTO, Information Technology

Swimming in the IT Value Stream

Before 2001, when information technology promised companies a means to greater advantage, the general thought was that greater IT investment meant greater advantage. So IT spend was focused on growing company IT infrastructures and hiring highly technical staff to support and operate them. As time passed and the rate of new technology increased, the economic pressure of owning and maintaining large IT infrastructures became burdensome for companies.

Eventually, opportunity-seeking companies learned that standardizing the components of IT delivery would not only allow them to operate lean but, if done in a certain way, also allow those components to be used by others. They found they could offset their own IT spend and increase their margins by leveraging unused portions of their compute resources. This was followed by establishing these capabilities as products unto themselves, which could further improve margins. Many aspects of IT delivery became commoditized, and offerings of infrastructure, platform and software “as a service” were born. These services began to compete with corporate IT departments that delivered at much higher costs and significantly slower speeds. Companies began to re-think their IT investment options in favor of more frugal offerings.

The overall effectiveness of an organization's IT strategy is determined not so much by the amount of IT investment as by how it has chosen to invest. The cloud has become a very attractive option for companies. However, the model for these services was not designed to be a total replacement for IT, but rather an alternative for certain IT components. Mapping business goals to viable solutions, and administering those solutions so they deliver, remains essential. Key to this is successfully identifying the IT value stream and directing finances in a manner that maximizes productivity.

So…what does this mean? The “IT Value Stream” is the flow between the various functions that deliver information-related solutions to the advantage of the organization they support. The amount of advantage IT can provide is directly related to the volume and speed of this stream compared with the cost of keeping it flowing. The level of success depends on how much the delivered information provides a competitive edge to the company while minimizing the cost of supplying it.

IT Leadership plays the role of custodian and guardian for the IT value stream. If the flow is impaired or misdirected, leadership acts to re-align or fix it.

Improved Data @ Lower Costs = Business Advantage

All of this makes sense…but only if the value stream is recognized and managed. Without due diligence, the value stream will tend to break off in different directions, and the advantages that were meant to be gained are weakened. Here are a few items to consider:

  • Identify the components that make up the value stream
  • Strategize how to increase the throughput of each component
    • Encourage focus on efficiency…re-usability, automation (IT as code)
  • Architect for smaller loosely coupled value stream components (Services)
    • Service Oriented Architecture
    • Design for scalability
  • Build quality control into the flow
  • Look to continuously improve

These are some of the fundamentals behind the “DevOps” shift in IT culture. DevOps examines the intersection of software development, technology operations and quality assurance, and optimizes them to provide rapid delivery of IT solutions. As businesses move faster to compete and technologies advance at a more rapid pace, solutions such as virtualization, cloud services, agile development, and data center automation are heavily influencing how IT operates.

Many organizations choose to invest in generalized technologies. Ambiguously defined IT departments deliver solutions from scratch, often building or coding items similar to ones they have built before. Assumptions are made as to responsibility and ability… (“The DBAs take care of that.”, “Then the sysadmins do their thing.”, “I thought you guys were supposed to watch for that!”)

When evaluating the various functions within a company's IT value stream, benefit can be gained by identifying generalized, technology-based solutions and rethinking them as service providers. As IT components are defined as distinct services, the delivery of applications and information becomes more nimble. Expectations from teams consuming these services are clear, and rework is avoided. Efficiency becomes a byproduct, and higher costs are replaced by greater value.

Posted in CIO, Cloud, CTO, Information Technology

Enterprise Architecture & the Edge of the Cloud


So you are a functional IT leader feeling the pressure around cloud migration. You don't know what a VPC is, but you have learned that “Infrastructure as a Service” (IaaS) and other opportunities are out there.

Which is the correct question?
“How do we get our applications into the cloud?”
“How can we strategically leverage the benefits of the cloud to increase our businesses competitive advantage?”

Seeing both of those questions at once…I think you know which one is correct. But which one of them have you heard or used more often lately?

The knee-jerk reaction tends to be to get everything there so we can get back to our regular business while enjoying lower costs and increased responsiveness from IT services. Due diligence shows this may not be the best way to think about it.

Moving to cloud-based IT infrastructure can be like dating…instead of looking for Miss or Mr. “Right”, we often focus our effort on finding Miss or Mr. “Right-Now”. Unfortunately, the reality is that the cloud does not allow us to wash our hands of all responsibilities around the IT function. However, properly leveraged, it can offer considerable business advantage.

So…where do we start to figure out how to take advantage of the cloud opportunity? One way to begin is to take a look inward at how your business currently uses IT. What are the IT supported functions or capabilities that your business performs and what common characteristics do they have?

If your business uses IT, then you have some individual or group who takes ownership of how your applications are implemented to bring value to your business. This is the person or group responsible for defining or initiating how an application is delivered to provide its intended service. Many businesses dedicate resources to performing this function: these are the “Enterprise Architects”. Enterprise Architecture (EA) marries business goals and requirements to an understanding of the administrative and technical needs of the application technology stack. EA also facilitates creating relationships between services, establishes boundaries, enforces policies, and enables reuse and interoperability.

The concept of Service Oriented Architecture (SOA) describes strategies for logical interoperability between different applications. When considering the actual compute layer and how it can be leveraged strategically for the business, the term Service Oriented Infrastructure (SOI) is used. With this in mind, EA looks at IT business activities and categorizes them according to how they function from a holistic view. This perspective can help identify applications for migration to cloud-based infrastructure. As an example, the matrix below visualizes one way of categorizing IT-related business activities. Activities are related by their relevance to the organization's mission and their level of standardization or maturity.

Capabilities and Services

Here, “Core Activities” refers to services a business performs that are unique to its specific function or mission. These are the things that directly give it competitive advantage and differentiate it from other businesses. These activities can be mission-critical in nature or play a secondary, enabling role. “Context Activities” are functions the business must perform, but that do not define or make the business unique. Context activities may also be mission-critical or enabling in nature.

Using this model, the primary candidates for cloud migration are applications whose functionality has become highly commoditized. These applications perform a vetted service, and their operational management can be outsourced or offloaded to a third party with minimal impact on their function. These activities carry very low risk to the core function of the business and may provide the best starting point for formalizing a cloud orchestration and migration blueprint. Once the cloud blueprint is in place, applications in the other quadrants may follow. Mission-critical core activities are migrated last to minimize potential service disruption.
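The migration ordering the matrix implies can be sketched as a simple sort; the application names and flags below are made up for illustration:

```python
# Sketch of the core/context matrix as code: commoditized context
# activities migrate first, mission-critical core activities last.
applications = [
    {"name": "trading-engine", "core": True,  "mission_critical": True},
    {"name": "email",          "core": False, "mission_critical": False},
    {"name": "payroll",        "core": False, "mission_critical": True},
    {"name": "research-tools", "core": True,  "mission_critical": False},
]

def migration_order(apps):
    # Sort key: context before core, enabling before mission-critical.
    return sorted(apps, key=lambda a: (a["core"], a["mission_critical"]))

order = [a["name"] for a in migration_order(applications)]
print(order)  # ['email', 'payroll', 'research-tools', 'trading-engine']
```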

The role of Enterprise Architect becomes increasingly important to businesses choosing to implement cloud computing. It is the Enterprise Architect who is positioned to understand which business processes will likely benefit from the elastic qualities of cloud computing, and to help drive the organizational change (a people focus) required to move away from “server hugging” philosophies toward agile service delivery.

Many of the major hurdles to effective use of cloud computing are similar to those with which EA is already engaged. In addition to application migration and Infrastructure as a Service (IaaS), EA must consider and strategize the use of other levels of virtualized services and what role they should play for an organization. Service delivery, platform management, provisioning, integration and security are several other aspects in which EA plays a vital role. The architectural disciplines of EA help prevent service anarchy, which can diminish value results.

Posted in CIO, Cloud, CTO, Information Technology

Data Darwinism – Evolving the IT Development Paradigm

Data asset-based strategies must be reliable, repeatable, and produce benefits that are well beyond their costs.
Typically, organizations derive their IT strategy based on known business need at a given point in time. Applications are created to provide answers to specific questions.
When I first learned IT, we started with simple linear programming languages…BASIC, Fortran, etc. Task #1 was to create a logic flow diagram. At some point, developers realized that many pieces of code could be reused, both within the current process and by other processes. Instead of programming in a straight line, they began conditionally looping. When they noticed that the sub-processes being called would also work for other programs, they developed reusable classes and object orientation.
The IT industry grew up focused on process: getting from A to Z. From single-use applications to reusable classes to standardized libraries, IT evolved. However, until recently, the way data was used did not keep pace. IT was primarily bent toward process and building systems to perform those processes.
The growth of the internet as a business platform has spawned a different way of viewing IT. Focus is being re-directed from procedure, and the importance of data strategy is becoming clear.
When we look at all of our IT efforts, data is the common element. Data is the “content” shared across the internet as well as the blood that flows through the veins of our business applications. Applications use, generate and transform data. More and more, we are realizing that, from an enterprise perspective, data that can be shared and integrated across processes delivers better value for a business. IT is evolving from being Application-Centric to being Data-Centric.

Business units A, B and C all have procedures for working with a specific piece of data, each with their own bend for how to get the most out of it. Traditionally, each application project is funded and managed independently. Likewise, their infrastructures and databases are developed in silos. Although many applications need to access data from other applications' databases, those connections are considered only after the fact. APIs are created to link applications or share data as each need is realized. This results in “API spaghetti”, which is complex to manage. Additionally, many applications store like data locally in their own databases. This redundancy is costly from a storage perspective and also leads to poor data quality, as each application alters the data based on its own needs.


When application development is managed from a Data-Centric perspective, data and content become the cornerstone upon which development projects base their architecture. Principles for managing enterprise-level data are designed first. This is followed by engineering development platforms and infrastructures upon which applications that leverage the shared data are built.
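A toy sketch of the data-centric idea, with hypothetical names: one governed read path and one write path, so no application keeps a stale local copy.

```python
# Sketch of shared, governed data: every application goes through the
# same access paths instead of holding its own redundant copy.
shared_customers = {"42": {"name": "Acme Corp", "tier": "gold"}}

def get_customer(customer_id: str) -> dict:
    """Single read path every application uses; no local copies."""
    return shared_customers[customer_id]

def set_tier(customer_id: str, tier: str) -> None:
    """Single write path, so an update is seen by every consumer at once."""
    shared_customers[customer_id]["tier"] = tier

# Two different "applications" read the same record; an update made by
# one is immediately visible to the other.
before = get_customer("42")["tier"]
set_tier("42", "platinum")
after = get_customer("42")["tier"]
print(before, after)  # gold platinum
```

In a real enterprise the access paths would be services over a master data store rather than a dictionary, but the contrast with per-application databases and "API spaghetti" is the point.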

If a well-considered strategy is in place for managing this shared data, the cost of developing the applications that use it decreases. As business requirements change and applications are updated or re-written, the data remains viable for use. This dramatically decreases the cost of adapting the data when new solutions arise. Additionally, a strong data strategy minimizes data inaccuracy; data quality is maximized for the enterprise.

Posted in CIO, CTO, Data Leadership, Information Technology, Leadership, Uncategorized