Data Sovereignty and Cloud Migration

The value proposition of cloud computing is compelling. The cloud offers rapid provisioning of low-cost compute resources, a decreased capital-expenditure burden, and geographically dispersed workloads. Many global companies eagerly leap forward with migration projects, only to be blocked when they encounter jurisdictional compliance rules for data. Shifting cloud strategies to address data sovereignty requirements can result in unplanned additional investment.

Data sovereignty regulations can complicate the delivery model that has made cloud computing attractive, presenting new concerns for companies operating in multiple countries. It’s often assumed that some workloads cannot leverage the benefits of the cloud without being impacted by jurisdictional concerns, so it helps to understand how best to address the issues of jurisdiction and computing across geographical borders in a way that supports varied demands from different applications.

Data sovereignty rules generally center on the idea that digital data is subject to the laws and legal jurisdiction of the country where it is stored. However, many countries' concerns extend beyond basic export control of data; they also consider where the data is created and processed, as well as where and how it is encrypted.

What about governmental bodies attempting to protect information while utilizing the cloud? Germany, Israel, South Korea, and a growing list of other countries all have highly restrictive data sovereignty laws and practices. When planning cloud migrations, it is wise to consider the implications up front and, whenever possible, bake in steps for remediation to prevent unexpected rework and cost.

Step 1

Know your data – Understanding what will be stored in the cloud can require considerable analysis. The companies best prepared to be agile with their compute environments are those with a firm grasp of the nature of their data before migration.

  • Data classification – Does your data include information that may have implications around personal privacy, sensitive financials, security, or company intellectual property?
  • Scope of each classification – What is the level and range of impact of the data to be migrated within each classification category?
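
To make Step 1 actionable, classification can be encoded so that migration tooling checks it automatically. Below is a minimal Python sketch; the category names and region rules are purely illustrative and not drawn from any specific regulation:

```python
# Hypothetical mapping of data classifications to the jurisdictions
# where each may be stored. Categories and regions are illustrative.
CLASSIFICATION_REGIONS = {
    "public": {"any"},
    "internal": {"eu", "us", "apac"},
    "personal": {"eu"},   # e.g. data that must remain in one jurisdiction
    "restricted": set(),  # never leaves the corporate data center
}

def allowed_in_region(classification, region):
    """Return True if data of this classification may be stored in region."""
    regions = CLASSIFICATION_REGIONS.get(classification, set())
    return "any" in regions or region in regions
```

A check like this can gate a migration pipeline: any dataset whose classification disallows the target region is flagged before it moves.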

Step 2

Determine your risk tolerance – Depending on the potential impact of a breach, companies should be ready to make judgments about what they are prepared to handle for each data classification.

  • Map classifications to include intended as well as potential use (or misuse)
  • Weigh data use against international views and rules on privacy

Step 3

Identify tolerable solutions for each classification – Develop and standardize acceptable methods of securing each type of data in a manner that will meet any sovereignty laws that may be applicable to you.

  • Understand the sovereignty rules as well as any activity within the specific geography that may impact these rules.
  • Architect to provide electronic evidence if breaches occur. In the event of an issue, preparedness for dealing with the problem can lessen its impact.
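
The second bullet, architecting for electronic evidence, can be as simple as making audit logs tamper-evident. One common pattern is a hash chain, sketched here in Python; the function names and record fields are my own illustration, not a standard API:

```python
import hashlib
import json

def append_audit_event(chain, event):
    """Append an event to a hash-chained audit log.

    Each entry stores the SHA-256 of the previous entry, so later
    tampering with an earlier event breaks the chain and is detectable.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Evidence like this does not prevent a breach, but it supports the point above: preparedness for dealing with a problem lessens its impact.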

Technology advances at an ever-increasing rate, while jurisdictional regulation will always need to play catch-up. Data is the lifeblood of many large organizations. Due diligence in understanding how it can be used and protected should not be an afterthought.


Technical Debt…A Fickle Mistress

It is not uncommon for IT leadership to lose awareness of the true cost of application ownership. Financial challenges and budget constraints make easy targets of any effort that is not delivering immediate results. As teams look for ways to become more resilient, it is important to consider what factors may have caused the shortcomings in resiliency to begin with.

When making investments in IT, leadership should consider the lifecycle of the investment they are making. How long should it last? What needs to be done to maximize its viability? Just as a car needs regular oil changes and tune-ups, applications, hardware, and other IT investments need care and feeding to ensure they are able to continue delivering what they are intended to deliver. If these needs are not given due consideration when the investment is planned, fixing the resulting issues can be far more expensive.

Much like with your credit card, if you neglect regular payments or pay only the minimum due, the interest on the remainder adds to your principal, and the ultimate cost of what was originally purchased grows... your debt increases. In IT, this is referred to as “technical debt”.
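
The credit-card analogy can be made concrete with compound interest. In this hypothetical Python sketch, the 15% yearly "interest" rate standing in for accumulating workarounds, lost knowledge, and aging dependencies is purely illustrative:

```python
def deferred_cost(principal, interest_rate, years):
    """Cost of deferring remediation: the unpaid 'principal' compounds."""
    return principal * (1 + interest_rate) ** years

# A $100k fix deferred five years at 15% per year roughly doubles:
# deferred_cost(100_000, 0.15, 5) is about 201,136
```

The exact rate is unknowable in practice; the point is the shape of the curve, which is exponential rather than linear.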

At one point in my IT career, I had the opportunity to play a key role in the migration of a legacy financial system to a large-scale ERP solution. At the time, the trade journals were full of stories about lawsuits over failed ERP implementations. I remember reading about a lot of finger-pointing, most of it meant to cover up poor estimation of effort on the part of the contractors engaged to manage the transition.

Although our implementation was ultimately successful, my position as a technical lead allowed me to gain insight into some of the factors that contributed to our success, as well as others that limited our momentum and slowed progress.

This project was to implement a number of functional modules, each related to a corresponding area of our legacy system. Time and again, the software vendor would explain the benefits of remaining as close to a vanilla, out-of-the-box implementation as possible. Although their design facilitated customization, they indicated that we were better off avoiding it wherever possible. This became our project norm going forward.

As you might guess, during requirements gathering, many areas previously identified as suitable for a vanilla installation turned out not to meet the needs of our functional areas. The out-of-the-box software that initially looked like a good match for our functional processes did not quite satisfy the more granular expectations of our functional people. What we discovered was that, over time, non-standard use of the legacy system had gone undocumented but had become accepted process. As business processes shifted, use of the system had been modified and expanded beyond its original conception. That use became ingrained in common work processes and was now necessary to everyday functionality. Our problem was that we first had to invest time and resources in catching up on all of the newly surfaced requirements. This was a challenge because none of it was documented; it was simply understood by the functional teams who used it every day.

We also learned of many shadow systems that had been created to supplement the legacy system: integration points, and data feeds to and from different systems. Over time, supplemental functionality had been pieced together with little, and sometimes no, resiliency or lifecycle management.

As an example... I recall discovering that, in order to close our books each month, we needed to import data from an outdated system that one of our functional areas maintained. Although it was old, it worked, and because it worked, several other functional areas began piggybacking off of it as a means of getting their work done with minimal investment. Of course, this outdated system was not at all compatible with the new ERP. It was written using code libraries that IT Risk did not consider secure and that were not even supported by the operating systems on our new machines. Since it was working, nobody ever considered maintaining it with patches or upgrades. After all... the business unit that owned it was only concerned with the viability it provided for their small area. Even though its use had grown beyond their group, no investment was made to bring the system to a level of resilience or compliance suitable for its expanded role.

Had due attention been given to that system's role in the enterprise, the organization would have ensured it received appropriate care. Instead, the system went unattended until an event forced modernization. Playing catch-up on patching and upgrades was costly, but it would have been even more expensive for the organization if the triggering event had been a failure that prevented the system from delivering its business value.

The hard fact is that this debt must eventually be paid. If regular payments (resiliency maintenance) are not planned as a standard part of doing the business of IT, a larger expense eventually results. Unfortunately, this debt is very easily overlooked or deemed optional when budgets are drafted or challenged. Payment of technical debt is too easily “leaned out”, but it snowballs into greater expense down the line.

This can be a difficult lesson for application owners. Even worse, the lesson is often learned at the expense of an application owner who was not part of the implementation when the initial budgets and resiliency models were planned.

Some IT management gurus view technical debt as the product of two dimensions: choice and prudence. Choice can be deliberate or inadvertent; prudence can be reckless or discerning. Here are some examples of how these play together:

Technical Debt

  • When you look at your IT budget, do you consider the future effect of your decisions with regard to resilience?
  • Is application life-cycle longevity and financial risk avoidance “baked” into your budgetary planning?
  • Are short-term savings or personal recognition more important than the exponentially larger expenses that can occur later or the overall financial well-being of the organization?

Responsible application ownership involves examining and planning for all phases of a system's lifecycle. Putting off application vitality and resilience is easy, and often tempting, given the ever-present pressure to lower expenses. Just as in your personal finances, living off of credit adds exponential overhead to your bottom line.


IT 2.0 – Transformation Perspectives

BT Method

Not too long ago I wrote a blog post (I.T. 2.0 – The Death of Design>Build>Run) that discussed the value to business of moving away from operational IT mindsets and towards a newer IT Paradigm. Although generally well received, I was not surprised to find that there were some who recoiled at the idea of shifting away from the traditional stability of operating under what has been an institution of IT Methodology.

As much as we may wish to avoid change, evolution refuses to give in. Even more difficult…the rapidly changing IT landscape drives the need for constant reexamination and does not afford the luxury of passivity. Embracing change is key to the IT supply framework’s potential for delivering value.

Cloud computing has driven an upheaval in the strategic IT landscape. Where IT was once considered a necessary cost of doing business, it is now positioned to take its place as more of a value center.

Transformation is a forward-moving effort. Evolving the way IT is supplied to the business is not well served by framing change based on past delivery styles. Agility will struggle if constrained by compartmentalizing efforts into phases like Design, Build, and Run.

A major component of cloud methodology is achieving a level of automation that will free the development life cycle for applications from being hindered by provisioning and deployment operations. This is DevOps.

There are a number of misconceptions about exactly what DevOps is.

DevOps is not:

  • A set of automation tools
  • DEVelopment taking on the OPerational function
  • OPerations taking responsibility for DEVeloping code
  • Architects taking responsibility for code migration

DevOps is merely a culture shift that reduces cycle time by automating redundant orchestration activity.

  • Architects still drive the blueprint for IT
  • Developers still code
  • Operations still execute and monitor

What was previously considered “Design” now focuses on the blueprinting efforts of enterprise architects and the conceptual efforts of application design teams.

“Build” becomes a development effort in which teams code both their application functionality and the commands to provision their infrastructure, while maintaining repositories of reusable code that reduce cycle times.

“Run” becomes the operational component that consumes the code by executing it and monitoring how successfully it operates.

IT shapes itself to promote an “Assemble-to-order” structure leveraging services and micro-services.
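
The "assemble-to-order" idea can be sketched as function composition: small, reusable services are combined into a new capability without redesigning any of them. A toy Python illustration (the service names are hypothetical):

```python
def assemble(*services):
    """Compose small reusable services into one capability: combine, then consume."""
    def composed(value):
        for service in services:
            value = service(value)
        return value
    return composed

# Two tiny "services", reused unchanged in a new assembly
def normalize(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

extract_terms = assemble(normalize, tokenize)
```

The point is not the pipeline itself but that neither `normalize` nor `tokenize` had to change to be combined; each remains available for reuse elsewhere.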

Another aspect of IT Transformation involves embracing a “Start-up” mentality. Over time, organizations tend to create monolithic icons of IT projects that are cumbersome to evolve as needed. Replacing large development efforts with smaller ones that can be combined to deliver more comprehensive functionality not only builds components that can be made available for reuse, but more importantly provides the ability to shift or pivot if priorities or business climate warrants. Additionally, it allows faster recovery from failure.

A “start-up” mentality takes the position that you only know what you know right now and need to ensure that you retain value when the unknowns of tomorrow present themselves.

Commitment to IT transformation is by nature diametrically opposed to traditional IT. Transformation cannot succeed by simply re-forming existing methodologies. With transformation comes risk of failure. Small failures lead to organic growth. Effectively managing scope during transformation will bring longer-term viability and success.


Considering DBaaS (Database as a Service)

When planning to use a DBaaS solution, customers will likely need help with tasks such as deployment, migration, support, off-site backup, system integration and disaster recovery. Then there are the applications that connect to the database and the databases themselves that need to be designed, developed, deployed, tuned and monitored. And what about organizations that are implementing hybrid systems or need help managing multiple cloud service providers?

DBaaS is an excellent way for both DBAs and application architects to increase their agility, efficiency and cost-effectiveness. Lean IT strategies benefit from lower infrastructure costs and faster delivery times for database-management functions, allowing database professionals to focus on performance optimization and database technical strategy.

From the perspective of application development and ownership, DBaaS addresses many of the limitations inherent to a traditional RDBMS solution. A major benefit for developers is the ability to rapidly self-provision databases for temporary use at minimal cost.

Database cloud services eliminate the need for organizations to dedicate many resources to on-site database storage. They don’t have to install, configure or maintain hardware, and basic administration of software is fully automated. In addition to the infrastructure footprint housing the database itself, DBaaS will often cover the physical and administrative requirements for high availability and disaster recovery. Options for replication, backup and load balancing are “baked-into” the service.

With DBaaS, database-related processes become standardized and repeatable, allowing teams to more easily procure services, deploy applications, plan capacity and manage resources. The DBaaS model can also help reduce data and database redundancy and improve overall Quality of Service.

Understand the Limitations

Sounds great...right?! Well, it is...as long as you understand how the service you are getting is defined and, more importantly, what the service does not allow for. There are big advantages to be leveraged with DBaaS; however, there are also some limitations.

Anything put in the cloud is subject to network performance issues outside of your direct control. If the Internet service provider, the cloud service supporting the database, or any point in between becomes clogged or goes down, you might experience data latency or application failure. At least when a problem occurs in-house, the infrastructure team can more easily troubleshoot its cause.

In addition, features available in the typical RDBMS are not always available in a DBaaS system. For example, Windows Azure SQL Database (formerly SQL Azure) is Microsoft’s DBaaS offering that provides a database platform similar to SQL Server. However, Windows Azure SQL Database doesn’t support features such as data compression and table partitions, and the Transact-SQL language elements available in SQL Database are only a subset of those available in SQL Server. Amazon’s Oracle offering via RDS restricts use of the following features:

  • Real Application Clusters (RAC)
  • Data Guard / Active Data Guard
  • Oracle Enterprise Manager (although “DB control” is OK)
  • Automated Storage Management
  • Data Pump
  • Streams

People looking to migrate databases to the cloud need to thoroughly assess which features may be needed but might not be available.
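
That assessment can be partially automated: keep an inventory of the features each application depends on and diff it against what the offering supports. A hedged Python sketch; the unsupported-feature set below abbreviates the RDS-for-Oracle list above and should always be verified against the provider's current documentation:

```python
# Providers change these restrictions over time; treat this table
# as an illustrative snapshot, not an authoritative feature matrix.
UNSUPPORTED = {
    "rds-oracle": {"RAC", "Data Guard", "OEM", "ASM", "Data Pump", "Streams"},
}

def migration_blockers(required_features, offering):
    """Return the required features the DBaaS offering does not support."""
    return sorted(set(required_features) & UNSUPPORTED.get(offering, set()))
```

Running this against every database slated for migration turns a vague "check compatibility" action item into a concrete, reviewable report.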

What about Database Administrators?

In addition, the nature of DBaaS (like any cloud-based service) is limited by what the provider can automate. These services do not provide the targeted, individual attention that may be needed to plan a configuration or tune its use. DBAs remain a valuable commodity, whether operating in the cloud or in-house. XaaS services delivered in the cloud do not accommodate non-standard requests or deep-dive tasks requiring human intervention or analysis.

Plan Your Work…Work Your Plan

Diligence is necessary when planning a move of database environments to DBaaS. The platform can offer considerable cost savings to an organization. For database admins, DBaaS can increase their level of automation, simplify basic configuration tasks, and free them to focus on new technologies and opportunities for improving performance. Application developers benefit from the added agility and availability of development and test environments that can be provisioned rapidly. Application owners get more for less cost, as environments supporting development are paid for only when in use and production resource costs scale dynamically with load.

The database tier is critical within the application tech stack. It is also one of the more costly components that business critical applications require. DBaaS solutions provide lower resource costs and much greater speed and flexibility during development. Customers have the opportunity to self-provision and self-administer for low criticality environments and DBAs can keep their attention on optimizing configuration and performance for systems where the level of importance to the business needs to be assured.


Migrating Applications to the Cloud

“Every cloud has its silver lining but it is sometimes a little difficult to get it to the mint”
-Don Marquis

To the cloud!

One of the first things you may be led to believe about migrating applications to the cloud is the potential for substantial savings. The idea of driving down IT costs by leveraging virtualization and third-party providers can be quite compelling for budget-stressed organizations.

The benefits of the cloud are quite real. However, migration must be done responsibly. By nature, applications operating in the cloud have a different architectural footprint. Without due planning, cloud migrated applications can lack resiliency and potentially have higher cost than they did before migration.

Conscious uncoupling

Legacy applications are typically built with the assumption that they will run within the bounds of corporate data centers, protected by company firewalls and resourced by hardware sized to meet the application's peak loads. Under these circumstances, code and middleware operate together happily, knowing they live in an infrastructure with costly resources pre-allocated for them. Connections to other applications and systems are made because they have controlled, static access to the data they need.

Over time, additional applications become tightly-coupled together. Dependencies between applications become constraints to agility and can prevent or complicate changes to application infrastructures.

One of the primary fundamentals that cloud based applications leverage is the ability to scale elastically. As application load increases and decreases, the resources allocated to them are dynamically adjusted by the addition or removal of virtualized servers. This allows application owners to pay only for resources as they are needed and save on excess resources that may not be in use. To accomplish this, applications that use cloud environments work optimally when they are designed to be more loosely-coupled with the servers that they are on and any dependent applications they use or are used by.
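
The elasticity described above ultimately reduces to a scaling rule. This is a deliberately simplified, provider-agnostic Python sketch of a proportional rule, not any real cloud autoscaling API:

```python
import math

def desired_instances(current, load_per_instance, target_load,
                      min_instances=1, max_instances=20):
    """Scale the fleet so average per-instance load approaches the target."""
    total_load = current * load_per_instance
    if total_load <= 0:
        return min_instances
    needed = math.ceil(total_load / target_load)
    return max(min_instances, min(max_instances, needed))
```

Real autoscalers add cooldown periods and smoothing so the fleet does not thrash, but the pay-for-what-you-use economics follow from exactly this kind of rule.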

All about dat base

When transactions within applications are not considered complete until they are committed and all data is up-to-date, they are referred to as being ACID (dependent on Atomicity, Consistency, Isolation and Durability). This is fine within the walls of highly controlled corporate data centers. However, when working with third party cloud providers, these dependencies or assumptions call for a different paradigm.

For success with cloud environments, transactions should operate under the paradigm that services are Basically Available, Soft-state and Eventually consistent (BASE). Application developers acknowledge through their code that the resources they use will periodically fail and that the transactions in their applications will eventually become consistent. Servers supporting the application can fail, come and go based on overall load, or be taken out temporarily for maintenance. Coding for BASE transactions encourages the resiliency of applications operating in cloud environments.
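
What "coding for BASE" looks like in practice is largely defensive retry logic: the application assumes any dependency can fail transiently and recovers rather than aborting. A minimal Python sketch (a real system would add jitter and ensure the retried operation is idempotent, so a repeated write is safe):

```python
import time

def call_with_retries(operation, attempts=5, base_delay=0.1):
    """Retry a transiently failing operation with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```

The application-level contract is the important part: a failed call is an expected event with a recovery path, not an exception that halts the transaction.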

Service Oriented Architecture

Another key enabler for cloud-based applications is designing them to call on functionality provided by pre-defined, reusable services. When data or functionality is used by multiple areas of an application, or called upon by dependent applications, efficiency and resilience are gained by pre-defining access methods and leveraging an integration mechanism that assures the intended manner of use. SOA is often referred to as an application integration mechanism; however, Service Oriented Architecture at its heart describes a way to design applications that breaks the necessity of maintaining static connections between interdependent systems. (Here is a good video that describes Service Oriented Architecture.) SOA does this through an integration mechanism based on self-defining, standardized services.

With these concepts understood, most applications will benefit from some level of reconstruction to reap the advantages of the cloud. Application teams should be prepared for some redevelopment and re-architecting. The cloud can also streamline the software development lifecycle: not only does the application code need consideration, but now that the infrastructure can be dynamically provisioned using code, it should also be included in the SDLC process.


DevOps refers to the ability to provision servers and compute environments dynamically, leveraging the speed and repeatability that come from provisioning being code-based. When brought into the SDLC, individual development, test, or staging environments may be instantiated as needed. Creating these environments is incredibly fast compared with building physical legacy ones, and cost is incurred only while the environments are actually in use. Once tasks are completed, the environments can be destroyed rather than sit idle.

On-demand provisioning of environments is a major feature of the cloud that can pull cost out of the application lifecycle. It also gives development teams greater flexibility and agility, which can speed up cycle times during sprints.

Accelerated Development

Building on the strengths of DevOps, several methodologies are possible which allow for further acceleration of the development process. Automated testing and migration tools can be wrapped with workflow mechanisms. Elements of service management and traditional ITIL can be worked in to allow for continuous integration, deployment and delivery of entire applications or changes to existing applications. This frees up developers to focus their efforts on code rather than process.

Other Considerations

Aside from the development lifecycle impact of the cloud, other items must be taken into consideration when planning migrations. Keep in mind that not all applications are good candidates for the cloud. Depending on an organization's compliance requirements, many applications may not lend themselves to deployment on public clouds. For applications containing information that is sensitive due to personal data or intellectual property, a thorough analysis of how security will be handled is essential. Private clouds hosted within the walls of a company may be more suitable.

Also, applications should be architected with the understanding that developers and application teams will not have OS-level access to cloud-provisioned servers. A logging strategy should be incorporated to facilitate troubleshooting and to provide output for developers as well as system administrators. This ensures that provisioned servers remain in a state that allows them to be dropped and recreated without modification beyond the code-based deployment.
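
One simple way to implement such a logging strategy is to emit structured records to stdout, where the platform's log collector ships them elsewhere, rather than writing files on a server that may be destroyed at any time. A minimal Python sketch (the field names are illustrative):

```python
import json
import sys
import time

def log_event(level, message, **fields):
    """Write one structured log line to stdout for a collector to ship."""
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    line = json.dumps(record, sort_keys=True)
    sys.stdout.write(line + "\n")
    return line
```

Because nothing is written to local disk, the server stays disposable: it can be dropped and recreated purely from the code-based deployment, and the logs survive in the aggregation tier.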


There are immense benefits to migrating legacy applications to cloud-based environments. However, the level of benefit depends heavily on the diligence used in planning the migration. A straight “lift and shift” of many applications to cloud-based environments can be counter-productive and result in higher costs for the application owner. Although your initial impulse may be to get there fast, understand the potential pitfalls and plan your migration to ensure long-term success.


IT 2.0 The Death of Design>Build>Run


It is easy to see how rapidly IT is evolving. As was foretold, technology continues to change at an increasing pace. Emerging technologies which increase the amount and relevance of information are driving businesses to explore more agile and efficient methods of IT service delivery.

IT supply frameworks based on traditional “Design>Build>Run” phases now struggle to keep pace with the need for faster delivery. The FastWorks idioms of “failing fast” and “pivoting” become constrained by process-heavy SDLCs and deficiencies in the economy of reuse. Additionally, opportunities with virtualized infrastructures and cloud provisioning have changed the IT landscape to more closely resemble that of software development. The idea of “Infrastructure as Code” offers agile provisioning, but calls for the same diligence as application code.

Design>Build>Run worked well in the 90s, and in the early part of this century it managed to get us by as SOA became more widely adopted. Now, cloud infrastructures and services promise greater agility, yet many businesses remain constrained by their older IT delivery methods. The IT beast gets hungrier every day, and we must re-examine the sources of our constraints and adjust accordingly. Service orientation is becoming more granular. Services and micro-services increase agility, but they need to be managed.

Reduction in rework helps. When a need arises, utilizing existing solutions (code and infrastructure patterns) reduces build cycle times. When services are smaller and more loosely coupled, they adapt to a wider variety of use cases, and repetitive tasks are more easily automated. Service definition allows adoption to happen more quickly and with fewer resources. For infrastructure code, this enables rapid provisioning of compute environments. Additionally, the cloud and virtualization provide compute resources with temporary lifespans at low cost.

The design element can be replaced by conceiving a solution from building blocks that are already available. Build becomes an exercise in combining or orchestrating these components. Run is now consuming the solutions for their intended purpose. Design>Build>Run evolves to Conceive>Combine>Consume: a newer paradigm that is lean and agile. IT throughput becomes more nimble, providing the business greater value.


Presenting smaller pieces of infrastructure code that are frequently assembled in similar ways further enables rapid deployment through parameter-driven blueprints or templates. Complex environments may be requested and provisioned in an automated, on-demand fashion, avoiding the bottlenecks of redesign and rebuild. Likewise, integration between provisioned environments is simplified and easily automated because their configurations are known.
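
A parameter-driven blueprint can be as plain as a template with substitution points. This Python sketch uses the standard library's string templating; the field names (size, replicas, region) are illustrative stand-ins for whatever a real provisioning tool would accept:

```python
from string import Template

# One reusable blueprint, many environments
BLUEPRINT = Template("env=$env size=$size replicas=$replicas region=$region")

def render_environment(env, size="small", replicas=1, region="us-east"):
    """Produce a concrete environment spec from the shared blueprint."""
    return BLUEPRINT.substitute(env=env, size=size,
                                replicas=replicas, region=region)
```

The blueprint is designed once; each new environment request becomes a parameter set rather than a redesign, which is exactly the bottleneck the paragraph above describes removing.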

For business, the goal is for the new IT to empower stakeholders rather than control them. IT will move from a cost center to the value center it was intended to be. Infrastructure as code will shift the role of the enterprise architect from solution designer to solution orchestrator. Business will be able to gain more using less.


Swimming in the IT Value Stream

Before 2001, when information technology promised companies a means to greater advantage, the general thought was that greater IT investment meant greater advantage. So IT spend was focused on growing company IT infrastructures and hiring highly technical staff to support and operate them. As time passed and the rate of new technology increased, the economic pressure of owning and maintaining large IT infrastructures became burdensome for companies.

Eventually, opportunity-seeking companies learned that standardizing the components of IT delivery would not only allow them to operate lean but, if done a certain way, could also be used by others. They found they could offset their own IT spend and increase their margins by leveraging unused portions of their compute resources. These offerings then became products unto themselves, further improving margins. Many aspects of IT delivery became commoditized, and infrastructure, platform, and software “as a service” offerings were born. These services began to compete with corporate IT departments that delivered at much higher cost and significantly slower speed. Companies began to rethink their IT investment options in favor of more frugal offerings.

The overall effectiveness of an organization's IT strategy is determined not so much by the amount of IT investment as by how it has chosen to invest. The cloud has become a very attractive option for companies. However, the model for these services was not designed to be a total replacement for IT, but rather an alternative for certain IT components. Mapping business goals to viable solutions, and administering those solutions so they deliver, remains essential. Key to this is successfully identifying the IT value stream and directing finances in a manner that maximizes productivity.

So…what does this mean? The “IT Value Stream” is the flow between the various functions that deliver information-related solutions to the advantage of the organization they support. The amount of advantage that IT can provide is directly related to the volume and speed of this stream relative to the cost of keeping it flowing. The level of success depends on how much the delivered information provides a competitive edge to the company while minimizing the cost of supplying it.

IT Leadership plays the role of custodian and guardian for the IT value stream. If the flow is impaired or misdirected, leadership acts to re-align or fix it.

Improved Data @ Lower Costs = Business Advantage

All of this makes sense…but only if the value stream is recognized and managed. Without due diligence, the value stream will tend to break off in different directions, and the advantages that were intended to be gained are weakened. Here are a few items to consider:

  • Identify the components that make up the value stream
  • Strategize how to increase the throughput of each component
    • Encourage a focus on efficiency…reusability, automation (IT as code)
  • Architect for smaller, loosely coupled value stream components (services)
    • Service Oriented Architecture
    • Design for scalability
  • Build quality control into the flow
  • Look to continuously improve
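
The list above can be sketched in a few lines of code. This is a minimal, illustrative example (the service and function names are hypothetical, not from any specific framework): a loosely coupled value-stream component is a small service with an explicit contract, so consumers depend on the interface rather than on any one implementation.

```python
from abc import ABC, abstractmethod

# Contract: consumers of this value-stream component depend only on the
# interface, so the implementation behind it can be scaled or replaced
# independently. (All names here are illustrative.)
class ReportService(ABC):
    @abstractmethod
    def generate(self, customer_id: str) -> dict: ...

# One interchangeable implementation; a cloud-hosted version could be
# swapped in later without touching any consumer code.
class LocalReportService(ReportService):
    def generate(self, customer_id: str) -> dict:
        return {"customer": customer_id, "status": "ok"}

def monthly_close(service: ReportService) -> dict:
    # The consuming process is coupled to the contract, not the provider.
    return service.generate("ACME-001")

print(monthly_close(LocalReportService()))
```

The design point is the one the bullets make: because `monthly_close` only knows the interface, throughput can be improved (caching, scaling, outsourcing to a cloud service) inside the component without breaking the flow around it.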

These are some of the fundamentals behind the “DevOps” shift in IT culture. DevOps examines the intersection between software development, technology operations, and quality assurance, and then optimizes them to provide rapid delivery of IT solutions. As businesses move faster to compete and technologies advance at a more rapid pace, solutions such as virtualization, cloud services, agile development, and data center automation are heavily influencing how IT operates.
Many organizations choose to invest in generalized technologies. Ambiguously defined IT departments deliver solutions from scratch, often building or coding items similar to ones they have built before. Assumptions are also made about responsibility and ability… (“The DBAs take care of that.”, “Then the Sysadmins do their thing.”, “I thought you guys were supposed to watch for that!”)

When evaluating the various functions within a company’s IT value stream, benefit can be gained by identifying generalized, technology-based solutions and rethinking them as service providers. As IT components are defined as distinct services, the delivery of applications and information becomes more nimble. Expectations from teams consuming these services are clear, and rework is avoided. Efficiency becomes a byproduct, and higher costs are replaced by greater value.

Posted in CIO, Cloud, CTO, Information Technology

Enterprise Architecture & the Edge of the Cloud


So you are a functional IT leader feeling the pressure around cloud migration. You don’t know what a VPC (virtual private cloud) is, but you have learned that Infrastructure as a Service (IaaS) and other opportunities are out there.

Which is the correct question?
“How do we get our applications into the cloud?”
“How can we strategically leverage the benefits of the cloud to increase our business’s competitive advantage?”

Seeing both of those questions at once…I think you know which one is correct. But which one of them have you heard or used more often lately?

The knee-jerk reaction tends to be to move everything there so we can get back to our regular business while enjoying lower costs and increased responsiveness from IT services. Due diligence shows that this may not be the best way to think about it.

Moving to cloud-based IT infrastructure can be like dating…instead of looking for Miss or Mr. “Right”, we often focus our effort on finding Miss or Mr. “Right-Now”. Unfortunately, the reality is that the cloud does not allow us to wash our hands of all responsibilities around the IT function. However, properly leveraged, it can offer considerable business advantage.

So…where do we start to figure out how to take advantage of the cloud opportunity? One way to begin is to take a look inward at how your business currently uses IT. What are the IT supported functions or capabilities that your business performs and what common characteristics do they have?

If your business uses IT, then you have some individual or group who takes ownership of how your applications are implemented to bring value to your business. This is the person or group responsible for defining or initiating how an application is delivered to provide its intended service. Many businesses dedicate resources to performing this function: these are the “Enterprise Architects”. Enterprise Architecture (EA) marries business goals and requirements to an understanding of the administrative and technical needs of the application technology stack. EA also facilitates relationships between services, establishing boundaries, enforcing policies, and enabling reuse and interoperability.

The concept of Service Oriented Architecture (SOA) is used to describe strategies for the logical interoperability between different applications. When considering the actual compute layer and how it can be leveraged strategically for the business, the term Service Oriented Infrastructure (SOI) is used. With this in mind, EA looks at IT business activities and categorizes them according to how they function from a holistic view. This perspective can help identify applications for migration to a cloud-based infrastructure. As an example, the matrix below visualizes one way of categorizing IT-related business activities. Activities are related by their relevance to the organization’s “mission” and their level of standardization or maturity.

Capabilities and Services

Here, “Core Activities” refer to services that a business performs which are unique to their specific function or mission. These are the things that directly give them competitive advantage and differentiate them from other businesses. These activities can be mission critical in nature or play a secondary or enabling role. “Context Activities” perform functions that the business must do, but do not define or make the business unique. Context activities also may be mission critical or enabling in nature.

Using this model, the primary candidates for cloud migration would be applications whose functionality has become highly commoditized. These applications perform a vetted service and their operational management can be outsourced or offloaded to a third party with minimal impact on their function. These activities have very low risk to the core function of the business and may provide the best starting point for formalizing a cloud orchestration and migration blueprint. Once the cloud blueprint is in place, applications in the other quadrants may follow. Mission critical / core activities are migrated last to minimize potential service disruption.
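
The quadrant model above can be turned into a simple ranking. The sketch below is purely illustrative (the application names and scoring weights are hypothetical, not from the matrix itself): context, enabling activities carry the least migration risk and so surface first, while core, mission-critical activities rank last.

```python
# Hypothetical application inventory, classified along the two axes of
# the matrix: core vs. context, and mission-critical vs. enabling.
apps = [
    {"name": "payroll",        "core": False, "mission_critical": False},
    {"name": "crm",            "core": False, "mission_critical": True},
    {"name": "pricing-engine", "core": True,  "mission_critical": True},
]

def migration_priority(app: dict) -> int:
    # Lower score = migrate earlier. Core activities add the most risk,
    # mission-critical adds some; commoditized context/enabling apps
    # score zero and go first. Weights are illustrative only.
    return (2 if app["core"] else 0) + (1 if app["mission_critical"] else 0)

for app in sorted(apps, key=migration_priority):
    print(app["name"], migration_priority(app))
```

A real assessment would weigh many more factors (data sovereignty, integration depth, licensing), but even a toy scoring like this makes the sequencing argument explicit and repeatable across a large portfolio.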

The role of Enterprise Architect becomes increasingly important to businesses choosing to implement Cloud Computing. It is the Enterprise Architect who is positioned to understand which business processes will likely benefit from the elastic qualities of Cloud Computing and help drive the organizational change (people focus) required to move away from “server hugging” philosophies to being more focused on agile service delivery.

Many of the major hurdles to effective use of cloud computing are similar to those with which EA is already engaged. In addition to application migration and Infrastructure as a Service (IaaS), EA must consider and strategize the use of other levels of virtualized services and what role they should play for an organization. Service delivery, platform management, provisioning, integration, and security are several other aspects in which EA plays a vital role. The architectural disciplines of EA help prevent service anarchy, which can diminish value results.

Posted in CIO, Cloud, CTO, Information Technology

Data Darwinism – Evolving the IT Development Paradigm

Data asset-based strategies must be reliable, repeatable, and produce beneficial results that are well beyond their costs.
Typically, organizations derive their IT strategy based on known business need at a given point in time. Applications are created to provide answers to specific questions.
When I first learned IT, we started with languages that encouraged straight-line, procedural code…BASIC, Fortran, etc. Task #1 was to create a logic flow diagram. At some point, developers realized that many of the code pieces and parts could be reused, both within the current process and by other processes. Instead of programming in a straight line, they began conditionally looping. When they noticed that the sub-processes being called would also work for other programs, they developed reusable classes and object orientation.
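
That evolution, from repeating logic inline to packaging it as a reusable piece, can be shown in a few lines. This is a toy example of my own, not from any particular system:

```python
# Linear style: the same calculation is repeated inline each time it
# is needed (tax plus a flat fee, values are made up for illustration).
total_a = round(100 * 1.08 + 5, 2)
total_b = round(250 * 1.08 + 5, 2)

# Reusable style: the shared sub-process becomes a function that any
# program (or a method on a class) can call instead of re-coding it.
def invoice_total(amount: float, tax_rate: float = 0.08, fee: float = 5.0) -> float:
    return round(amount * (1 + tax_rate) + fee, 2)

print(invoice_total(100), invoice_total(250))
```

Once the logic lives in one place, a change to the tax rule happens once rather than in every program that copied the inline version.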
The IT industry grew up focused on process. Getting from A to Z. From single use applications to reusable classes to standardized libraries; IT evolved. However, until recently, the way data was used did not keep pace. IT was primarily bent toward process and building systems to perform those processes.
The growth of the internet as a business platform has spawned a different way of viewing IT. Focus is being re-directed from procedure, and the importance of data strategy is becoming clear.
When we look at all of our IT efforts, data is a common element. Data is the “content” that is shared across the internet as well as the blood that flows through the veins of our business applications. Applications use, generate and transform data. More and more we are realizing that from an enterprise perspective data that can be shared and integrated across processes delivers better value for a business. IT is evolving from being Application-Centric to being Data-Centric.

Business units A, B, and C all have procedures for working with a specific piece of data, each with its own slant on how to get the most out of it. Traditionally, each application project is funded and managed independently. Likewise, their infrastructures and databases are developed in silo fashion. Although many applications need to access data from the individual application databases, those connections are only considered after the fact. APIs are created to link applications or share data as the need is realized. This results in “API spaghetti”, which is complex to manage. Additionally, many applications store like data locally in each of their databases. This redundancy is costly from a storage perspective and also leads to poor data quality, as each application alters that data based on its own needs.


When application development is managed from a data-centric perspective, data and content become the cornerstone upon which development projects base their architecture. Principles for managing enterprise-level data are designed first. This is followed by engineering the development platforms and infrastructures upon which applications that can leverage shared data are built.

If a well-considered strategy is in place for managing this shared data, then the cost of developing the applications that use it decreases. As business requirements change, and applications are updated and rewritten, the data remains viable for use. This dramatically decreases the cost of adapting the data when new solutions arise. Additionally, a strong data strategy minimizes data inaccuracy. Data quality is maximized for the enterprise.
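
As a minimal sketch of the data-centric idea (class and field names are hypothetical), the shared data lives behind one service instead of being copied into each application’s own database:

```python
# Hypothetical sketch: one authoritative customer record, exposed through
# a single data service rather than duplicated per application.
class CustomerDataService:
    def __init__(self):
        self._records = {}  # the single system of record

    def upsert(self, customer_id: str, **fields) -> None:
        # Create the record if needed, then merge in the new fields.
        self._records.setdefault(customer_id, {}).update(fields)

    def get(self, customer_id: str) -> dict:
        # Return a copy so callers cannot mutate the shared record.
        return dict(self._records.get(customer_id, {}))

# Applications A, B, and C all read and write through the service, so a
# correction made by one is immediately visible to the others; there are
# no redundant copies and no point-to-point "API spaghetti".
svc = CustomerDataService()
svc.upsert("C-1", name="ACME", region="EU")  # written by the billing app
svc.upsert("C-1", region="US")               # corrected by the CRM app
print(svc.get("C-1"))
```

The contrast with the application-centric picture above is the point: each business unit keeps its own procedures, but there is only one copy of the data to keep accurate.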

Posted in CIO, CTO, Data Leadership, Information Technology, Leadership, Uncategorized

Tacit I.T. Leadership (pt. 2)


In part 1 on this topic, I discussed how contextual intelligence is the ability to rapidly and intuitively recognize and respond to the dynamic circumstances inherent in an event or situation. Tacit leadership uses contextual intelligence, with purposeful changes in action or opinion, to direct results and exert appropriate influence in that context.

That is the $500,000 way of saying that tacit leadership uses the context of a situation to produce the best results. But don’t we all do this by nature?…Yes, to a certain extent tacit leadership is in everyone’s toolbox. Unfortunately, opportunism, resource constraints, entitlement, and personality can limit how far some are able to develop this talent to the level where it is highly instinctual and optimally leveraged.

Sometimes life gets in the way…sometimes life enables. Awareness of the opportunities to flex this muscle can be one of the first steps to improvement.

So, what are the traits that strong leaders have that bless them with tacit IT leadership skills?

Context Diagnosis: Knows how to appropriately interpret and react to changing and volatile surroundings.
Contextual IT Expertise: Has a level of subject matter experience sufficient to provide meaningful application and practical use of given technologies to the mission or use case of the business.
Critical Thinking: The ability to make practical application of different actions, opinions, and information.
Future Mindedness: Has a concern for where the organization should be in the future; a forward-looking mentality and sense of direction.
Influential: Uses interpersonal skills to non-coercively affect the actions, opinions, and decisions of others.
Awareness of Mission: Understands and communicates how the performance of others can influence subordinates’, peers’, and supervisors’ perception of the mission at hand and the road needed to get there.
Change Acceleration: Has the courage to raise difficult and challenging questions that others may perceive as a threat to the status quo. Proactive rather than reactive in rising to challenges, leading, participating in, or making change (i.e., assessing, initiating, researching, planning, constructing, and advocating).
Consensus Building: Exhibits interpersonal skill and convinces other people to see the common good or a different point of view for the sake of the organizational mission or values by using listening skills, managing conflict, and creating win-win situations.
Conscious Leadership: Intentionally assesses and evaluates their own leadership performance and is aware of strengths and development needs. Action-oriented toward continuous improvement of leadership ability.
Effective and Constructive Use of Influence: Uses interpersonal skills, personal power, and influence to constructively and effectively affect the behavior and decisions of others. Demonstrates the effective use of different types of power in developing a powerful image.

As leaders, or when we interview candidates for leadership, it helps to consider these traits: exercise them ourselves, invest in growing them in our reports, and test for them in the people we are considering hiring. This may sound obvious, but in many organizations IT reports up to functions that may not have a contextual I.T. background. For example, I.T. may be a sub-function of finance. You may have a very strong financial leader, or even a strong process leader, but their I.T. context might be limited to what technology does for their function; they may not have enough technical exposure to set optimal direction for a given I.T. effort.


People used to say, “Change is inevitable”…Today, that does not seem to say enough, especially when leading information technology. Change is not only inevitable; it is more rapid and accelerating constantly. Things that made us experts yesterday are now commoditized and automated. New technologies are always arriving, and we need to understand how they fit into the mission of our business rather than just how they work or what they can do for us.


Posted in Data Leadership, Information Technology, Leadership, Tacit IT Leadership