This is the first of a three-part series on Taming Complexity for Competitive Advantage
New technologies often meet reluctance.
Markets are not static. The failure to anticipate and adapt to change is one of the most significant barriers to continued success. But change can be tricky and complex, even when it ultimately means delivering better value. Complexity comes not just from the technologies but also from the processes that must change. Change is not instantaneous, and it is not viable to abandon legacy models overnight. Organizations must manage the complexity of migrating between operational models without disrupting operations.
Companies that tame these complexities can use the resulting speed and agility to create competitive advantages. This blog series looks at how to accomplish that.
There is a catchphrase I have heard since the beginning of my career. It is used by executives and managers alike at companies of all sizes and departments. It has been used so much that it has become part of the business dogma.
“Doing more with less.”
I understand the emphasis on doing more. What confuses me is where this idea of "less" comes from. Have you seen an inventory of hardware and software assets recently?
Not only is the sheer quantity of hardware, software, and data growing, but the rapid pace of innovation and new technologies is adding complexity to the resulting architectures and operations. The duplication of technologies at almost every company results from siloed procurement decisions, fiefdom wars, or simply a failure to understand the extent of the capabilities that have already been purchased.
Companies have spent decades and billions of dollars chasing technology “quick fixes” for these growing complexities. But the quick fixes end up being just another piece of the ever-increasing complexity with no clear path of augmenting, migrating, or integrating them into the required processes. The onus falls back to engineering and operations to determine how to make it all work together.
This brings to mind another common phrase:
“Duct tape and bubble gum”
So the only things we find "less" of today are patience and funding. It is to these two metrics that we must turn as we look to tackle the complexity that results from the growing inventory of infrastructure and software assets.
It is fair to say that patience, or more accurately the lack of it, is one of the design drivers of the public cloud. Public cloud providers owe the success of their platforms not to providing the best technologies but to improving the provisioning and procurement process for those who had become, well, impatient.
Companies move to the cloud not because of what they get but because of how they get it.
Cloud has brought instant gratification to the consumers of IT services. Acquiring a new compute instance, more storage, or extending a network was reduced from weeks or months to moments. And you don’t have to navigate your organization’s complexity to get it done.
To accomplish this wizardry, cloud providers use an SLO/SLA-based operating model, often referred to as a Cloud Operating Model. It reduces the interactions between the providers and consumers of specific IT services. The Cloud Operating Model requires the highest degree of automation and orchestration, where the human element in day-to-day activities has been minimized, if not eliminated.
The infrastructure platforms are also abstracted away. No one calls up Amazon to ask which platform their EC2 instance is running on. It also changes the accounting process as you deal more with technology services than hard assets.
As public cloud providers gained market share, traditional data center technology companies looked to level the playing field by introducing the option of Private Clouds. The idea was to provide the same simplified provisioning process while maintaining ownership and control of the underlying hardware.
One key difference is that most private cloud providers do care about what is underneath. Their business models are based on selling or leasing the underlying hardware and software. And while these providers are moving to cloud-like pay-as-you-go models, they usually require clients to start the process with new hardware and software.
These approaches can have a material impact on the balance sheet when assets that are not fully depreciated must be replaced. It can also be annoying to have to rip-and-replace working environments.
The public cloud providers are addressing these issues by extending their services and orchestration layers to on-premises infrastructure. Azure Stack and AWS Outposts allow you to run the respective offerings on systems in your own data centers. But while you can reuse your hardware, you are now locked into their software.
This lock-in results not from deploying a specific technology but from the dependencies created by integrating that technology into the associated processes. Technology provides needed capabilities, and there are many different ways to obtain a given capability. The issue lies in how that capability is integrated into the process. Technology vendors usually provide integration for their own technologies, but that is typically the extent of it.
The good news is that almost all technologies expose a RESTful API. The bad news is that the client has to build the bridging layers between the various technologies. It becomes even more complex when companies choose a best-of-breed approach, which increases the number of bridges to build.
A solution to this dilemma is an orchestration layer designed to provide an interface to use those APIs to automate even the most complex workflows. Orchestration creates a way to codify the workflow and provide process-as-code. It abstracts reliance on specific vendors, freeing the enterprise to deploy the technologies based on cost, scale, or reliability, not because of lock-in.
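To make the process-as-code idea concrete, here is a minimal sketch of a workflow layer that codifies ordered steps, each wrapping a call to a different vendor's API. The step names, IDs, and endpoints are purely illustrative stand-ins (no real vendor API is used); in practice each step would issue the corresponding REST call.

```python
# A minimal "process-as-code" sketch: a provisioning workflow expressed as
# ordered steps, each step standing in for a (hypothetical) vendor REST call.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workflow:
    name: str
    steps: List[Callable[[Dict], Dict]] = field(default_factory=list)

    def step(self, fn: Callable[[Dict], Dict]) -> Callable[[Dict], Dict]:
        # Decorator that registers a step in execution order.
        self.steps.append(fn)
        return fn

    def run(self, context: Dict) -> Dict:
        # Thread a shared context dict through every step.
        for fn in self.steps:
            context = fn(context)
        return context

provision = Workflow("provision-vm")

@provision.step
def allocate_compute(ctx: Dict) -> Dict:
    # Stand-in for a compute vendor's "create instance" API call.
    ctx["instance_id"] = "vm-001"
    return ctx

@provision.step
def attach_storage(ctx: Dict) -> Dict:
    # Stand-in for a different vendor's storage API.
    ctx["volume_id"] = "vol-042"
    return ctx

@provision.step
def register_dns(ctx: Dict) -> Dict:
    # A third system, bridged through the same workflow abstraction.
    ctx["fqdn"] = f"{ctx['instance_id']}.svc.example"
    return ctx

result = provision.run({"size": "small"})
print(result["fqdn"])  # vm-001.svc.example
```

Because the workflow only depends on the step interface, swapping one vendor's API for another means rewriting a single step, not the whole process. That is the abstraction that loosens vendor lock-in.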
To learn more about our class-leading orchestration capabilities, visit Orchestral.ai. Discover how intelligent orchestration can reduce infrastructure complexity and accelerate digital transformation.
In Part II of this series, we take a look at complexity in the context of legacy systems in "Coming to Terms with the Past: Complexities and the Endurance of Legacy Systems."