No more safe bets

In the olden days there was a saying in our industry along the lines of: “You don’t get fired for buying IBM”. It pointed to the then well-established compute services IBM had on offer, which were a good fit for the compute problems of that time. Since most software originated from one vendor, integration was fairly straightforward or even provided by default, albeit through proprietary standards. In today’s diverse compute landscape, such a “solid choice” no longer exists. We continuously see new standards emerging, tech stacks maturing and seemingly good bets being superseded rapidly by new waves of innovation. So given this, what should a CTO do?

Understand the bet you’re making

Generally speaking, a bet implies putting something of value at risk, like a monetary investment, in hopes of a payoff at a later point in time. But bets in the enterprise seem to follow a different definition. In many cases they involve reducing the risk to almost zero, because the “price of getting it wrong” may get you fired. Unfortunately, that also reduces the potential upside to about zero. From my perspective, I would not even call it a bet. The strange thing is that this way of reasoning is not a negative outcome for the CTOs. It shows a reliable and steady hand, moving the enterprise along. I deliberately picked the word “along” here, because it most certainly is not moving ahead or into a better state.

Living in the past

The result is that companies at that scale are more worried about protecting the past than thriving in the future. This seems to be a common pattern and shows a lack of awareness of how the field of computing is progressing.

Situational awareness

How can we improve this situation? Well, we know a few ways to better evaluate the world around us, especially by applying Wardley mapping ;-). But there is a bit more to it, so let’s get into that.

Why make a bet?

Obsolescence is the state a system or organization finds itself in when its usefulness has been reduced to zero, or when the cost of modernizing it outweighs the value it delivers. Avoiding this state implies that continuous adaptation and maintenance are required just to stay relevant and alive. Adaptation is the part that allows systems to evolve beyond their original composition, purpose and scope. On many occasions this involves a re-design, a re-platforming or a (partial) re-write of solutions to stay in a position where the system keeps delivering a net positive value. Adaptation is also where the associated risk and the adjoined bet-making come into play. Given today’s pace of new technology, how do you select the right way forward or the right target architecture? Most likely the answer is going to be “it depends” and contains a large body of uncertainty. Thus a bet is needed, and we can help to improve the odds.
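To make that obsolescence test concrete, here is a toy sketch; all numbers are invented for illustration. It compares keeping a system as-is against paying a one-off adaptation cost, over a fixed horizon.

```python
# Toy sketch of the obsolescence test described above (all numbers invented):
# a system is heading for obsolescence when the value it delivers no longer
# covers its run cost, or when modernizing costs more than it returns.
value_per_year = 150_000      # net value the system still delivers
run_cost_per_year = 120_000   # keeping the lights on
modernize_cost = 400_000      # one-off re-design / re-platform / re-write
modernized_value = 250_000    # value per year after adaptation
horizon_years = 5             # how far ahead we are willing to reason

keep = (value_per_year - run_cost_per_year) * horizon_years
adapt = (modernized_value - run_cost_per_year) * horizon_years - modernize_cost

print(keep, adapt)  # 150_000 vs 250_000: here the adaptation bet pays off
```

The real uncertainty sits in the estimates, not the arithmetic, which is exactly why this remains a bet.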

Understand the landscape

Key to improving the odds is to understand the landscape you operate in. To gain that understanding, first and foremost, one needs insight into the lifecycle of the systems in scope. Which systems are driving revenue, and which systems pose a risk through the fabric of their composition? Second to that, I’d say inter-system dependencies are of key importance to understand: how information flows through systems, the interfaces used to achieve that, and the time constraints they operate under. This will set constraints on how you can change or replace a system. The more decoupled, the better the odds of making a good bet.
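A minimal sketch of that dependency view, with hypothetical system names: record who depends on whom and through which interface, then count inbound dependencies as a crude coupling score.

```python
# Map inter-system dependencies as a simple graph (system names invented).
# Fewer dependents means better odds that a system can be changed or
# replaced without dragging others along.
from collections import defaultdict

# "A depends on B" edges: (consumer, provider, interface)
dependencies = [
    ("webshop",   "orders",    "REST API"),
    ("invoicing", "orders",    "nightly file drop"),
    ("orders",    "inventory", "message queue"),
    ("reporting", "orders",    "direct DB access"),
]

inbound = defaultdict(list)
for consumer, provider, interface in dependencies:
    inbound[provider].append((consumer, interface))

for system, dependents in sorted(inbound.items(), key=lambda kv: -len(kv[1])):
    print(f"{system}: {len(dependents)} dependent(s)")
    for consumer, interface in dependents:
        # implicit interfaces (files, direct DB access) deserve extra scrutiny
        print(f"  <- {consumer} via {interface}")
```

The implicit interfaces (file drops, direct database access) are the ones that tend to surprise you during a replacement.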

Blindly applying common principles

We see a lot of headlines promising to solve all compute problems, but the reality is a lot more sobering. However, there are some common principles that may improve the odds further. The problem with some of these is that they are based on oversimplification, so be careful and assess what is applicable in your domain. Having said this, let’s look at some suggestions.

There are NO silver bullets

A common pitfall I see in the organizations I work in is trying to generalize solutions, for example: “we need to change everything to micro-services/kubernetes”. Remember, generic solutions are optimized for no one! Of course k8s is powerful, but it also requires deep knowledge to operate before you stand a chance of being successful with it. More importantly, not every workload will be a good fit for that platform. And looking beyond the technology, it might also not be the best fit for the teams or departments delivering and operating these systems.

By capturing the assessment of the current situation as a unique perspective, we don’t easily oversimplify, because we seek to understand where we are. That helps to open our minds to the fact that not everything has to fit a certain outcome. Once the perspective has been formed, we can start to think in more general directions and find ways to leverage uniformity and potentially economies of scale. That may find its origin in adopting certain compute abstractions, common platforms and services. But that design comes after we have made an assessment of where we are now.

Drive API creation

Especially in enterprises we see a lot of, for lack of a better word, “ancient” integration patterns, where many integrations depend on moving files from A to B. This creates non-explicit contracts and dependencies between systems that cause great headaches during modernization. Having APIs in place makes system dependencies explicit and creates a landscape that is much easier to navigate. Something Jeff Bezos was aware of over 20 years ago when he published his API Mandate. Of course this approach also has its limits, and therefore it’s good to look at the downsides as well.
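As an illustration of what “explicit” buys you, here is a minimal sketch of a hypothetical orders service using FastAPI. The schema is the contract: consumers break loudly at the interface, instead of silently when a nightly file changes format.

```python
# A hypothetical "orders" service: the integration contract is explicit code,
# not an implicit agreement about the layout of a dropped file.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders", version="1.0")

class Order(BaseModel):
    # This schema is the published contract between producer and consumers.
    order_id: str
    customer_id: str
    total_cents: int

ORDERS: dict[str, Order] = {}  # in-memory stand-in for a real datastore

@app.get("/v1/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Versioning the path (/v1/) keeps the dependency explicit even when the contract has to evolve.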

API Limitations

One of the problematic areas of APIs seems to be around COTS applications. Aspects like authentication and authorization are often “custom”, which is pretty much a euphemism for “it’s your problem”. Other things we see are problems with idempotency, a mixture of RPC and RESTful styles, or other RESTful aspects being implemented seemingly at random. APIs are one of the aspects that is fairly easy to evaluate, and an API that does not meet these common principles should be a red flag, and thus mark a product to avoid.
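To make one of those principles tangible, here is a sketch of idempotent writes using an idempotency key; the names and in-memory store are invented for illustration. Replaying the same request, say after a client timeout and retry, returns the original result instead of performing the action twice.

```python
# Sketch of idempotency via a client-supplied key (invented names,
# in-memory dict standing in for a real store).
import uuid

_processed: dict[str, dict] = {}  # idempotency key -> stored result

def create_payment(amount_cents: int, idempotency_key: str) -> dict:
    """Replaying the same key returns the original result instead of
    charging twice; a new key creates a new payment."""
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    result = {"payment_id": str(uuid.uuid4()), "amount_cents": amount_cents}
    _processed[idempotency_key] = result
    return result

key = str(uuid.uuid4())
first = create_payment(1999, key)
retry = create_payment(1999, key)   # e.g. a client retry after a timeout
assert first == retry               # safe: no double charge
```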

From transformation programs to continuous adoption

Another aspect hampering a lot of large companies is that they tend to build large migration approaches covering all systems at once. This tying together of systems modernization in big batches causes a second-order effect, as explained by Don Reinertsen when defining the “cost of delay” and Flow: a loss incurred by being late in delivering potential value. I have not seen companies use this way of evaluating systems.
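A rough sketch of that cost of delay, with made-up numbers: compare one big-batch migration, where nothing ships until everything is done, against migrating system by system, sequenced by value per month divided by duration (Reinertsen’s CD3 heuristic).

```python
# Cost of delay of big-batch modernization (all numbers invented).
systems = [
    # (months to modernize, value unlocked per month once modernized)
    (3, 40_000),
    (2, 25_000),
    (4, 10_000),
]

horizon = 24  # months

# Big batch: nothing ships until every system is done.
batch_done = sum(m for m, _ in systems)
batch_value = sum(v * max(0, horizon - batch_done) for _, v in systems)

# Incremental: sequence by CD3 (value per month / duration), ship each one.
incremental_value, t = 0, 0
for months, value in sorted(systems, key=lambda s: s[1] / s[0], reverse=True):
    t += months
    incremental_value += value * max(0, horizon - t)

print(batch_value, incremental_value)  # the gap is the cost of delay
```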

Continuous adoption

So whenever there is an opportunity to deliver better value from a system, the shortest responsible path to do so should be followed: as soon as we see a way to improve value delivery, that opportunity should be taken. For the enterprise landscape, we should continuously modernize and adopt the latest technologies, ways of working, etc. that improve value delivery. This completely removes the need for transformation programs, acceleration programs or infrastructure overhaul programs, because each system has already been optimized individually to its best delivery state.

COTS packages

In Wardley mapping we also consider the notion of buy-versus-build-versus-outsource for a piece of technology. In enterprises, this bought software (COTS packages) builds up over time to a rather big catalog. That catalog has a lot of inertia and becomes quite a handful to manage. With regards to modernizing these solutions, we see time and time again that vendors of such packages will not provide support for modernization or updated licensing. This keeps these systems tied to the old ways of operating and is potentially one of the straws that breaks the camel’s back. A way of dealing with that is to re-assess the need for the system by evaluating the value it delivers and seeing if a competitor has a better-fitting offer. This change often comes at a great expense, for example having to do conversions of data, but do not underestimate the long-term financial upside: removing maintenance cost, creating opportunities for new ways to use the data, and providing better integration, security and governance all add up. Most likely our own bias, known as “the sunk cost fallacy”, is hampering us in these types of decisions.
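A back-of-the-envelope sketch, with invented figures, of that re-assessment. Note that whatever was already spent on the incumbent package deliberately plays no role in the comparison; letting it in is exactly the sunk cost fallacy.

```python
# Keep-vs-replace for a COTS package (all figures invented).
years = 5
already_spent = 2_000_000      # sunk cost: tempting, but irrelevant below
keep_run_cost = 300_000        # per year: licensing + maintenance
replace_one_off = 600_000      # data conversion, migration, retraining
replace_run_cost = 120_000     # per year on the better-fitting competitor
new_value = 80_000             # per year: better integration, data reuse

keep_total = keep_run_cost * years
replace_total = replace_one_off + (replace_run_cost - new_value) * years

print(keep_total, replace_total)  # 1_500_000 vs 800_000: despite the large
                                  # one-off expense, replacing wins here
```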

Incentive to change

Voting with your wallet has a nice side effect. Vendors of COTS packages who don’t provide modernization and integration options will see their business dwindle if you move away or avoid buying their services. This gives them an incentive to provide better products and better integrations. All in all, this results in better competition in the market and better rates when buying products, delivering “more bang for the buck”. This is where enterprises have real power, and it is one aspect they may not be using properly.

When to consider technology ready for adoption

So then, when to get on board? In times when a major software version was released once every two to three years, this used to be a standard question with a standard answer to follow suit: let’s wait two or three revisions before we consider adoption, so the technology can prove its value and mature, and the market can provide enough expertise for us to adopt safely. Again, the emphasis on safety!

Use market signals

Nowadays, with many software offerings being released at very early stages to prove value, and with hugely fast iterations, that has become a much harder decision. However, Wardley maps still give us a good way to evaluate the state of a technology by using weak signals to monitor evolution. That also ties into the notion of adoption or diffusion curves. Getting in on the right part of the curve is essential to making the right bet.
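The idea behind those curves can be sketched in a few lines: model cumulative adoption as a logistic curve and read off where Rogers’ segments sit on it. The parameters here are illustrative; in practice the signal comes from market data, not a formula.

```python
# Model cumulative adoption as a logistic curve and locate Rogers' segment
# boundaries on it (parameters illustrative).
import math

def adoption(t: float, midpoint: float = 5.0, rate: float = 1.0) -> float:
    """Cumulative share of the eventual market that has adopted by time t."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Rogers' cumulative boundaries: innovators end at 2.5%, early adopters
# at 16%, early majority at 50%, late majority at 84%.
for label, share in [("innovators", 0.025), ("early adopters", 0.16),
                     ("early majority", 0.50), ("late majority", 0.84)]:
    t = 5.0 - math.log(1 / share - 1)  # invert the logistic for rate=1
    print(f"{label:>15}: reached around t = {t:.1f}")
```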

Figure: Rogers’ diffusion of innovations theory.

What enterprises should aim for

As we’ve noticed before, risk reduction seems to be the highest item on the agenda for enterprises when considering building, buying or renting software or systems. But there are additional factors at play. One might say “stability and supporting continuity” is another point. To some of you this may sound mediocre, and it is definitely not pushing the boundaries. But you may then be surprised: keeping an enterprise rolling is quite a feat by itself, and business continuity can easily be compromised by cowboy-like behavior. This is also why most of the options enterprises consider are, as defined in Rogers’ diffusion of innovations theory, “late majority” technologies. By that stage, technology variability has been reduced, purpose and scope are better defined, and knowledge on how to operate and run at scale is more readily available.

Too early

These aspects are in stark contrast to technologies that are considered too early on the curve. Clear signs of this are publications that describe the wonder and predict how it’s going to change everything. Enterprises are not the place for this type of innovation; they often struggle with novelty. To remain profitable, they need to keep their focus on things like operational excellence. If innovation becomes part of the company, it is usually better to split it off into an independent startup that can move fast and break things without disrupting business as usual.

The “sweet spot”

When a technology is moving towards the “early majority”, it is about to hit the sweet spot for enterprises to start adoption. Since most enterprises move slowly (and are not continuously moving yet), this is a good time: by the time things are actually being adopted, more time has passed and the technology has matured even further, yet it still has not reached the “early majority” stage. This provides a good upside and a limited downside to technology adoption. A secondary effect is that the organization develops a continuously adaptive mindset, so change becomes normal and curiosity for the novel increases. That also creates a better culture and, through that, improves employee job satisfaction.

Too late

When a technology is adopted in the “late majority” stage, the large upside of adopting has already diminished. It is now simply the cost of doing business, as the most mature stage in Wardley maps, where we find commodity services, also suggests. When an enterprise is unable to adopt at an earlier stage, it’s usually a sign that the teams delivering systems are hard at work keeping things running and “don’t have time” to innovate. This is a clear sign of a death spiral about to set in on a piece of the enterprise. To overcome such a situation, only deliberate action, in the form of a dedicated effort directed from leadership, can change the destined outcome. When you find yourself continuously taking this action, you need to seriously reconsider your entire technology approach. To make matters worse, since all enterprises are software companies in this day and age, an inability to renew is a telltale sign of looming obsolescence.

Towards continuous innovation

What has been written here of course doesn’t represent each and every enterprise, but most of them have serious issues keeping up with the world around them. Sometimes this is driven by rapid growth and the need to integrate acquired companies, other times by large amounts of tech debt, low morale among employees, or attrition. Many factors play a part in the complex dynamics present in the environments we call enterprises. There is no single definitive answer on how to approach and resolve everything. However, a strategy that allows systems to evolve independently is an asset that will always provide value in the long term. Together with continuous innovation and adoption of new technologies, I’m confident that enterprises can thrive.