The curse of backward compatibility and technology substitution
For years, the IT industry has pursued co-existence (integrating old with new) as a migration strategy in software and hardware refresh projects. The benefits of this approach are ease of migration, sweating existing assets and minimal disruption to the way solutions are managed and operated.
The tendency, however, is sometimes to carry out pure technology substitution: replacing an older version of software or hardware with a newer one while refraining from exploiting its latest functionality. Even worse, the desire is quite often to fit the new software and hardware around old ways of doing things.
Here are a few examples:
- The use of VoIP handsets, and even Unified Communications, simply to replace legacy analogue phones rather than investigating their potential for transforming existing business processes
- The continued reliance on legacy connectivity between data centres when better architectural solutions exist
- The continued reliance on storage replication when, in some instances, more cost-effective and elegant solutions exist at the application layer
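To make the last point concrete, here is a minimal sketch of application-layer replication. The class and the in-memory dicts standing in for two data-centre databases are hypothetical illustrations, not a production design: the point is simply that the application fans writes out to independent sites itself, rather than relying on a disk array to mirror blocks underneath it.

```python
# Sketch: application-layer replication as an alternative to storage
# (disk array) replication. The two dicts stand in for databases in
# two independent data centres.

class ReplicatedStore:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def put(self, key, value):
        # The application writes to both sites itself; a real system
        # would also handle partial failure (retries, reconciliation).
        self.primary[key] = value
        self.secondary[key] = value

    def get(self, key):
        # Read from the primary site, falling back to the secondary.
        if key in self.primary:
            return self.primary[key]
        return self.secondary.get(key)

site_a, site_b = {}, {}
store = ReplicatedStore(site_a, site_b)
store.put("order-42", {"status": "paid"})
```

Because replication now lives in the application, each site can run on ordinary, independent infrastructure; there is no need for matched storage arrays or a stretched SAN between data centres.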
The fact is that every technology reaches its performance limit at some stage and is naturally superseded. The dilemma, then, is when to cut your losses and build in parallel rather than integrate. The answer is not straightforward and will depend on a number of factors, including the amount of sunk investment in the older technology, the timing of the refresh, the complexity of the solution and how willing your organisation is to embrace the new rather than stick with the old and familiar. Another factor to consider is whether you can continue to provide competitive services on a mix of old and new, or whether it is necessary to switch completely to the new. Again, a couple of examples illustrate the point:
- Next time there is a data centre network refresh, do you buy the latest switches with Ethernet fabric capabilities and run them in compatibility mode so you can expand existing capacity, or do you start from scratch? The former approach means the overall network has to operate at the lowest common denominator, effectively forgoing the stability benefits of the latest products and technologies
- Do you architect your business applications to be cloud-ready, taking advantage of the latest technologies (message queues, stateless web and application servers, decoupling of components in line with SOA principles, global load balancing, content delivery for static data, etc.), or do you continue to build applications to fit legacy infrastructure capabilities such as disk array replication, layer 2 extensions between data centres, clustering and so on? The former approach, for example, can enable you to build highly resilient and scalable solutions that take advantage of the unprecedented amount of computing power available in the cloud for data analysis and mining
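The decoupling principle mentioned above can be sketched in a few lines. This is an illustrative toy, not a reference architecture: Python's queue.Queue stands in for a real message broker, and the order-handling function is hypothetical. The essential property is that the worker is stateless, so everything it needs travels in the message itself, and any number of identical workers could consume from the same queue.

```python
# Sketch: decoupling a producer from a stateless worker via a queue.
# queue.Queue stands in for a real broker; the pattern, not the
# library, is the point.
import queue

def handle(task):
    # Stateless processing: all required context is in the message,
    # so this worker can be replicated or replaced freely.
    return {"order_id": task["order_id"], "status": "processed"}

broker = queue.Queue()

# Producer (e.g. the web tier) enqueues work instead of calling the
# back end synchronously.
for order_id in (1, 2, 3):
    broker.put({"order_id": order_id})

# Consumer (a worker) drains the queue independently of the producer.
results = []
while not broker.empty():
    results.append(handle(broker.get()))
```

Because producer and consumer share nothing but the queue, either side can be scaled, restarted or moved to different infrastructure without the session affinity, clustering or layer 2 adjacency that legacy designs tend to assume.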
So next time you are contemplating a technology refresh, it is perhaps worth investigating whether there is an opportunity to do things differently: not just to optimise the cost and performance of your IT systems, but also to ask whether the new technology can help you gain competitive advantage by revamping or accelerating existing business processes. This is not always possible technically or commercially, but by trying you at least consider your options, even if the decision in the end is to opt for co-existence.