Airline Operations

Legacy systems: Why do we still tolerate them in airline ops?

It is estimated that airline disruptions cause losses of $60 billion each year globally. According to the latest Air Travel Consumer Report published by the United States Office of Aviation Enforcement and Proceedings, over 20% of flights at US airports were delayed in the period covered (May 2018). A good 7% of these delays were attributed to late-arriving aircraft, while another 5% were due to factors within the airline's control.

But beyond the compelling statistics, real-world examples continue to demonstrate the importance of investing in a reliable operations management platform. The storyline is typically consistent: a perfectly good strategic business objective is undermined by a myopic technology strategy, built on obsolete legacy systems that lack the platform-level integration and data management capabilities needed for the truly connected ecosystem the new generation of airlines demands.

Why are legacy platforms so problematic?

What we call a legacy system today was once an innovative computer system used to solve key business problems of its time. Very often, though, such systems were designed to handle only the core piece of the business, and that was never seen as a restriction until more of the business workflow needed to be loaded onto the platform and some level of integration was required for efficiency. Over time, several of these systems became liabilities: the business depended on them heavily for daily operation, yet was restricted by their capability and capacity.

Not all legacy systems are obsolete, but in the context of airlines we can conclude that most of them are mere data processors rather than true enablers. The problem is complicated by the fact that some of these systems were originally designed on the assumption that whatever technology was available at the time was the gold standard for the future as well, and that the system would therefore never require further improvement. Each was a major investment and was therefore assumed to last a lifetime.

Nor was it expected that the system would ever need to communicate or exchange large amounts of data with the outside world. Typical legacy systems therefore include no provision for significant upgrades, nor sufficient standardization to integrate effectively with a platform that originated outside their ecosystem. At the same time, these systems are expensive to sustain because some of the underlying technology is so out of date that even the original manufacturers can no longer provide support. Systems built in-house often had poor or incomplete documentation, which made maintenance highly complex: small changes to the code base could have unforeseen effects and possibly break the entire system.

Quick fixes and the issue of sustainability

Replacing the system entirely represented not just a major new investment but also significant downtime for critical systems, which was simply unaffordable for many businesses. Since disposing of the old system wasn't an option, technologists began to work around the problem by externally interfacing functional systems with the underlying legacy system rather than integrating them in the true sense: the processed output of the legacy system would be picked up by non-integrated systems programmed to carry out their workflows using that information as input.

In other words, the archaic system has its life extended "artificially" by layering one additional module after another onto it to provide the functionality required by the modern age. The focus is on "functionality", but there is a serious lack of sustainability in the way the larger system is designed: it may crack at crucial points of integration, limiting the addition of newer systems and eventually necessitating a complete overhaul and redesign to accommodate future needs. And every addition made to the underlying system comes at a cost.
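This interfacing-rather-than-integrating pattern can be sketched in a few lines. The Python sketch below is a minimal, hypothetical illustration: the legacy system is represented only by the flat text dump it produces, and an adapter parses that output into structured records a non-integrated downstream module can consume. The record format and all names are invented for illustration, not taken from any real airline system.

```python
from dataclasses import dataclass

# Hypothetical flat-text dump produced by a legacy ops system:
# flight, date, origin, destination, status flag (D = delayed, O = on time)
LEGACY_DUMP = (
    "AB1234 20180521 DEL HOU D\n"
    "AB5678 20180521 HOU JFK O\n"
)

@dataclass
class FlightStatus:
    flight: str
    date: str
    origin: str
    dest: str
    delayed: bool

def parse_legacy_dump(dump: str) -> list[FlightStatus]:
    """Adapter layer: turn the legacy system's flat text output into
    structured records, without touching the legacy system itself."""
    records = []
    for line in dump.strip().splitlines():
        flight, date, origin, dest, flag = line.split()
        records.append(FlightStatus(flight, date, origin, dest, flag == "D"))
    return records

def delayed_flights(records: list[FlightStatus]) -> list[str]:
    """A downstream, non-integrated module that consumes the adapter's
    output rather than talking to the legacy system directly."""
    return [r.flight for r in records if r.delayed]
```

Each new capability bolted on this way adds another parser and another point of fragility: if the legacy dump format shifts even slightly, every layered module above it breaks.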

BBC Capital recently published an analysis of such decisions, comparing them to a gambler chasing losses in the hope of a big win that offsets them all. From a business management perspective, this means the airline's technology strategy may have been handicapped by the intention to avoid heavy capital costs and prevent revenue slowdowns, despite the promise of better efficiencies in the future.

Step zero: Cost control through the cloud

It is tempting to suggest that the airline should begin by throwing out its old system. However, swapping out the entire IT infrastructure and replacing it with a new one is a massive expenditure with a significant budgetary impact. Moreover, demand fluctuations make scalability an expensive affair for airlines handling such large volumes. It is not surprising that IT departments caught in this trap prefer to push the impending expenditure to another year's budget and hope for the best.

The solution therefore lies in a cloud-based platform for airline operations management – one that can scale up and down with the airline's requirements without placing great demands on infrastructure and processing power. SaaS implementations also give end users greater flexibility: the system can be accessed from a variety of devices rather than being tied to one massive system interface in a corner of the office. With minimal investment in technology infrastructure, airlines can take advantage of volume-based expenditure control instead of shelling out for a buffer of capacity that may never be used in the real world.
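The volume-based expenditure argument can be made concrete with some back-of-the-envelope arithmetic. All figures below are invented for illustration and bear no relation to actual IBS or cloud-provider pricing; the point is only that paying a higher unit rate for capacity actually used can still beat paying a lower unit rate for peak capacity provisioned year-round.

```python
# Hypothetical capacity demand per month, in arbitrary "units".
MONTHLY_DEMAND = [60, 55, 70, 80, 100, 90, 85, 65, 60, 75, 95, 70]
PEAK_UNITS = max(MONTHLY_DEMAND)  # on-premise must provision for the peak

COST_PER_UNIT_ONPREM = 10  # assumed fixed cost per provisioned unit/month
COST_PER_UNIT_CLOUD = 12   # assumed (higher) pay-per-use unit cost

# On-premise: pay for peak capacity every month, used or not.
onprem_cost = PEAK_UNITS * COST_PER_UNIT_ONPREM * len(MONTHLY_DEMAND)

# Cloud: pay only for the capacity consumed each month.
cloud_cost = sum(d * COST_PER_UNIT_CLOUD for d in MONTHLY_DEMAND)
```

With these assumed numbers the pay-per-use bill comes out lower even though each unit costs 20% more, because the fixed option pays for the peak-month buffer across all twelve months.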

Strategic approach to future proof ops

Getting the cost restrictions out of the way is only the beginning. Once the airline decides to get rid of its archaic legacy technology, it must deploy a new technology platform that makes it future-proof. This has two implications in the context of airline operations.

1. What new operational paradigms must a next generation airline ops management platform help the airline prepare for?

2. How does the platform enable the airline to effectively tap the wealth of domain experience and expertise lying locked in the human capital among its assets?

We'll delve into the answer to these questions and more in the second part of this blog series.

Daniel Stecher is Vice President of Airline Operations at IBS, and has more than 20 years of experience in the aviation and logistics industries. Prior to joining the IBS family, he was product manager and consultant for the schedule management, operations control and crew management products at Lufthansa Systems. Daniel is perpetually on the move, having racked up over a million miles of business travel in his career. He enjoys delicious home-cooked food, reading books and the odd round of golf in his spare time.

Click here for more information on IBS' portfolio of solutions for the airline industry and more 
