Integration can be interpreted in several ways, from one nation being assimilated into another to new processes being incorporated into a workflow. From a software perspective, however, we are typically talking about getting different systems talking to each other and sharing data correctly. These are often the sticking points of an implementation, and success depends on each party's understanding of the other's needs, on the technical skills involved, and on the tools available in the various solutions being integrated.
I am probably old enough to be considered one of the grey-haired, "get off my lawn" types, and I am in awe of the tools available to us these days for communication. They vastly simplify the process of integrating solutions, handling communications, security, description of content, confirmation of receipt and action of data, and so on. Ensuring the relevant data is included still forms a crucial part of the implementation work, and on occasion certain data may be unavailable simply because the other system never anticipated a need for it, but usually data can be consumed in real time or near real time via various APIs quite easily. In our industry the focus is most commonly on inventory, bookings, pricing and schedule information, and where these used to live in closed proprietary systems, significant information is now available in the public domain.
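To give a flavour of just how simple that consumption can be, here is a minimal Python sketch; the endpoint URL and field names are hypothetical, since every provider defines its own API:

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical flight-status endpoint and field names, for illustration only;
# real providers each define their own URLs, auth schemes and payloads.
API_URL = "https://api.example-flightdata.com/v1/flights/{flight_no}/status"

def fetch_flight_status(flight_no: str) -> dict:
    """Fetch near-real-time status for a flight from a (hypothetical) public API."""
    response = requests.get(API_URL.format(flight_no=flight_no), timeout=10)
    response.raise_for_status()  # surface HTTP errors rather than continuing silently
    return response.json()

if __name__ == "__main__":
    status = fetch_flight_status("XY123")
    print(status.get("gate"), status.get("estimated_departure"))
```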
Systems such as IATA teletype messaging have been around for years and are still in use, albeit on less physical platforms than the old ticker-tape teletype machines (yep, again, I am old enough to have used these), but they operated on what was essentially a closed network that you had to have access to in order to transfer information. Formats evolved over the years to include XML-based messaging, which had the huge benefit of actually describing the data, without having to refer to a manual every time to decode a message. Messaging could then be distributed more widely, to airports, other information providers, and even passengers directly. Just think: you can not only get your boarding gate or delay updates on your phone via the airline's app (still a somewhat closed network), but you can typically punch a flight number into Google and get longer-term schedule and ticket pricing information alongside live departure details such as aircraft registration, gate numbers and actual departure times, all on a public platform.
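To show what that self-describing benefit means in practice, here is a made-up XML flight message, with element names invented for illustration rather than taken from any real IATA schema, and the few lines of Python it takes to read it:

```python
import xml.etree.ElementTree as ET

# A made-up XML flight message; the tags describe the data themselves,
# so no reference manual is needed to decode it.
message = """
<FlightUpdate>
  <FlightNumber>XY123</FlightNumber>
  <Gate>B42</Gate>
  <EstimatedDeparture>2024-05-01T18:35:00Z</EstimatedDeparture>
</FlightUpdate>
"""

root = ET.fromstring(message)
print(root.findtext("Gate"))                # -> B42
print(root.findtext("EstimatedDeparture"))  # -> 2024-05-01T18:35:00Z
```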
Such data availability has not always been the case. I was reminded of this recently when reminiscing with a colleague over past implementations. Many years ago, screen scraping from terminal emulators was not an uncommon way to transfer information. Integration comprised two parts: screen-scrape data collection from the booking system, and macro-style sendkey (or similar) uploads to apply settings to inventory. Initially these required dedicated communications cards in PCs, although this later evolved to allow direct connections over the internet.
Both approaches, however, typically involved logging in to a reservation system host, sending commands to query data, reading the host responses, and writing these to a file for processing. The processing consisted of understanding the content of each line, either because there was some form of identifier, or because the fourth line of the response "always contained data type x". Occasionally a connection or a terminal went down for some reason beyond your control, but you could build in ways to retry after a period, or restart a process once communications had been re-established.
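In spirit, that positional parsing and retry logic looked something like the Python sketch below; the screen layout, the field positions and the send_command function are all invented for illustration, as every host's display differed:

```python
import time

MAX_RETRIES = 5
RETRY_DELAY_SECONDS = 60

def parse_availability_screen(lines: list[str]) -> dict:
    """Parse a captured terminal screen by position.

    The layout here is invented: in these systems you simply knew
    that, say, the fourth line always carried the availability data.
    """
    return {
        "flight": lines[0].split()[0],  # first token of the header line
        "availability": lines[3],       # "the fourth line always contained data type x"
    }

def fetch_screen_with_retry(send_command) -> dict:
    """Run a host query, retrying after a pause if the connection drops."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            lines = send_command("A20MAYJFKLHR")  # hypothetical availability command
            return parse_availability_screen(lines)
        except ConnectionError:
            time.sleep(RETRY_DELAY_SECONDS)  # wait for comms to be re-established
    raise RuntimeError("host unreachable after retries")
```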
But we had one client where these extracts failed regularly. And by regularly, I mean you could nearly set your watch by them: Tuesday evenings into Wednesday mornings, almost without fail, we got nothing. We troubleshot the connections, the PC hardware, the communications cards. We added more and more logging to the application to be able to debug the errors. The logging, well, it didn't log a thing. Nothing helped.
And then the aforementioned colleague was on a site visit with the client, staying late in the office to catch up on some work. As 8 PM or so came around, so did the cleaning crew. And as he sat there, he watched them cross the office, clearing trash cans, wiping down desks, and UNPLUGGING THE POWER to the extract machine in order to plug in a vacuum cleaner! Once they were done, they swapped the plugs back, but the machine stayed off until the next morning, when staff arrived and switched it on as a matter of course, without giving it a thought.
All the tech approaches in the world couldn't solve that little human error, which, as with many things in life, was solved with a piece of duct tape, this time over the plug and the switch.
Given the robustness of today's APIs, with their built-in error checking and alerting, data transfers are pretty easy, and the risk of data loss in an integration is minuscule. That makes for a much more reliable process, but it also takes away a lot of these interesting anecdotes! And whilst human error may still occur, in a world that is much more tech-savvy overall, it is also much less likely, at least in such an odd fashion.
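For contrast with the old world, here is a sketch of the kind of error handling a modern transfer gets almost for free; the send_alert hook is a placeholder for whatever monitoring a real deployment would use:

```python
import time
import requests  # pip install requests

def transfer_with_alerting(url: str, payload: dict, attempts: int = 3) -> None:
    """POST data with exponential backoff, alerting on final failure."""
    for attempt in range(attempts):
        try:
            response = requests.post(url, json=payload, timeout=10)
            response.raise_for_status()
            return  # delivered and acknowledged
        except requests.RequestException as exc:
            if attempt == attempts - 1:
                send_alert(f"transfer failed after {attempts} attempts: {exc}")
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s...

def send_alert(message: str) -> None:
    # Placeholder: a real system would page, email or post to a dashboard here.
    print(f"ALERT: {message}")
```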
I would be interested to hear if any of you have experienced similar “system failures”, especially in more recent years.