The Growing IT Mess at Big Banks

There is a very useful and accessible article at the BBC (hat tip Richard Smith) on the information technology hairballs at major banks and the level of future trouble baked into them. The article was prompted by widespread problems this week with customers unable to access their NatWest accounts.
It’s hard to manage large, complex IT installations when they require frequent feature changes and upgrades due to customer and regulatory requirements, plus (on the trading side) product innovation. But the job is made vastly worse when IT is treated like a stepchild, which is the attitude at most financial firms. Perhaps readers can add to the list, but the only firm I’ve ever worked with that treated IT as a strategic priority was O’Connor & Associates, which in its heyday ran the biggest private Unix network in the world and spent half its budget on technology. Even then, it had the usual trading firm problems: everyone wanted their work done yesterday, and no one wanted to spend money on documenting what the developers did (which would have added 20% to development costs but lowered lifetime costs).
What is not sufficiently well recognized is that IT failures are a source of systemic risk. We see some recognition of that in the emphasis firms place on having backup facilities that are kept in ready-to-go condition. But firms below the TBTF level can also make costly, even catastrophic, mistakes.
First, systems tend to agglomerate, rather than having their data exported into newer, tidier, faster software:
“There’s been massive underinvestment in technology in banks – it seems to be the case that the whole damn thing is held together by sticking plaster,” he [Michael Lafferty, chairman of the research company Lafferty Group] says only half-jokingly.
“You hear stories of Cobol programmers being dug up and brought back from retirement after 20 years.”
The result of all this agglomeration is either that you lose a clear idea of how everything hangs together, or you have people working manually or with kludged programs across systems. The danger with overbuilding is that parts you had built around, and didn’t even know were there any more, can spring back to life in costly, nasty ways:
“Most IT applications carry around dead code – which lies dormant because none of the live modules are using it. When Knight Capital ran an update in its systems, some of the dead code was brought back to life, causing the system to spit out incorrect trades.”
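To make the mechanism concrete, here is a minimal, hypothetical Python sketch of how dead code gets resurrected. To be clear, this is not Knight’s actual code; the function names and the flag are invented. The point is that when a new release repurposes an old feature flag, the flag means different things to old and new binaries, so a partial deployment can silently route orders through a path everyone believed was dead:

```python
# Hypothetical sketch (invented names, not Knight's actual code) of how
# a repurposed feature flag revives dormant code during a partial deploy.

def normal_route(order):
    print(f"routed {order} normally")

def legacy_peg_orders(order):
    # The dormant module. In the real incident, the revived path kept
    # sending child orders without ever checking fills; stubbed here.
    print(f"dead code live again: spraying child orders for {order}")

def retail_liquidity_route(order):
    print(f"routed {order} via the new feature")

def handle_order_v1(order, flags):
    # Old binary, still running on a server that missed the deployment.
    if flags.get("special_mode"):  # old meaning: run the retired routine
        legacy_peg_orders(order)
    else:
        normal_route(order)

def handle_order_v2(order, flags):
    # New binary, deployed to every other server.
    if flags.get("special_mode"):  # same flag, new meaning: new feature
        retail_liquidity_route(order)
    else:
        normal_route(order)

# Ops turns the flag on for the new feature across the whole fleet...
flags = {"special_mode": True}
handle_order_v2("ORD-1", flags)  # behaves as intended on updated servers
handle_order_v1("ORD-2", flags)  # silently resurrects the dead path
```

The defenses are boring discipline: delete dead code instead of routing around it, never reuse configuration flags, and confirm every server is running the same version before flipping anything on. In Knight’s case, by most accounts, a single server that missed the update was enough to do the damage.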
Then you have the more widely recognized problem of acquisitions leading to integration failures:
“Because of the banking licences in the UK, when you bring two organisations together, the transition from two systems to one system can take up to 10 years,” he [Ralph Silva, the London-based vice president of Banking Strategy at HfS Research] explains.
“It takes a long time as it all has to be done by the book by the regulators’ rules.
“There’s one big bank in our country that has a total of 50 different mortgage systems as a result of history and mergers and acquisitions.
“That’s insanity. It should have maybe one for retail and one for wholesale. But 50 is ludicrous… it hugely raises the danger.”
Mortgages are an even bigger mess in the US, due to changes in product features and considerable variations in state law. But the mess is made much worse by poor IT management. From what I can tell, the only servicers that have decent platforms are relatively young “combat servicers,” and even they maintain that getting too large would make a hash of their operations.
Another factor that can mess up IT integration is a difference in cultures. For instance, believe it or not, Countrywide’s prize asset was its servicing platform, software it had developed internally. But Bank of America didn’t like or do custom software; it relied as much as possible on vendor-provided packages. It proceeded to upload its customer data and integrate stray systems into the Countrywide platform, then managed it like a BofA installation, which resulted in it losing the specialists who knew the systems even faster than it would have otherwise.
The BBC piece flags other shortcomings in relying on off-the-shelf programs:
Further complicating matters is the fact many banks have opted to buy in software from third parties, letting them slim down their own IT departments.
“When it’s outsourced you can’t make changes to the [bought] code,” says Ralph Silva…
“You have to make changes to your own code – and that increases the risks.”
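To illustrate Silva’s point: since the vendor’s code is a black box, every bank-specific rule, workaround, and regulatory bolt-on has to live in wrapper code the bank writes around it. Here is a minimal Python sketch of that pattern; the vendor class and its API are invented for illustration:

```python
# Hypothetical sketch of the wrapper layer around third-party software;
# the "vendor" class and its API are invented for illustration.

from decimal import Decimal

class VendorPaymentEngine:
    """Stand-in for third-party code the bank cannot modify."""
    def post_payment(self, account: str, amount: Decimal) -> str:
        return f"posted {amount} to {account}"

class BankPaymentService:
    """All bank-specific logic accumulates here, outside the vendor code."""

    def __init__(self, engine: VendorPaymentEngine):
        self.engine = engine

    def post_payment(self, account: str, amount: Decimal) -> str:
        if amount <= 0:  # local rule the vendor product lacks
            raise ValueError("amount must be positive")
        account = account.strip().upper()  # workaround for a vendor quirk
        receipt = self.engine.post_payment(account, amount)
        print(f"AUDIT: {receipt}")  # regulatory logging bolted on locally
        return receipt

service = BankPaymentService(VendorPaymentEngine())
service.post_payment(" gb29nwbk60161331926819 ", Decimal("125.00"))
```

Every vendor upgrade then has to be regression-tested against this home-grown layer, and the layer only grows over time, which is where the added risk accumulates.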
I’m curious to get the input of IT professionals, since even from my limited exposure to financial firm IT, I’ve heard numerous stories of massive projects that ran way over budget and were quietly taken out of their misery. My impression is that the track record of large projects is so terrible that I wonder whether any have ever been completed in a manner that could accurately be called a success.
But the other problems I’d add to the list are:
Firms going from decentralized to centralized to decentralized IT. We may be long past traders having any control over their lives, but at least in the 1990s, you’d have firms changing their views on how to manage IT, plus traders stealthily funding their own development of risk modeling tools when they couldn’t stand to wait for the IT officialdom to produce whatever it was they thought they needed. One of the notorious legacies of that era was that Salomon Brothers, until the late 1990s, was running its bond trading risk management on a monster Excel spreadsheet, because the traders had built it and never ceded control over it (well, of course, they finally did).
Maybe this fight between the producers and the service departments is a long-settled issue at the really big firms, but I wonder if it is still an issue at medium-sized players.
Personnel policies ill suited to the customized nature of critical bank software. Even if financial firms were willing to spend enough for developers to document their work well, it’s always best to have members of the team that built the code in the picture to help tweak it. But financial firms, like pretty much all of Corporate America, have long gone for shorter job tenures and, particularly in IT, contract workers. Even if a lot of IT “contractors” wind up working for a particular bank for, say, two or three years, that is far less than the expected life of a lot of code. The old career paths, where a seasoned employee could expect at least five, and ideally fifteen to twenty, years at the same employer, were much better suited to managing mission-critical yet fragile systems like software. The failure to give more job security to programmers working on critical transaction platforms seems remarkably short-sighted.
Reader comments and corrections very much appreciated!
