
There has been plenty of media hype about regtech for much of the past decade, but has anyone thought to check whether those who implement it are better off for their investments? By Rupert D.E. Brown, CTO of Evidology Systems

Meaningful data on improved compliance or reduced fines is hard to come by, as regulations change all the time and the fines and sanctions imposed by courts and regulators do not follow any standardised scale.

There is also a much more insidious problem: adding new IT systems to already complex operating environments almost always makes things worse. These challenges have several dimensions, namely:

    •    Content – Having to provide the new systems with reference and transaction data not already exposed by existing systems.

    •    Platform – The new regtech system may well be implemented on a different hardware/software combination to existing customer systems, or these days it might run in an external “cloud” not already used by the purchaser.

    •    Connection – Moving the data to and from the new system may require different messaging or other middleware technologies (or adapters).

All of these factors take extra time and effort, both to implement and then to operate. Let’s now look at a couple of examples where this added complexity caused real-world problems.

    •    ESMA fines Regis-TR €186 000 for EMIR data breaches (europa.eu)

In this case, the problem was a classic case of ‘garbage in, garbage out’: incorrect and incomplete data was being sent to the Regis-TR repository without being properly checked by the Extract/Transform/Load subsystem. This meant that the reports being produced were completely useless to the regulator; the calculations that produced the reports were not themselves found to be in error.
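
To make the point concrete, below is a minimal sketch of the kind of pre-load check an Extract/Transform/Load layer can apply before records reach a trade repository. The field names and rules (uti, reporting_lei, a positive notional) are purely illustrative assumptions, not the actual EMIR or Regis-TR reporting schema.

```python
# Minimal sketch of pre-load validation in an ETL pipeline.
# Field names and rules are illustrative only -- not the real EMIR/Regis-TR
# schema. The idea: quarantine incomplete or incorrect records before they
# reach the repository and poison the downstream reports.

from dataclasses import dataclass, field

REQUIRED_FIELDS = ("uti", "reporting_lei", "counterparty_lei", "notional", "currency")

@dataclass
class ValidationResult:
    record: dict
    errors: list = field(default_factory=list)

def validate_record(record: dict) -> ValidationResult:
    result = ValidationResult(record=record)
    # Completeness: every mandatory field must be present and non-empty.
    for name in REQUIRED_FIELDS:
        if record.get(name) in (None, ""):
            result.errors.append(f"missing field: {name}")
    # Basic correctness checks on whatever is present.
    lei = record.get("reporting_lei") or ""
    if lei and len(lei) != 20:
        result.errors.append("reporting_lei is not a 20-character LEI")
    notional = record.get("notional")
    if isinstance(notional, (int, float)) and notional <= 0:
        result.errors.append("notional must be positive")
    return result

def split_batch(records):
    """Separate loadable records from a quarantine queue needing follow-up."""
    clean, quarantine = [], []
    for record in records:
        result = validate_record(record)
        (quarantine if result.errors else clean).append(result)
    return clean, quarantine
```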

    •    UK Competition and Markets Authority (CMA) Letter to Barclays about its breaches of the Open Banking API

In this case, Barclays was mis-stating the number of ATMs and branches available to customers whenever its Open Banking API implementation was queried by third-party systems.

This incident is in many ways far more serious because it calls into question whether:

    •    Barclays actually knows systematically how many ATMs it has at any point in time and what state they are in – there must be a monitoring system somewhere!

    •    The regulator at the CMA actually took any professional advice on how an ATM management system works and, therefore, what the best-practice solution should be.

The outcome of this case is that, according to the public record, the API call which resulted in the regulatory “breach” is now subject to some form of undisclosed “manual control” (is it perhaps a spreadsheet?). This is utterly ludicrous for something that operates and changes state in real time across the UK.

Clearly regulators need to understand the impact and risks of complexity far better to be effective in their jobs and to ensure firms act both competently and efficiently.

Complexity creates another challenge for regulators and regulated entities: the problem of retaining institutional knowledge and avoiding data entropy.

The recent Danske Bank GDPR incident is a classic example of this. The bank had forgotten where all the instances of customer data resided across its systems estate, and as a result it had neglected to delete some data that should have been removed promptly as customer circumstances changed. NB: there was no leakage of customer data, nor, it seems, was any incorrect data used in operational processing.

Both regulated industries and the IT tool providers that sell into them are still playing catch-up to try to mitigate the complexity paradox. The number of data lineage tool vendors now jostling for position in this space has grown rapidly over the past year, and we can expect to see a shakeout of the smaller niche players soon.

Data lineage, however, is only part of the problem. None of the tools in this space accurately looks at the number of system touchpoints or the underlying network protocols used to transport the data. Most of them rely on a simplistic discovery process, inspecting the schemas and content of database systems rather than accurately mapping real physical flows (and batch cycles).
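
By way of illustration, here is a sketch of the extra information a flow-aware lineage record might need to carry per hop. All of the names below (the FlowEdge class, its fields, the systems and schedules) are invented assumptions; they do not describe any real lineage tool’s data model.

```python
# Sketch of what a flow-aware lineage edge might record, beyond what
# schema-level discovery captures. All names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FlowEdge:
    source_system: str       # physical touchpoint the data leaves
    target_system: str       # physical touchpoint the data arrives at
    dataset: str             # logical content (all that schema discovery sees)
    transport: str           # network protocol/middleware, e.g. "SFTP", "MQ", "HTTPS"
    schedule: Optional[str]  # batch cycle ("daily 01:30") or None for real time
    adapter: Optional[str]   # translating adapter in the path, if any

# One logical dataset can traverse several physical hops, each a separate
# operational risk that a schema-only view collapses into a single arrow.
trade_reporting_path = [
    FlowEdge("booking-system", "etl-staging", "emir_trades", "SFTP", "daily 01:30", None),
    FlowEdge("etl-staging", "trade-repository-gateway", "emir_trades", "HTTPS", "daily 03:00", "xml-adapter"),
]
```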

What is really needed next is to start treating regulations and their interpretation in the same way as large institutions handle reference data (currencies, geographic entities, etc.): as a continually changing corpus of content that has a direct bearing on their business outcomes.

To do this, there needs to be a direct binding of the requirements of a regulation to the artefacts used day to day by a company. The definition of each specific binding is probably best treated in the same way as a piece of source and configuration code, and managed on the same continuous delivery platforms.
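
As a minimal sketch of what that might look like in practice, the fragment below expresses bindings as plain, version-controlled code. The clause references, artefact paths and owners are hypothetical examples, not a prescribed schema.

```python
# Minimal sketch of regulation-to-artefact bindings expressed as code, so they
# can live in version control and flow through the same continuous delivery
# pipeline as any other change. All clause IDs, paths and owners are
# hypothetical illustrations.

from dataclasses import dataclass

@dataclass(frozen=True)
class Binding:
    regulation: str         # the regulatory corpus being interpreted
    clause: str             # the specific requirement within it
    artefact: str           # the day-to-day artefact that evidences compliance
    owner: str              # accountable team or role
    review_cycle_days: int  # how often the binding itself must be re-verified

BINDINGS = [
    Binding("EMIR", "Art. 9 trade reporting", "etl/validate_trade_report.py", "trade-reporting", 90),
    Binding("Open Banking", "ATM/branch data accuracy", "reference-data/atm_estate_feed", "channels-ops", 30),
]
```

Captured this way, a change in interpretation can be handled as an ordinary pull request, with the review history doubling as part of the audit trail.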

It is also important to note that just binding to the shiny new “Regulation XYZ Platform” is not sufficient. All of the planning, testing and operational change management artefacts also need to be evidenced, as HSBC found out last year with its £96m money-laundering fine from the FCA due to “inappropriate testing and poor risk assessment of new scenarios”.

NB: these bindings are not just static ‘signposts’ to documentation and report artefacts. They are often injections of a change of state in a workflow and may also be direct invocations of executable code, hence the need to manage them diligently. They also sit at the boundary between operational controls and risks which enterprises need to identify and keep under constant scrutiny.
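
To illustrate the difference between a static signpost and an executable binding, here is a short sketch in which evaluating a binding runs a control check and moves a compliance workflow to a new state. The workflow states, the ATM-count check and the wiring are hypothetical; they simply echo the Barclays example above.

```python
# Sketch of an executable binding: evaluating it invokes code and changes the
# state of a compliance workflow, rather than merely pointing at a document.
# The states, the check and the wiring are hypothetical illustrations.

from enum import Enum, auto

class WorkflowState(Enum):
    PENDING_EVIDENCE = auto()
    EVIDENCED = auto()
    BLOCKED = auto()

def atm_estate_counts_agree(api_count: int, monitoring_count: int) -> bool:
    """Example control: the figure published via the public API must match the
    figure reported by the internal estate-monitoring system."""
    return api_count == monitoring_count

def evaluate_binding(api_count: int, monitoring_count: int) -> WorkflowState:
    # Because the binding executes and moves the workflow, it has to be
    # change-managed as diligently as any other piece of production code.
    if atm_estate_counts_agree(api_count, monitoring_count):
        return WorkflowState.EVIDENCED
    return WorkflowState.BLOCKED
```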

Understanding and mitigating the regtech paradox is going to be a challenge for all business sectors for the foreseeable future. Sadly, we still seem to be reacting to each regulation as it arrives, building the same old siloed solutions one at a time and, in doing so, suffering from the same inevitable and embarrassing operational failures.