
Data pooling has been debated in the banking industry as a solution for meeting certain modelling requirements in the Fundamental Review of the Trading Book (FRTB), and though this solution appears tempting, it does have flaws. By Charlie Browne, head of Market & Risk Data Solutions at GoldenSource

FRTB is due to come into force some time after January 2022, depending on the national regulator in question. It will bring with it reforms designed to address structural deficiencies in the market risk framework that Basel 2.5 left unresolved. As the industry prepares to meet this next wave of wide-ranging regulation, conversations have focused on the potential use of data pooling to alleviate the new requirements for non-modellable risk factors (NMRFs).

Under FRTB, banks will for the first time be required to prove that risk factors actually trade, by evidencing real prices. This is a mammoth undertaking, and the question of where banks will get this data is one of the most compelling arguments for data pooling. The idea is that banks, data vendors, exchanges and trade repositories would combine their data to demonstrate that a robust number of transactions has taken place. It is a convincing proposition: banks simply do not have enough of their own data. Add to this the fact that data is very expensive, and that most firms are keen to consolidate costs after several heavy years of regulatory demands, and the initial attraction is clear.
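To give a sense of what the eligibility test asks of the data, below is a minimal sketch in Python, assuming the real-price criteria in the January 2019 Basel text: at least 24 real price observations over the preceding 12 months, with no 90-day period containing fewer than four. The function name and the date-list representation are illustrative only; this is not any vendor's or pool's API, and the alternative 100-observation route and the full definition of a "real" price are omitted for brevity.

```python
from datetime import date, timedelta

def passes_rfet(observation_dates: list[date], as_of: date) -> bool:
    """Illustrative modellability check, assuming the January 2019
    Basel criteria: at least 24 real price observations over the
    preceding 12 months, with no 90-day period containing fewer
    than four. (The alternative 100-observation test is omitted.)"""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) < 24:
        return False
    # Slide a 90-day window across the 12-month period and require
    # at least four observations in every window.
    start = window_start
    while start + timedelta(days=90) <= as_of:
        end = start + timedelta(days=90)
        if sum(start <= d < end for d in obs) < 4:
            return False
        start += timedelta(days=1)
    return True

if __name__ == "__main__":
    # Hypothetical pooled observation history for one risk factor:
    # roughly fortnightly observations over the past year.
    today = date(2022, 1, 31)
    obs = [today - timedelta(days=i * 14) for i in range(26)]
    print(passes_rfet(obs, today))  # True: 26 observations, no long gaps
```

Run against a pooled observation history rather than a single bank's own trades, a check like this makes plain why the 24-observation floor is so hard for any one institution to clear alone.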

NMRF questions

At this stage, it is difficult to predict how this solution would play out. Would a single vendor become a one-stop shop, or would banks be reluctant to rely on a single source and instead spread the risk by enlisting multiple vendors?

Then there is the question of who will be responsible for determining whether a risk factor is modellable. Though we are still a couple of years away, early indications are that some banks may not be sufficiently confident to rely solely on a data pool for these determinations, and will instead look to use their own methodologies and processes.

And there are other notable drawbacks. Under the stringencies of FRTB, the regulator may require banks to show, several years down the line, where they got the pricing data used to determine that a risk factor was modellable. If this information came from a data pool, will the pool be able to provide the necessary auditability and approvals? In other words, if a data pool says that a risk factor is modellable, will it have the capacity, and accept the responsibility, to field tough questions from the regulator further down the line?

Although non-modellable risk factors and the Risk Factor Eligibility Test (RFET) are both important hurdles to overcome, banks should be wary of ploughing too much time into solving what is only one small part of a much wider-reaching set of rules.

The temptation to focus primarily on tackling the RFET likely lies in the fact that it is the only part of FRTB that is completely unprecedented. However, banks should avoid expending all of their resources on this single aspect, and should instead think about how addressing the whole of FRTB can benefit their overall data strategy.

FRTB will bring with it huge data challenges, for which many firms are simply not yet prepared. Banks need to make sure they are implementing a properly data-centric approach so that they are ready to meet potentially massive challenges around aspects such as time-series cleansing, instrument lineage and single identifiers, to name but a few. And it is not as simple as just having the data (from data pools or otherwise); firms must also ensure they have the systems in place to run and interpret all of the calculations.

A silver lining?

But there is good news for firms. Because the guidelines are so wide-ranging, if firms get their data strategy right for FRTB, they will automatically address the data requirements of many other regulations, such as BCBS 239, prudent valuation and CCAR. This is a massive opportunity for firms to evaluate their entire data infrastructure and ensure they are taking a holistic approach to regulation rather than addressing different directives in silos. The last few years of almost constant regulatory change have seen a "bolt-on" approach to compliance, with teams addressing different regulations with different solutions and different timelines.

As with any new regulation, the temptation with FRTB is for banks to focus largely on the aspects that are completely new and unknown. This is why the conversation around data pools as a solution to non-modellable risk factors has become so prominent. But firms that put too much time and resource into this single aspect could be missing a trick. In many ways, FRTB is a huge opportunity for compliance teams to take a step back, take stock and put together a comprehensive data strategy that protects them against multiple regulatory requirements and future-proofs them for years to come.

