Financial Markets

Regulators have shown some leniency towards the industry over compliance with MiFID II’s reporting and data requirements. However, regulators’ patience is far from infinite, and the onus is firmly on the industry to ensure it is fully compliant soon. By Stuart Campbell, director at Protiviti.

Data has been a major challenge in preparing for the second Markets in Financial Instruments Directive (MiFID II) from the outset, even before the final texts were confirmed: bear in mind that the Level 1 text was finalised in 2014. Regulators and the industry alike have known that data quality would be a challenge, even with a year’s slippage in MiFID II implementation.

The consensus was that regulators wanted to get information flowing while accepting some data quality issues, and that the industry would work to improve quality over time. It was always unclear exactly what regulators would tolerate in the short to medium term. The onus is therefore on the industry to identify the root causes of its data quality issues and to have credible plans to address them.

In particular, delays in implementing the double volume cap (DVC) appear to relate to problems that some venues have experienced in submitting DVC reports, affecting a relatively small proportion of their instrument set. Some venues, however, are understood not to have submitted DVC data at all. Some believe that the design of the European Securities and Markets Authority’s (ESMA) systems is hampering its ability to produce accurate figures for affected instruments: there appear to be unhelpful interdependencies between the reference data submitted by venues and the DVC submissions of other venues. Because of these kinds of issues, it made sense for ESMA to delay publication from January 2018 to March 2018 (as per its press release dated 9 January 2018), despite the legal obligation to apply the DVC from January 2018.
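
For context, the DVC restricts trading under the reference price and negotiated trade waivers to 4% of total EU trading in an instrument on any single venue, and 8% across all EU venues, measured over the previous 12 months. The minimal sketch below uses entirely hypothetical volume figures to illustrate why the calculation depends on complete submissions from every venue: both ratios share a denominator built from all venues’ data, so one venue’s missing or inaccurate figures distort the result for the others.

```python
# Illustrative sketch of the DVC thresholds using hypothetical 12-month volumes (EUR)
# for a single instrument. Figures and venue names are invented for illustration only.
venues = {
    "Venue A": {"waiver_volume": 5_000_000, "total_volume": 60_000_000},
    "Venue B": {"waiver_volume": 2_000_000, "total_volume": 40_000_000},
}

# Total trading in the instrument across all EU venues (the shared denominator)
eu_total = sum(v["total_volume"] for v in venues.values())
eu_waiver = sum(v["waiver_volume"] for v in venues.values())

# 4% cap: waiver trading on any single venue as a share of total EU trading
for name, v in venues.items():
    venue_pct = 100 * v["waiver_volume"] / eu_total
    status = "breaches" if venue_pct > 4 else "within"
    print(f"{name}: {venue_pct:.2f}% under waivers ({status} the 4% venue cap)")

# 8% cap: waiver trading across all venues as a share of total EU trading
eu_pct = 100 * eu_waiver / eu_total
print(f"EU-wide: {eu_pct:.2f}% ({'breaches' if eu_pct > 8 else 'within'} the 8% EU cap)")
```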

Protiviti is working with clients that are investing in digital technologies such as robotic process automation (RPA) to improve the quality of their data. Tools such as RPA can be an effective and efficient way to detect and remediate data quality issues.

Root causes of data quality issues include:

  • Data reliability (Will we get the same data if we repeat the process?)
  • Data validity (Does the data really cover what we think it covers?)
  • Data integrity (Is the data free of manipulation, whether deliberate or unintentional? Who has access to the data, and how would we know if it had been changed and by whom?)
  • Data accuracy/precision (Are there problems with cut-off, and is the data complete?)
  • Data timeliness (Is data being received in time?)
  • Data security/confidentiality (How is the loss of data prevented? How is privacy ensured? How would a breach of data security/confidentiality be detected: after the event or in real time?)

The challenges of operating legacy systems often exacerbate these root causes.
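
As an illustration of how some of these dimensions might be checked programmatically, the sketch below applies simple completeness, validity and timeliness checks to a handful of hypothetical transaction report records. The field names, formats and thresholds are assumptions chosen for illustration only; they do not mirror any specific reporting schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical set of required fields for a transaction report record
REQUIRED_FIELDS = {"trade_id", "isin", "price", "quantity", "execution_time"}

def check_record(record, received_at):
    issues = []

    # Completeness: every required field must be present and populated
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        issues.append(f"missing fields: {', '.join(sorted(missing))}")

    # Validity: ISIN is 12 alphanumeric characters; price and quantity are positive
    isin = record.get("isin", "")
    if len(isin) != 12 or not isin.isalnum():
        issues.append(f"invalid ISIN format: {isin!r}")
    for field in ("price", "quantity"):
        value = record.get(field)
        if value is not None and value <= 0:
            issues.append(f"non-positive {field}: {value}")

    # Timeliness: report received within one day of execution (assumed threshold)
    exec_time = record.get("execution_time")
    if exec_time and received_at - exec_time > timedelta(days=1):
        issues.append("report received more than one day after execution")

    return issues

# Hypothetical records: one clean, one with several problems
now = datetime(2018, 3, 12, 10, 0, tzinfo=timezone.utc)
records = [
    {"trade_id": "T1", "isin": "GB00B03MLX29", "price": 24.15, "quantity": 500,
     "execution_time": now - timedelta(hours=2)},
    {"trade_id": "T2", "isin": "XX123", "price": -1.0, "quantity": 100,
     "execution_time": now - timedelta(days=3)},
]

for rec in records:
    problems = check_record(rec, received_at=now)
    print(f"{rec['trade_id']}: {'OK' if not problems else '; '.join(problems)}")
```

In practice, automated checks of this kind would sit alongside controls for the other dimensions listed above, such as access logging and change tracking for data integrity, and reconciliation against source systems for reliability.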

As well as investing in digital solutions such as RPA, many clients have established, or are in the process of establishing, a data governance model and framework to address these root causes.