Algorithmic trading is on the rise within banks and other financial services firms – one study forecasts a compound annual growth rate of 22% through 2025. Algo trading is no longer a niche activity; it is being baked into firm-wide strategies to drive efficiencies, lower costs and gain competitive advantage. However, as the prominence of algo trading increases, the risk profiles of banks and other financial services firms will change, and in many cases increase.
Regulators around the world have already recognized this potential shift in firms’ risk profiles. In the UK and Europe, for example, there is now a heightened regulatory focus: under the second Markets in Financial Instruments Directive (MiFID II) – specifically RTS 6 – banks and other financial firms within scope are required to undertake a self-assessment and validation process for their algorithmic trading operations. A core component of this is assessing, validating and auditing the firm’s algorithmic trading controls.
Within the EU, further regulatory focus on algorithmic trading is likely. The growth and evolution of algorithmic trading has many parallels with the OTC derivatives market in the years before that market came under increased scrutiny and regulation, notably through the EU’s European Market Infrastructure Regulation (EMIR), which came into force in 2012.
Now, US regulators are also increasing their focus on algorithmic trading, taking a lead from the UK approach.
New risk environment
So, what are the issues that regulators are so worried about, and should firms be concerned too? Algorithmic trading can bring significant benefits to firms, but as this form of technology evolves, the risks develop and change too. Regulators want to be sure that firms are keeping on top of these potential challenges.
Perhaps regulators’ biggest concern is the quality of the algorithmic code itself – the classic example is Knight Capital, which lost $461 million in 2012 through a trading error that stemmed from a badly designed piece of code. That loss event destroyed the firm, and regulators worry that the impact of a piece of bad algo code could mushroom and cause systemic damage as well. Exacerbating these concerns, algorithms are now being written in shorter timescales than ever before to take advantage of market dynamics – regulators fear that this quick turnaround invites code errors too.
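To make the control point concrete, here is a minimal sketch, in Python, of the kind of pre-trade risk gate firms typically wrap around algorithmic order flow. The class, limits and order fields are illustrative assumptions only; they are not drawn from Knight Capital’s systems or from any regulatory text.

```python
# Illustrative sketch only: a minimal pre-trade risk gate with hard limits and a
# kill switch. All names, limits and the Order fields are hypothetical.
from dataclasses import dataclass


@dataclass
class Order:
    symbol: str
    quantity: int
    price: float


class PreTradeRiskGate:
    """Rejects orders that breach simple hard limits; supports a kill switch."""

    def __init__(self, max_order_qty: int, max_notional: float) -> None:
        self.max_order_qty = max_order_qty
        self.max_notional = max_notional
        self.trading_enabled = True  # kill-switch state

    def kill(self) -> None:
        """Disable all further order flow, whether triggered manually or automatically."""
        self.trading_enabled = False

    def check(self, order: Order) -> tuple[bool, str]:
        """Return (accepted, reason) for a single order."""
        if not self.trading_enabled:
            return False, "kill switch engaged"
        if order.quantity > self.max_order_qty:
            return False, f"quantity {order.quantity} exceeds limit {self.max_order_qty}"
        notional = order.quantity * order.price
        if notional > self.max_notional:
            return False, f"notional {notional:,.2f} exceeds limit {self.max_notional:,.2f}"
        return True, "ok"


if __name__ == "__main__":
    gate = PreTradeRiskGate(max_order_qty=10_000, max_notional=1_000_000.0)
    print(gate.check(Order("ABC", 500, 25.0)))     # accepted
    print(gate.check(Order("ABC", 50_000, 25.0)))  # rejected: quantity limit
    gate.kill()
    print(gate.check(Order("ABC", 500, 25.0)))     # rejected: kill switch engaged
```

Even a gate this simple illustrates the principle regulators care about: controls that sit outside the trading logic itself, so that a defect in a fast-moving algorithm cannot translate directly into unbounded orders.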
There are other emerging, related technology risks as well. Today, some algorithms are being created through artificial intelligence (AI) and machine learning technology – machines building machines. It’s not the machines that regulators fear though, but rather the humans who build them, and the possibility of human error being compounded through the automated nature of this new technological approach.
Heightening this worry is a talent shortage. There is more demand among financial services firms for people who can write algorithms and AI code than there is supply. Firms could wind up hiring less skilled or experienced people, who are potentially more likely to make the kinds of errors regulators want firms to avoid.
The talent shortage also makes it more likely that banks will hire people who lack the right moral outlook – a key risk in itself, and one that is greatly exacerbated in firms that already have a weak ethical culture. Overall, the culture firms put in place within their algorithm teams is vitally important for the proper management of the whole range of risks that regulators are focusing on.
Nor is this just a sell-side issue – the buy-side is increasingly using algorithms for investment signalling, as a way of automating aspects of the investment process and reducing reliance on human talent. Just as on the sell-side, this can create additional risk for the same sorts of reasons – which is why the algorithms behind robo-traders are currently under scrutiny in the US.
New risk solution
Banks and other financial services firms clearly need to recognize the risks evolving out of the rise of algorithmic trading. They must be able to identify existing and emerging risks effectively, and they must create, document and maintain the relevant internal controls around their algorithmic trading operations as those operations develop.
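As an illustration of what creating, documenting and maintaining controls can look like in practice, the sketch below models a single documented control with a handful of attributes and flags any controls overdue for periodic review. The record fields, identifiers and review intervals are hypothetical, not a prescribed format.

```python
# Illustrative sketch only: a hypothetical record format for a documented
# algo-trading control, plus a check for controls overdue for periodic review.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class ControlRecord:
    control_id: str              # internal identifier (hypothetical)
    risk_addressed: str          # the risk the control mitigates
    owner: str                   # accountable person or desk
    review_interval_days: int    # how often the control must be revalidated
    last_reviewed: date


def overdue_controls(inventory: list[ControlRecord], today: date) -> list[ControlRecord]:
    """Return the controls whose periodic review has lapsed."""
    return [
        control for control in inventory
        if today - control.last_reviewed > timedelta(days=control.review_interval_days)
    ]


if __name__ == "__main__":
    inventory = [
        ControlRecord("ALGO-001", "Erroneous order generation", "eTrading desk", 90, date(2020, 1, 15)),
        ControlRecord("ALGO-002", "Kill-switch failure", "Production support", 180, date(2020, 6, 1)),
    ]
    for control in overdue_controls(inventory, today=date(2020, 9, 30)):
        print(f"{control.control_id} ({control.risk_addressed}) is overdue for review by {control.owner}")
```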
We are helping to grow a community of banks that are implementing the right risk and controls frameworks around their algorithmic trading operations.
For example, our analysis of this community found that many banks could benefit from a shared understanding of both risks and controls. By sharing information about risks and controls within this benchmarking community, these firms have made their individual organizations stronger, and also contributed to the overall reduction in systemic risk within the financial services industry. No individual bank could achieve this on its own.
The power of this network defence model also applies to the risks associated with algorithmic trading and to the construction of a controls inventory – for example, making optimum use of control attributes across the community. Overall, by working together to share intelligence and insights, banks and other financial services firms are in a much better position to identify emerging risks and implement the right controls in a fast-moving area such as algorithmic trading.
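Here is a minimal sketch of the benchmarking idea, under the assumption that a community maintains a shared baseline of named controls: comparing a firm’s own inventory against that baseline surfaces gaps that no single bank would spot on its own. The control names below are illustrative.

```python
# Illustrative sketch only: gap analysis of one firm's control inventory against
# a hypothetical baseline of control names shared across a benchmarking community.

community_baseline = {
    "pre-trade price collars",
    "maximum order size limits",
    "kill switch",
    "code-change approval and testing",
    "post-trade surveillance alerts",
}

firm_controls = {
    "pre-trade price collars",
    "maximum order size limits",
    "kill switch",
}

# Controls the community treats as standard but this firm has not yet documented.
for gap in sorted(community_baseline - firm_controls):
    print(f"Missing against community baseline: {gap}")
```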
Acin is the leading risk and control data standards, benchmarking and controls data analytics company. For more information about Acin and the network of banks it supports, please visit www.acin.com.