OECD offers policy recommendations for AI regulation in financial services

The Organisation for Economic Co-operation and Development (OECD) has published a new report offering policy advice on the regulation of AI in financial services.

According to Regulation Asia, the report offers recommendations to ensure the use of AI, machine learning and big data in finance is consistent with financial stability, market integrity, consumer protection and competition objectives.

A key recommendation in the report is the introduction of suitability requirements for AI-driven financial services, as well as add-on capital buffers for activities based on AI algorithms.

The report underlines that while AI can drive competitive advantages for financial companies, boost their efficiency and enhance services for consumers, AI applications in finance may also create or intensify financial and non-financial risks, and raise consumer and investor protection concerns around the fairness of data management, data usage and consumer outcomes.

Furthermore, emerging risks from the deployment of AI techniques must be identified and mitigated in order to support and promote the use of responsible AI, while existing regulatory and supervisory requirements may need to be clarified or adjusted where they conflict with AI-based applications.

The report underlined that policymakers should consider sharpening their focus on data governance by financial sector firms in order to strengthen consumer protection across a range of AI applications in finance, and to address risks connected to data privacy, data concentration, unintended bias and discrimination, and confidentiality.

In addition, the OECD suggested that policymakers should consider introducing specific requirements or best practices for data management in AI-based techniques. The report explained that such requirements could touch upon data quality, safeguards against potential biases and the adequacy of the datasets used.
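
As a purely illustrative sketch of what such data-management checks might look like in practice, the snippet below runs a basic data-quality screen and a crude group-level fairness metric; the dataset, column names and metric choice are hypothetical and not drawn from the OECD report.

```python
import pandas as pd

# Hypothetical loan-application dataset; column names are illustrative only.
df = pd.DataFrame({
    "applicant_group": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "income":          [54000, 61000, 47000, None, 72000, 50000, 58000, 45000],
    "approved":        [1, 1, 0, 0, 1, 1, 1, 0],
})

# Basic data-quality check: share of missing values per column.
print("Missing values per column:\n", df.isna().mean())

# Simple bias safeguard: approval rate per group and the ratio between groups
# (a rough "disparate impact" style indicator; real supervisory checks would be richer).
rates = df.groupby("applicant_group")["approved"].mean()
print("Approval rates:\n", rates)
print("Ratio (min/max):", rates.min() / rates.max())
```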

The report stated that disclosure requirements around the use of AI techniques in financial services could also be considered – including information about the AI system’s capabilities and limitations – so that financial consumers remain informed of the potential impact on customer outcomes. The introduction of suitability requirements was also suggested.

A key issue the report also brings to light is the perceived lack of transparency and explainability of many advanced AI-based models, which the OECD says ‘could amplify systemic risks related to pro-cyclicality, convergence, flash crashes and increased market volatility’. The OECD said the supervisory focus may need to shift from documentation of the development and prediction processes to a ‘more technical approach that encompasses adversarial model stress testing or outcome-based metrics’.
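
To illustrate the kind of outcome-based, adversarial-style testing the report alludes to, the sketch below perturbs the inputs of a simple credit-scoring model and measures how often its decisions flip; the model, synthetic data and noise levels are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic credit-scoring data (illustrative only, not from the OECD report).
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.predict(X)

# Adversarial-style stress test: add increasing input noise and track how often
# the model's decisions change (an outcome-based stability metric).
for scale in (0.05, 0.2, 0.5):
    X_shocked = X + rng.normal(scale=scale, size=X.shape)
    flipped = (model.predict(X_shocked) != baseline).mean()
    print(f"noise scale {scale}: {flipped:.1%} of decisions change")
```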

Policymakers have also been called on to consider requiring clear model governance frameworks and the attribution of accountability to humans in order to build trust in AI-driven systems.

The report also notes a need for greater assurance from financial companies about the robustness and resilience of AI models, which can be strengthened through testing under extreme market conditions.
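
A minimal sketch of such stress testing, assuming a toy portfolio-valuation function and invented shock scenarios (none of which come from the report), might look like this:

```python
# Stand-in for a pricing/risk model: portfolio value as a function of an
# equity index level and a volatility input (hypothetical and simplified).
def portfolio_value(index_level, volatility):
    return 1_000_000 * (index_level / 100.0) - 2_000_000 * max(volatility - 0.2, 0.0)

base = portfolio_value(index_level=100.0, volatility=0.20)

# Stress scenarios in the spirit of "extreme market conditions".
scenarios = {
    "equity -30%, vol 40%": dict(index_level=70.0, volatility=0.40),
    "equity -50%, vol 60%": dict(index_level=50.0, volatility=0.60),
}
for name, shock in scenarios.items():
    stressed = portfolio_value(**shock)
    print(f"{name}: P&L {stressed - base:,.0f}")
```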

The full OECD report can be accessed here.

Copyright © 2021 FinTech Global
