AI in the Dutch financial sector – All SAFEST and sound?

Dutch Central Bank publishes guidance

30 August 2019

Artificial intelligence (AI) is commonly defined as the theory and development of computer systems able to perform tasks that have traditionally required human intelligence. Financial institutions have taken an increasing interest in this development, and AI applications in such institutions have become increasingly common. They are already used in financial products and services such as robo-advice (e.g. when taking out a mortgage), loan and insurance underwriting, and algorithmic trading. AI applications are found in both front- and back-office functions of financial institutions, such as chatbots (for customer queries), client onboarding tools, transaction data analysis, fraud detection, and risk and portfolio management.

Both at a European level and at a local level, the use of AI has caught the attention of regulators and supervisory authorities. In the Netherlands, the Dutch Central Bank (De Nederlandsche Bank, DNB) expects that the increased use of AI in the financial sector will have a significant impact in the near future. With the growing use of AI, its influence on (core) processes within financial institutions will also grow, and with that, operational, prudential and reputational risks may arise.

DNB starts a dialogue and coins SAFEST principles

To start a dialogue with the financial sector, DNB published a discussion paper 'General principles for the use of Artificial Intelligence in the financial sector' on 25 July 2019. This discussion paper formulates general principles for the use of AI applications by financial institutions. According to DNB, financial institutions using AI-driven applications should pay attention to six key principles, collectively known as 'SAFEST':

  • Soundness
  • Accountability
  • Fairness
  • Ethics
  • Skills
  • Transparency

DNB stated that the SAFEST principles should be seen in the context of the regulatory requirement of "controlled and sound business operations". The principles apply proportionately, taking into account the scale, complexity and materiality of the AI applications. How the SAFEST principles are applied will also depend on the role a specific AI application plays in the decision-making process of the financial institution (i.e. whether the AI application serves a descriptive, diagnostic, predictive, prescriptive or automation purpose).

DNB refers to a ‘heat map’ in its discussion paper in this respect.

The SAFEST principles should be interpreted in the following ways.


Soundness

DNB's discussion paper states that AI applications should be reliable and accurate, behave predictably, and operate within the boundaries of applicable rules and regulations, including non-financial regulations (e.g. the GDPR). These aspects become particularly important when various financial institutions apply identical (or relatively similar) AI-driven applications, because systemic risks might then arise.

Principles for financial institutions to consider:

  • Ensure general compliance with regulatory obligations of AI applications ("compliance-by-design").
  • Mitigate financial (and prudential) risks in the development and use of AI applications.
  • Pay special attention to the mitigation of model risk for material AI applications.
  • Safeguard and improve the quality of data used by AI applications.
  • Be in control of (the correct functioning of) procured and/or outsourced AI applications.

Accountability

The application of AI in processes, products and internal models of a financial institution may be complicated and could result in the dysfunction of AI applications or unintended outcomes that damage the financial institution itself, its clients and/or other relevant stakeholders. DNB emphasises that as AI applications become more material, financial institutions should demonstrate that they thoroughly understand their responsibility for their AI applications and that they have operationalised accountability for these applications. Model complexity or reliance on third parties can never be used as an argument for limiting the financial institution's accountability with respect to the use of AI.

Principles for financial institutions to consider:

  • Assign final accountability for AI applications and the management of associated risks clearly at board level.
  • Integrate accountability in the financial institution's risk management framework.
  • Operationalise accountability with regard to external stakeholders.

Fairness

It is vital for trust in, and the stability of, the financial sector that financial institutions use AI in a fair manner. For instance, for AI-driven decision-making models that facilitate loan offerings, financial institutions should define their concept of fairness and demonstrate how they ensure that their AI applications behave accordingly.

Principles for financial institutions to consider:

  • Define and operationalise the concept of fairness in AI applications.
  • Review AI applications for unintentional bias (on an ongoing basis).
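To make the idea of operationalising fairness concrete, the sketch below shows one simple, widely used check: the demographic parity ratio (sometimes called the "four-fifths rule"). This is an illustrative example only; the data, group labels and the 0.8 threshold are hypothetical assumptions, not part of DNB's guidance, and real bias reviews would draw on several metrics and domain judgement.

```python
# Illustrative sketch: a demographic parity check on loan decisions.
# All data and thresholds below are hypothetical.

def approval_rate(decisions):
    """Share of positive (approval) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(decisions_group_a, decisions_group_b):
    """Ratio of the lower group approval rate to the higher one (0..1)."""
    rate_a = approval_rate(decisions_group_a)
    rate_b = approval_rate(decisions_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan decisions (1 = approved, 0 = rejected) per group.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # approval rate 0.75
group_b = [1, 0, 1, 0, 0, 1, 0, 1]   # approval rate 0.50

ratio = demographic_parity_ratio(group_a, group_b)
print(f"demographic parity ratio: {ratio:.2f}")  # 0.50 / 0.75 -> 0.67

# A common (illustrative) rule of thumb: flag the model for review
# if the ratio falls below 0.8.
if ratio < 0.8:
    print("flag: disparity exceeds threshold, review model for bias")
```

Running such a check periodically against fresh decision data is one way to give effect to the "ongoing basis" element of the bullet above.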

Ethics

Ethics is closely linked to the principle of fairness and means that financial institutions must ensure that AI applications are in line with their ethical standards. Financial institutions need to ensure that their customers and other stakeholders are not mistreated or harmed – directly or indirectly – because of the company's deployment of AI, even if such applications operate within the boundaries of applicable laws and regulations.

Principles for financial institutions to consider:

  • Specify objectives, standards and requirements in an ethical code, to guide the implementation and application of AI.
  • Align the (outcome of) AI applications with the financial institution’s legal obligations, values and principles.

Skills

Wrongful use of AI can result in damage to, and a loss of trust in, the financial system. It is the financial institution's responsibility to ensure that its senior management, risk management and compliance functions have an adequate level of AI expertise. Both management and employees need to have a sufficient understanding of the strengths and limitations of AI-enabled systems.

Principles for financial institutions to consider:

  • Ensure that senior management has a suitable understanding of AI (in relation to their roles and responsibilities).
  • Train risk management and compliance personnel in AI.
  • Develop awareness and understanding of AI within the financial institution.

Transparency

Financial institutions should be transparent in their policies on how they use AI in their business processes. These policies must enable adequate risk management and internal audit. This also means effective supervision of AI applications to ensure stable operations on an ongoing basis.

Principles for financial institutions to consider:

  • Be transparent about your policy and decisions regarding the adoption and use of AI internally.
  • Advance the traceability and explainability of AI-driven decisions and model outcomes.

Final thoughts

DNB has indicated that, as part of its supervision, it will critically consider the potential impact of financial institutions' AI applications on the basis of the SAFEST principles. This means that financial institutions using AI applications – not only in their products and services, but also in internal (core) operations, processes and outsourcing arrangements – should be aware of DNB's SAFEST principles and how they will impact (existing) AI-driven products, services, processes and models.

The SAFEST principles are based on DNB's preliminary views on the responsible use of AI in the financial sector. DNB would like to open a dialogue in relation to its SAFEST principles.

Should you have any thoughts on the above or DNB's discussion paper, please get in touch and let us know.