Here we venture to push the boundaries and argue that, given advances in technologies such as Natural Language Processing, for instance GPT-3 modelling approaches[i], it is becoming pre-eminently important to align, and even enhance, global regulatory capabilities to match those being deployed by global market participants and stakeholders. The alternative, applying contradictory locally based regulatory treatments to generic AI algorithms, is nonsensical from the start.
The recent release of IOSCOPD684 appears to support this global direction[ii]. Below are extracts from that release, each followed by a short commentary.
“Measure 1: Regulators should consider requiring firms to have designated senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML. This includes a documented internal governance framework, with clear lines of accountability. Senior Management should designate an appropriately senior individual (or groups of individuals), with the relevant skill set and knowledge to sign off on initial deployment and substantial updates of the technology.”
Senior management teams and boards of directors are required to account for this key-person risk and to acknowledge the need to up-skill in order to match the speed of technological advances across finance and banking.
“Measure 2: Regulators should require firms to adequately test and monitor the algorithms to validate the results of an AI and ML technique on a continuous basis. The testing should be conducted in an environment that is segregated from the live environment prior to deployment to ensure that AI and ML: (a) behave as expected in stressed and unstressed market conditions; and (b) operate in a way that complies with regulatory obligations.”
From a regulatory perspective, there is no such thing as a black-box AI solution: the senior management team is likely to be held responsible for inadequate results, misrepresentations and/or potential fraud arising from AI mechanisms.
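Measure 2's expectation of segregated pre-deployment testing can be illustrated with a minimal validation gate. This is a hypothetical sketch, not the IOSCO-prescribed method: the toy model, scenario data, tolerance and function names are all illustrative assumptions.

```python
# Hypothetical pre-deployment validation gate in the spirit of Measure 2:
# run the candidate model against unstressed and stressed scenarios in a
# segregated test environment, and approve only if both pass an agreed
# tolerance. The toy model, data and threshold are illustrative assumptions.
import statistics

def toy_signal(prices):
    """Stand-in for a proprietary ML signal: mean daily return."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.mean(returns)

def validate(model, baseline, stressed, tolerance):
    """Pass only if model output stays within tolerance in both regimes."""
    results = {
        "unstressed": abs(model(baseline)) <= tolerance,
        "stressed": abs(model(stressed)) <= tolerance,
    }
    return all(results.values()), results

baseline = [100, 101, 100.5, 101.2, 100.8]  # calm market scenario
stressed = [100, 90, 80, 70, 60]            # simulated market shock
approved, detail = validate(toy_signal, baseline, stressed, tolerance=0.05)
print(approved, detail)
```

Here the gate blocks deployment because the signal breaches its tolerance under the stressed scenario, which is exactly the evidence trail a senior sign-off under Measure 1 would rely on.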
“Measure 3: Regulators should require firms to have the adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present.”
It is not only the senior management team but also the compliance officers who must be fully versed in the intricacies of “black-box” AI solutions. More importantly, it is the global nature of the industry offering AI solutions as third-party providers that is likely to attract the attention of local regulatory bodies.
“Measure 4: Regulators should require firms to understand their reliance and manage their relationship with third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should also clearly determine rights and remedies for poor performance.”
On a global basis, it is critical to set up key performance indicators that allow for a multinational operating environment, one potentially crossing jurisdictions, cloud locations and the domiciliation of the algorithms themselves.
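The kind of SLA monitoring Measure 4 calls for can be sketched as a per-jurisdiction KPI check. The KPI names, thresholds and reported figures below are illustrative assumptions, not terms from any actual service level agreement.

```python
# Hypothetical SLA monitor in the spirit of Measure 4: compare a
# third-party provider's reported metrics against contractual KPIs,
# per jurisdiction. KPI names, thresholds and figures are assumptions.
SLA_KPIS = {
    "uptime_pct": ("min", 99.5),        # at least 99.5% availability
    "p95_latency_ms": ("max", 250.0),   # p95 response under 250 ms
    "model_refresh_days": ("max", 30),  # retraining at least monthly
}

def check_sla(reported):
    """Return the list of breached KPIs for one jurisdiction's report."""
    breaches = []
    for kpi, (direction, threshold) in SLA_KPIS.items():
        value = reported[kpi]
        ok = value >= threshold if direction == "min" else value <= threshold
        if not ok:
            breaches.append(kpi)
    return breaches

reports = {
    "EU": {"uptime_pct": 99.9, "p95_latency_ms": 180, "model_refresh_days": 21},
    "APAC": {"uptime_pct": 98.7, "p95_latency_ms": 320, "model_refresh_days": 21},
}
for jurisdiction, reported in reports.items():
    print(jurisdiction, check_sla(reported))
```

Keeping the KPI table explicit per jurisdiction is one way to handle the cross-border dimension: the same provider can be held to different contractual thresholds in different regulatory regimes.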
“Measure 5: Regulators should consider what level of disclosure of the use of AI and ML is required by firms, including: (a) Regulators should consider requiring firms to disclose meaningful information to customers and clients around their use of AI and ML that impact client outcomes. (b) Regulators should consider what type of information they may require from firms using AI and ML to ensure they can have appropriate oversight of those firms.”
Following Llewellyn (1999), conduct-of-business regulation is here operating at the “global” customer level via the provision of AI solutions. It is no longer only human resources, the employees, providing a product or service; it is an AI-driven algorithm making potentially consequential business decisions.
“Measure 6: Regulators should consider requiring firms to have appropriate controls in place to ensure that the data that the performance of the AI and ML is dependent on is of sufficient quality to prevent biases and sufficiently broad for a well-founded application of AI and ML.”
Lastly, one of the most challenging areas, the field of unbiased AI algorithms, is likely to fall within the domain of global regulation.
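Measure 6's data-quality expectation can be made concrete with a simple pre-training gate. This is a minimal sketch under assumed thresholds; the group labels, rows, minimum-share and rate-gap parameters are all hypothetical, and real bias testing would use far richer fairness metrics.

```python
# Hypothetical data-quality gate in the spirit of Measure 6: before
# training, check that each group in the data is sufficiently represented
# and that outcome rates do not diverge too far across groups.
# Group names, rows and thresholds are illustrative assumptions.
from collections import Counter

def representation_check(rows, min_share=0.10, max_rate_gap=0.20):
    """rows: list of (group, approved) pairs. Returns (passed, findings)."""
    counts = Counter(group for group, _ in rows)
    approvals = Counter(group for group, approved in rows if approved)
    total = len(rows)
    findings = []
    for group, n in counts.items():
        if n / total < min_share:
            findings.append(f"{group}: under-represented ({n}/{total})")
    rates = {g: approvals[g] / counts[g] for g in counts}
    if max(rates.values()) - min(rates.values()) > max_rate_gap:
        findings.append(f"approval-rate gap exceeds {max_rate_gap}: {rates}")
    return (not findings), findings

rows = ([("A", True)] * 8 + [("A", False)] * 2
        + [("B", True)] * 1 + [("B", False)] * 4)
passed, findings = representation_check(rows)
print(passed, findings)
```

In this toy dataset both groups are adequately represented, but the approval-rate gap between them trips the gate, which is the kind of finding a firm would need to investigate before deploying the model.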
ZYANZA stands ready to assist you in your AI journey!