SAS Institute has integrated its data analytics management platform with a software toolkit released by the Monetary Authority of Singapore (MAS) to help financial institutions ensure they are using artificial intelligence (AI) responsibly. The move aims to address a common challenge these businesses face in incorporating core principles governing the ethical use of AI.
MAS in February launched the open source toolkit to automate processes needed to assess a company's AI adoption against four principles: fairness, ethics, accountability, and transparency, or FEAT. Developed by the Veritas consortium, the toolkit supports plugins that enable integration with a financial institution's IT systems.
SAS said Wednesday the integration of its analytics management platform, Viya, with the Veritas toolkit would enable the assessment of FEAT principles in analytical models, tapping natural language generation and workflow capabilities in Viya. The SAS platform also works with the toolkit to enhance automation.
Singapore businesses often struggled to operationalise their adoption of AI based on the Veritas principles, said Manisha Khanna, SAS’ Asia-Pacific head of AI and analytics. The integration with the toolkit helped cut down on the complexity, for instance, by automatically identifying sensitive attributes to ensure compliance.
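The kind of check described above can be illustrated with a minimal sketch: flag columns in a model's feature set that match a known list of sensitive attributes, then compute a simple fairness metric across groups. The column names, the sensitive-attribute list, and the function names here are illustrative assumptions, not the actual Veritas toolkit or Viya API.

```python
import pandas as pd

# Illustrative list of attributes a compliance check might treat as sensitive.
SENSITIVE_ATTRIBUTES = {"gender", "age", "nationality", "marital_status"}

def flag_sensitive(features):
    """Return the features that match the sensitive-attribute list."""
    return sorted(set(features) & SENSITIVE_ATTRIBUTES)

def demographic_parity_diff(df, group_col, outcome_col):
    """Largest gap in positive-outcome rate between groups (0 = parity)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy loan-decision data, purely for demonstration.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "income":   [40, 55, 60, 45, 70, 52],
    "approved": [1, 0, 1, 1, 1, 0],
})

print(flag_sensitive(df.columns))                        # ['gender']
print(demographic_parity_diff(df, "gender", "approved"))
```

A real assessment would cover more metrics (equalised odds, proxy detection for indirectly sensitive features) and feed its results into the workflow and reporting layers the article describes.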
It also facilitated “explainable AI” decision flow, Khanna said, adding that results generated by AI models were automatically tagged with a text-based explanation of how these were determined.
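Tagging a model's output with a plain-text rationale, as described above, can be sketched by rendering the top per-feature contributions (for example from a linear model or a SHAP-style attribution) into a sentence. This is a hypothetical illustration; the function name and contribution values are assumptions, not the natural language generation capability in Viya.

```python
def explain_decision(decision, contributions, top_n=2):
    """Render the top contributing features as a one-sentence explanation.

    `contributions` maps feature name -> signed contribution to the decision.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [f"{name} ({weight:+.2f})" for name, weight in ranked[:top_n]]
    return f"Decision '{decision}' driven mainly by: " + ", ".join(parts)

# Hypothetical attribution values for a loan approval.
text = explain_decision(
    "approve",
    {"income": 0.42, "credit_history": 0.31, "tenure": -0.05},
)
print(text)
# Decision 'approve' driven mainly by: income (+0.42), credit_history (+0.31)
```

Attaching such a string to each model result gives auditors and customers a human-readable account of how the decision was reached.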
In addition, regulatory and audit reports could be automatically populated, further improving staff productivity, she said.
SAS’ Singapore managing director Lim Hsin Yin noted that adoption of responsible AI remained low due to issues of trust and balance: businesses needed to trust the technology, and to balance corporate profitability with social responsibility, in order to adopt AI ethically.
The collaboration with MAS aimed to address this, Lim said.
SAS’ chief data ethics practice officer Reggie Townsend said: “The need for regulatory frameworks to capitalise on the promise of AI while mitigating its risks is not only important, but urgent in these times. Both technology providers and regulators have a critical role to play to ensure AI development is centred around people to ensure a positive impact on our society.”
Townsend said AI and machine learning applications learn over time and could place societies in scenarios they could not anticipate. Legislation, he said, was therefore needed to manage this while the definition of “responsible innovation” with AI continued to evolve.
This was not unlike previous transitions, he noted, such as when automobiles were first introduced and plied the streets alongside horse carriages, in the absence of traffic lights.
He said SAS had established an oversight committee that operated internally to ensure the necessary controls were in place to help the software vendor mitigate its own risks and apply AI responsibly. Sometimes, this meant walking away from certain customer deployments, so its platform was not part of an ecosystem it did not want to be in, he added.
Asked about vendors such as Microsoft and IBM that banned the sale of facial recognition to law enforcement, Townsend said SAS had not drawn such “hard lines” because the context behind the use of AI was important. For instance, the adoption of facial recognition technology in facilitating passport authentication provided a positive outcome.
He added that the internal processes SAS established helped ensure implementations that breached its principles of responsible AI use would be flagged.
According to Damien Pang, MAS’ deputy chief fintech officer and executive director for data and technology architecture, the Veritas consortium has grown since launch to comprise more than 30 entities. This, he said, reflected strong interest from the industry to drive trust in the use of AI through the FEAT principles.
Asked about the take-up rate for the toolkit, Pang said MAS did not track the number of downloads since this was not necessarily an accurate indication of actual use.
He urged organisations to understand what was required to use AI responsibly. Adding that the FEAT principles would help build consumer trust, he said this was critical to drive the adoption of digital services and enable businesses to leverage data to better serve customers.