CertAI in action
Different industries have different concerns when it comes to ensuring that their AI is trustworthy. CertAI works with you to align on the main business drivers in each of your areas to ensure Trustworthy AI solutions.
When organizations in the financial services sector implement AI solutions, fairness is one of the key risk areas to consider. An ethical AI solution is one that protects personal information and acts responsibly, with control levels appropriate to the area of use.
Within the banking industry, all steps of the value chain use AI functionality. This includes credit assessment and approval, fraud detection, and money-laundering detection. Both internal and customer-facing processes rely on AI capabilities to maximize the level of automation. For example, customer service bots might be impacted by upcoming regulation that will require organizations using them to identify them as non-human to customers.
Fairness is a mature topic in the banking industry today because, at the simplest level, AI is a mathematical model. Mathematical models are well understood in banking and have been a key component of this industry for many years. When the EU AI Act comes into force, companies using AI in high-risk use cases will face increased requirements. An external assessment will be a key element in supporting these new obligations in case of audits by regulators or industry authorities, and it can also serve as a competitive advantage with customers; validation of performance would go a long way in support of this. Today, external stakeholders and investors focus on company sustainability programs; five years from now, the focus will be on Trustworthy AI. Being prepared to meet these regulations and reviews with an external expert's validation will put organizations one step ahead of their competitors.
In a production environment, the priority shifts to robustness. Here the focus is on the AI not making mistakes and producing reliable results even when uncertainty is present. One example is object recognition: the AI needs to identify objects even if the view is partially obscured.
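The obscured-object example can be sketched as a minimal robustness probe: perturb an input by hiding part of it and verify that the model's decision does not flip. The toy "image" and threshold classifier below are assumptions for illustration, not a real production detector or a CertAI test.

```python
# Illustrative robustness probe (toy detector, hypothetical data):
# occlude part of an input and check that the decision is stable.

def classify(image):
    """Toy detector: flags an 'object' if enough pixels are lit."""
    lit = sum(sum(row) for row in image)
    return "object" if lit >= 4 else "background"

def occlude(image, rows):
    """Zero out the given rows to simulate a partially obscured view."""
    return [[0] * len(row) if r in rows else list(row)
            for r, row in enumerate(image)]

image = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]

# Robustness check: hide the top row; a robust detector should still
# recognise the object from the remaining evidence.
occluded = occlude(image, rows={0})
print(classify(image), "->", classify(occluded))  # → object -> object
```

Real robustness assessments run many such perturbations (noise, occlusion, lighting changes) and measure how often predictions remain stable; the principle is the same as in this sketch.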
Reliability and data security are also key topics in these use cases. If AI is offered as part of a service to others, data will be exchanged between the organization using the service and the one providing the Trustworthy AI solution. As a result, there is a chance that sensitive design data is shared, and this data must be protected at all times.
In the industrial context, the upcoming AI Act regulation is less important, as these applications are unlikely to be classified as high-risk. Quality matters more here, and a validation of performance would go a long way to help with internal stakeholder discussions and external quality perceptions. Transparency about the AI used, both internally and externally, will help to surface continuous areas of improvement to address. The situation would be different if the organization were producing safety-critical systems, such as machines that could harm a human or autonomous driving control components. These topics would be regulated and definitely considered high-risk. These are just two of the scenarios that might call for external Trustworthy AI assessment support. Many others will likely evolve in the coming years as the use of AI becomes more widespread across other industries.
Is my AI at risk?
Our catalog of Trustworthy AI dimensions allows us to assess your AI system on the process, technical, and implementation levels. We help you provide your users, employees, and customers a Trustworthy AI experience.