CertAI

The six dimensions of Trustworthy AI: an overview

The six dimensions of Trustworthy AI have been discussed in great academic detail. By now, companies in the AI industry are acutely aware of the factors Fairness, Autonomy and Control, Transparency, Privacy, Robustness, and Safety and Security.

Each dimension has its technical, operational, and legal challenges.

Although all six dimensions need to be assessed against the individual risk level, some dimensions may play a more significant role than others. Depending on the industry, the use case, and the requirements of regulators, clients, and their customers, AI companies need to balance their efforts across the six dimensions for both strategic and economic reasons. In this situation, business leaders need evidence-based criteria, data, and best-practice experience to validate and strengthen their decisions.

Better AI scaling decisions with CertAI’s Trustworthy AI Standard across six dimensions

With the Trustworthy AI Standard and its guiding principles, CertAI addresses this pressing need for clarity and transparency in crucial AI decisions. CertAI offers a reliable and concrete assessment: always business-oriented and implementable, with a strong focus on risk management and risk mitigation, and based on the innovative, scientific approach of our research partner Fraunhofer IAIS, Europe’s largest application-oriented research organisation.

CertAI’s offering to AI companies starts where others might stop: we individually assess the current situation and the specific risks of companies and their AI applications across all six dimensions. This way, companies can weigh individual decisions within each dimension and prioritise actions across all six by their value and the risks they pose to stakeholders.

The six dimensions of Trustworthy AI – how CertAI adds value

As the AI requirements of individual companies and industries differ, CertAI’s experienced subject-matter experts provide support in navigating the particularly tough spots. For each dimension, CertAI has developed guiding principles that allow a consistent assessment and comparison across all dimensions.

1. Fairness

In this dimension, AI companies usually struggle to assess discrimination in their applications because applied experience is missing, especially when evaluating and predicting people and their behaviour. Fairness is the most difficult dimension to assess because the guidelines remain blurred. CertAI works closely with specialists in HR and ethics to ensure we ask the right questions in our assessments around governance, unconscious biases, and transparency issues, so that companies treat people fairly when they apply for a job, a loan, or an insurance policy.

With a full and deep picture of the risks, companies can avoid costly penalties while gaining the transparency to take measured business risks.
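As an illustration only (not CertAI’s assessment methodology), one basic check that fairness reviews often start with is comparing approval rates across protected groups. The sketch below assumes a hypothetical pandas DataFrame with columns "approved" and "group":

```python
# Minimal sketch of one fairness check: demographic parity difference.
# Column names and data are hypothetical placeholders.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  decision_col: str = "approved",
                                  group_col: str = "group") -> float:
    """Largest gap in approval rates between any two groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical example data
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
})
print(f"Demographic parity difference: {demographic_parity_difference(df):.2f}")
```

A real assessment would look at several such metrics together with governance and process questions; no single number establishes fairness on its own.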

2. Transparency

The transparency dimension is tightly connected with the fairness dimension. Machine learning models arrive at their outputs through learned patterns which, in several scenarios, need to be made transparent. One example is the right of loan applicants to gain insight into the reasoning behind decisions: here, every decision of the AI algorithm must be traceable and understandable. CertAI‘s focus is on transparent communication to users that an AI is involved, interpretability for experts and users, and overall auditability.
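As a purely illustrative sketch (not CertAI’s audit procedure), the example below shows one common way to make a single loan decision interpretable: breaking a linear model’s score into per-feature contributions. The feature names and data are hypothetical:

```python
# Minimal sketch of per-feature contributions for one loan decision.
# Model, features, and labels are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "years_employed"]
X = np.array([[55.0, 0.40, 3.0],
              [82.0, 0.15, 8.0],
              [30.0, 0.65, 1.0],
              [61.0, 0.30, 5.0]])
y = np.array([1, 1, 0, 1])  # 1 = loan approved (hypothetical labels)

model = LogisticRegression().fit(X, y)

applicant = np.array([[48.0, 0.50, 2.0]])
contributions = model.coef_[0] * applicant[0]  # contribution of each feature to the linear score

for name, value in zip(features, contributions):
    print(f"{name}: {value:+.3f}")
print(f"Approval probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```

For complex, non-linear models, other explainability techniques would be needed, but the goal is the same: a decision an applicant and an auditor can follow.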

3. Robustness / Reliability

Reliability and robustness describe how dependably AI applications perform. While this is on the radar of all technical teams, CertAI provides assessments and validations of reliability during regular operation and in edge cases. Furthermore, CertAI offers mitigation strategies and helps to estimate the uncertainty of AI applications by checking their expected behaviour against different datasets. This way, companies gain more confidence to scale AI applications. The robustness dimension is the most important one for ensuring that your AI delivers maximum business impact.
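To illustrate the idea in code (a minimal sketch, not CertAI’s validation suite), the example below compares a model’s accuracy on a clean test set against a noise-perturbed copy and reports an average-confidence figure as a rough uncertainty proxy. The model, dataset, and noise level are hypothetical placeholders:

```python
# Minimal sketch of a robustness check: clean vs. perturbed test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

clean_acc = model.score(X_test, y_test)
X_noisy = X_test + np.random.default_rng(0).normal(0, 0.5, X_test.shape)  # simulated distribution shift
noisy_acc = model.score(X_noisy, y_test)
avg_confidence = model.predict_proba(X_noisy).max(axis=1).mean()  # rough uncertainty proxy

print(f"Accuracy (clean test set):    {clean_acc:.2f}")
print(f"Accuracy (perturbed inputs):  {noisy_acc:.2f}")
print(f"Avg. confidence on perturbed: {avg_confidence:.2f}")
```

A large gap between the clean and perturbed figures is an early warning that the application may not behave as expected outside regular operation.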

4. Autonomy and Control

The autonomy and control dimension is all about finding the fine balance in human-machine interaction. CertAI assesses the distribution of tasks between humans and the AI, as well as the risks arising when humans override the system or the AI makes incorrect decisions without human feedback. Our experts also focus on informing and empowering users and stakeholders, and on controlling system dynamics.
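One common human-in-the-loop pattern, sketched below for illustration only, routes low-confidence AI decisions to a human reviewer instead of executing them automatically. The threshold and cases are hypothetical:

```python
# Minimal sketch of confidence-based routing between AI and human review.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    label: str
    confidence: float  # model's estimated probability for its label

CONFIDENCE_THRESHOLD = 0.85  # hypothetical value, tuned per use case

def route(decision: Decision) -> str:
    """Return who acts on the decision: the AI automatically, or a human."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto"          # AI acts without human feedback
    return "human_review"      # a human confirms or overrides the AI

decisions = [
    Decision("case-001", "approve", 0.97),
    Decision("case-002", "reject", 0.62),
]
for d in decisions:
    print(d.case_id, route(d))
```

Where the threshold sits, and who may override it, is exactly the kind of design decision this dimension examines.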

5. Privacy

To satisfy data protection and security requirements under the GDPR, companies must implement appropriate technical and organisational measures. An example: occasionally, works councils challenge business decisions. Having a concise and detailed assessment of the risks and risk management measures at hand creates more transparency and can speed up these discussions.
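As one illustrative example of such a technical measure (a minimal sketch, not legal or compliance advice), the snippet below pseudonymises a direct identifier with a keyed hash before analysis. The key handling and data are hypothetical placeholders:

```python
# Minimal sketch of pseudonymisation via a keyed hash (HMAC-SHA256).
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-secrets-manager"  # hypothetical; never hard-code in production

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymise("jane.doe@example.com"))
```

Pseudonymisation is only one building block; a full privacy assessment also covers purpose limitation, retention, access control, and documentation.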

6. Safety and Security

Cybersecurity and hacking are high on the agenda of AI companies: corrupted datasets can lead to manipulated AI decisions with severe and costly consequences. CertAI draws on the vast experience of our cybersecurity insurance experts to assess IT security and operational cybersecurity risks.
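As a simple illustration of one safeguard against corrupted data (a minimal sketch, not CertAI’s assessment method), the snippet below verifies a training file’s SHA-256 checksum before use, so a tampered or damaged file is rejected. The expected hash and file path are hypothetical placeholders:

```python
# Minimal sketch of a dataset integrity check before training.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder: record the real hash at data sign-off

def file_sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: Path) -> None:
    actual = file_sha256(path)
    if actual != EXPECTED_SHA256:
        raise ValueError(f"Dataset {path} failed integrity check: {actual}")

# verify_dataset(Path("training_data.csv"))  # hypothetical file
```

Checks like this complement, rather than replace, broader controls such as access management, monitoring, and incident response.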

Scale AI applications with higher confidence

CertAI’s Trustworthy AI Standard and guiding principles not only enable clients to get a grip on individual risks across all six AI dimensions, but also make these risks manageable and actionable. This leads to better-informed decisions taken with higher confidence and raises the quality of discussions with internal and external stakeholders. It also empowers AI companies to take full advantage of their AI, to scale and speed up operations, and to gain a competitive advantage.

We look forward to discussing your individual situation.

Is my AI system at risk?

Ensure your AI is trustworthy by undergoing our recognised assessments.