Trustworthy AI

AI that has been shown to be fair, transparent, explainable, robust, reliable, and of consistent quality.

Trustworthy AI refers to AI systems that demonstrably meet criteria for reliability, safety, fairness, robustness, explainability, and accountability. It is the overarching goal of AI assurance, the practice of collecting and validating evidence that a system conforms to these principles throughout its lifecycle.

Trustworthiness is a multi-dimensional attribute, evaluated through technical tests, audits, documentation, and compliance with governance frameworks. Governance initiatives such as the NIST AI Risk Management Framework (a voluntary framework) and the EU AI Act (binding regulation) both name trustworthiness as an explicit objective, with the AI Act imposing specific obligations on high-risk AI systems.
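As an illustration of the "technical tests" mentioned above, the sketch below computes demographic parity difference, one common fairness metric, for a set of binary predictions. The data, the policy threshold, and the function name are hypothetical assumptions for illustration; neither the NIST AI RMF nor the EU AI Act prescribes this specific metric, and real assurance suites combine many such checks across all the dimensions listed above.

```python
# Minimal sketch of one "technical test": demographic parity difference,
# a common fairness metric. All names, data, and the threshold below are
# illustrative assumptions, not requirements of any framework.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()  # positive rate, group 0
    rate_b = y_pred[group == 1].mean()  # positive rate, group 1
    return abs(rate_a - rate_b)

# Hypothetical binary predictions and protected-group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

THRESHOLD = 0.2  # assumed policy threshold, set by the governing framework
dpd = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {dpd:.2f}")
if dpd > THRESHOLD:
    # In an assurance process, a failed check would be logged as evidence
    # and escalated for review rather than silently ignored.
    print("Fairness check failed: record finding as assurance evidence")
```

In practice, a check like this would sit alongside robustness tests, documentation review, and audit trails, with results retained as part of the assurance evidence.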