Trustworthy AI Assessment - Building trust and transparency in data spaces

In the rapidly evolving landscape of artificial intelligence, establishing trust and transparency in data spaces has become a key concern for both providers and users. The Trustworthy AI Assessment Initiative, launched by LORIA and Affectlog, addresses this concern through two operational platforms.

What is trustworthy AI assessment?

Trustworthy AI assessment is a comprehensive approach that enables providers in the data space to establish and maintain trust in their profiling algorithms, and users, in turn, to trust these providers. The initiative is implemented via two platforms: the Audit Platform for Data and Algorithms, developed by LORIA, and the Security Evaluation Platform by Affectlog. Each platform offers a distinct but complementary method for auditing and assessing the trustworthiness and ethical soundness of AI applications and algorithms. Both are being developed within a set timeframe of 12 months, starting in the first quarter of 2024.

The main benefit of these platforms is that they enable transparent, safe and ethical assessment of AI algorithms for data space stakeholders such as educational institutions, healthcare providers and technology companies.

  • Building trust: By providing a transparent assessment of AI algorithms, these platforms help to build trust between AI application providers and their users.

  • Security and ethical assurance: Through comprehensive audits and security evaluations, the platforms ensure that AI applications adhere to high standards of data privacy, security, and ethical use.

  • Increased transparency: By clearly communicating algorithm functions and test results, these platforms increase transparency and make it easier for non-experts to understand the functioning and effects of AI applications.

Special features of the trustworthy AI assessment platforms

Audit Platform for Data and Algorithms (LOLA by LORIA)

The Audit Platform for Data and Algorithms (LOLA), developed by LORIA, is characterised by its focus on security, control, transparency and extensibility. It ensures that data shared within the platform remains secure and inaccessible from the outside, gives data providers control over access rights, and produces transparent audit reports on algorithm performance. In addition, LOLA’s architecture is designed to be easily extended with new use cases, making it a versatile tool for a wide range of applications.
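
To make this concrete, here is a minimal sketch of how a platform can mediate algorithm access to data and keep a transparent audit trail. All class and method names are hypothetical illustrations, not LOLA’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    provider: str
    # Algorithms the data provider has explicitly granted access to.
    allowed_algorithms: set = field(default_factory=set)

@dataclass
class AuditEntry:
    algorithm: str
    dataset: str
    granted: bool

class AuditPlatform:
    """Mediates algorithm access to data and records a transparent audit trail."""

    def __init__(self):
        self.audit_log = []

    def request_run(self, algorithm: str, dataset: Dataset) -> bool:
        granted = algorithm in dataset.allowed_algorithms
        # Every access attempt is logged, whether granted or denied.
        self.audit_log.append(AuditEntry(algorithm, dataset.name, granted))
        # If granted, the algorithm would run inside the platform, so the
        # raw data never leaves it; only results and reports are exported.
        return granted

    def audit_report(self) -> str:
        return "\n".join(
            f"{e.algorithm} -> {e.dataset}: {'granted' if e.granted else 'denied'}"
            for e in self.audit_log
        )

platform = AuditPlatform()
grades = Dataset("student_grades", "SchoolX", allowed_algorithms={"dropout_model"})
platform.request_run("dropout_model", grades)   # granted
platform.request_run("ad_profiler", grades)     # denied, but still logged
print(platform.audit_report())
```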

Trustworthy AI Assessment Platform (Affectlog)

Affectlog’s platform leverages AI to extend existing trustworthy AI benchmarks, providing granular risk assessment at scale, tailored to AI models throughout the application development lifecycle.

The trustworthy AI assessment metrics align with industry standards and provide a clear snapshot of an application’s risk posture. The platform automates the assessment process with machine learning algorithms that identify vulnerabilities, an innovative approach to trustworthy AI assessment. These algorithms leverage both labeled and unlabeled data, a semi-supervised strategy that improves the accuracy and effectiveness of the generated machine learning models.
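
The platform’s actual models are not public, but the labeled-plus-unlabeled idea can be illustrated generically with scikit-learn’s SelfTrainingClassifier; the data below is purely synthetic, standing in for application features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for application features (e.g., extracted security
# and privacy indicators); label 1 = "vulnerable", 0 = "ok".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Pretend only the first 20 assessments were labeled by experts;
# scikit-learn's convention marks unlabeled samples with -1.
y_semi = y.copy()
y_semi[20:] = -1

# Self-training wraps a base classifier and iteratively pseudo-labels the
# unlabeled samples it is most confident about.
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_semi)

print("samples labeled after self-training:", (model.transduction_ != -1).sum())
print("predicted risk for a new application:", model.predict(X[:1])[0])
```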

Design-Time Analyses of System Models for Supporting AI Trustworthiness and Legal Compliance

The solutions of LORIA and Affectlog are complemented by CARiSMA (CompliAnce, Risk, and Security ModelAnalyzer), developed by the University of Koblenz and Fraunhofer ISST. CARiSMA is a comprehensive open-source software suite that enables system designers and security experts to perform automated compliance, risk and security analyses of software and system models, allowing them to consider security requirements early in the development process. UML (Unified Modeling Language) models are annotated with security-specific requirements, which can be tailored to the users’ needs and thus cover a wide range of topics. Checks then analyze the annotated UML models against these requirements and give the user detailed feedback on the model’s compliance with them.
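
CARiSMA itself operates on real UML models within Eclipse; the following is only a conceptual sketch, with hypothetical data structures, of what such a check does: it inspects annotated model elements and reports violations of a security requirement (here, a simplified "secure links"-style rule):

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    source: str
    target: str
    annotations: set = field(default_factory=set)  # e.g., {"secrecy", "encrypted"}

def secure_links_check(links) -> list:
    """Flag every link that requires secrecy but is not encrypted."""
    findings = []
    for link in links:
        if "secrecy" in link.annotations and "encrypted" not in link.annotations:
            findings.append(
                f"Link {link.source} -> {link.target}: secrecy required, "
                "but the channel is not encrypted"
            )
    return findings

model_links = [
    Link("WebApp", "MLService", {"secrecy"}),                 # violation
    Link("MLService", "Database", {"secrecy", "encrypted"}),  # compliant
]
for finding in secure_links_check(model_links):
    print(finding)
```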

CARiSMA’s plug-in architecture ensures extensibility, making it ideal for evolving needs. In the context of trustworthy AI assessment, a new approach is planned that automatically generates the compliance documents AI providers need to fulfill the documentation obligations defined in the EU’s AI Act. Additionally, an extension will be implemented that analyzes models of AI-enabled systems for AI-specific security issues, as identified by the Open Worldwide Application Security Project (OWASP) community.
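
Since this extension is still in planning, the sketch below can only gesture at the idea: collect AI-relevant annotations from a system model and render them into a documentation stub. Everything here is hypothetical, and the section headings are illustrative rather than the AI Act’s actual annex structure:

```python
# Illustrative section headings; the AI Act's actual documentation
# requirements (Annex IV) are more detailed.
SECTIONS = {
    "intended_purpose": "1. Intended purpose of the AI system",
    "training_data": "2. Data and data governance",
    "risk_management": "3. Risk management measures",
}

def generate_compliance_stub(model_annotations: dict) -> str:
    """Render model annotations into a technical-documentation draft."""
    lines = ["Technical documentation (draft, generated from the system model)"]
    for key, heading in SECTIONS.items():
        lines.append("")
        lines.append(heading)
        # Flag missing annotations so the provider sees which
        # documentation obligation is still open.
        lines.append(model_annotations.get(key, "TODO: not specified in model"))
    return "\n".join(lines)

annotations = {
    "intended_purpose": "Profiling algorithm for educational recommendations",
    "training_data": "Anonymized learner interaction logs",
}
print(generate_compliance_stub(annotations))
```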