Trustworthy AI Assessment - Bridging the gap between trust and transparency in data spaces
In the rapidly evolving landscape of artificial intelligence, establishing trust and transparency in data spaces has become a key concern for both providers and users. The Trustworthy AI Assessment Initiative, launched by LORIA and Affectlog, addresses this challenge through two operational platforms.
What is a trustworthy AI assessment?
Trustworthy AI assessment is a comprehensive approach that enables data space participants to establish and maintain trust in the profiling algorithms they provide, and likewise enables users to trust those providers. The initiative is implemented via two platforms: the Audit Platform for Data and Algorithms, developed by LORIA, and the Security Evaluation Platform, developed by Affectlog. Each platform provides a distinct but complementary method for auditing and assessing the trustworthiness and ethical soundness of AI applications and algorithms. Both are being delivered within a 12-month timeframe starting in the first quarter of 2024.
The main benefit of these platforms lies in their ability to enable transparent, safe and ethical assessment of AI algorithms for data stakeholders, such as educational institutions, healthcare providers and technology companies.
- Building trust: By providing a transparent assessment of AI algorithms, these platforms help to build trust between AI application providers and their users.
- Security and ethical assurance: Through comprehensive audits and security evaluations, the platforms ensure that AI applications adhere to high standards of data privacy, security, and ethical use.
- Increased transparency: By clearly communicating algorithm functions and test results, these platforms increase transparency and make it easier for non-experts to understand the functioning and effects of AI applications.
Special features of the trustworthy AI assessment platforms
Audit Platform for Data and Algorithms (LOLA by LORIA)
LOLA is characterised by its focus on security, control, transparency and extensibility. It ensures that data shared within the platform remains secure and inaccessible from the outside, gives data providers control over access rights, and provides transparent audit reports on algorithm performance. In addition, LOLA’s architecture is designed to be easily extended with new use cases, making it a versatile tool for a wide range of applications.
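To make the access-rights idea concrete, here is a minimal sketch of how a data provider might grant specific algorithms access to specific datasets, with the platform checking the grant before any audit run. The registry structure, names, and identifiers below are hypothetical illustrations, not LOLA's actual API.

```python
# Hypothetical sketch: a data provider grants an algorithm access to a
# dataset; the platform consults the registry before running an audit.

class AccessRegistry:
    """Tracks which (provider, dataset, algorithm) combinations are allowed."""

    def __init__(self):
        self._grants = set()

    def grant(self, provider, dataset, algorithm):
        self._grants.add((provider, dataset, algorithm))

    def revoke(self, provider, dataset, algorithm):
        self._grants.discard((provider, dataset, algorithm))

    def is_allowed(self, provider, dataset, algorithm):
        return (provider, dataset, algorithm) in self._grants

registry = AccessRegistry()
registry.grant("university-a", "student-records", "dropout-predictor")

print(registry.is_allowed("university-a", "student-records", "dropout-predictor"))  # True
print(registry.is_allowed("university-a", "student-records", "other-model"))        # False
```

The key design point the sketch illustrates is that access is an explicit, revocable grant held by the data provider, rather than a default the algorithm provider can assume.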
Security Evaluation Platform (Affectlog)
Affectlog’s platform leverages AI to extend the Application Security Verification Standard (ASVS), providing nuanced, scalable security and privacy assessments tailored to AI models throughout the application development lifecycle.
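An ASVS-style assessment records a verdict per requirement and rolls those verdicts up into scores. The sketch below illustrates that roll-up; the requirement IDs follow ASVS naming conventions, but the checklist contents and scoring scheme are illustrative assumptions, not Affectlog's actual schema.

```python
# Illustrative ASVS-style assessment matrix: per-requirement verdicts
# are aggregated into a per-category compliance score.
from collections import defaultdict

# (category, requirement ID, verified?) -- hypothetical sample data
checklist = [
    ("V2 Authentication",  "V2.1.1", True),
    ("V2 Authentication",  "V2.1.2", False),
    ("V8 Data Protection", "V8.1.1", True),
    ("V8 Data Protection", "V8.2.1", True),
]

def category_scores(checklist):
    """Fraction of verified requirements per category."""
    passed, total = defaultdict(int), defaultdict(int)
    for category, _req_id, ok in checklist:
        total[category] += 1
        passed[category] += ok
    return {c: passed[c] / total[c] for c in total}

scores = category_scores(checklist)
print(scores)  # {'V2 Authentication': 0.5, 'V8 Data Protection': 1.0}
```

Scores like these are what a dynamic rating scale can then map onto a snapshot of an application's security posture.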
The assessment matrix, dynamic rating scale, and secure coding checklist align with industry standards and provide a clear snapshot of an application’s security posture. The platform also uses semi-supervised learning algorithms to automate the assessment process and identify vulnerabilities, an innovative approach to AI security assessment: these algorithms leverage both labeled and unlabeled data to improve the accuracy and effectiveness of the resulting machine learning models.
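The semi-supervised idea can be sketched with a minimal self-training loop: start from a few labeled examples, predict labels for the unlabeled pool, promote only high-confidence predictions into the training set, and retrain. The nearest-centroid classifier, the data, and the confidence threshold below are all illustrative assumptions, not the platform's actual algorithm.

```python
# Minimal self-training sketch (semi-supervised learning): confident
# predictions on unlabeled data are promoted to labels and the model
# is refit. Toy 1-D data and thresholds are purely illustrative.

def centroids(points, labels):
    """Mean of the points belonging to each class label."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Nearest-centroid label plus a simple confidence margin."""
    dists = sorted((abs(x - c), y) for y, c in cents.items())
    (d0, y0), (d1, _) = dists[0], dists[1]
    return y0, d1 - d0  # bigger margin = more confident

def self_train(labeled, unlabeled, margin_threshold=1.0, rounds=5):
    points = [x for x, _ in labeled]
    labels = [y for _, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = centroids(points, labels)
        confident, rest = [], []
        for x in pool:
            y, margin = predict(cents, x)
            (confident if margin >= margin_threshold else rest).append((x, y))
        if not confident:          # nothing new to learn from
            break
        points += [x for x, _ in confident]
        labels += [y for _, y in confident]
        pool = [x for x, _ in rest]
    return centroids(points, labels)

# Two clusters around 0 and 10; only one point per class starts labeled.
labeled = [(0.0, "safe"), (10.0, "vulnerable")]
unlabeled = [0.5, 1.0, 9.0, 9.5, 5.2]
cents = self_train(labeled, unlabeled)
print(predict(cents, 9.2)[0])  # → vulnerable
```

The ambiguous point (5.2) is never promoted, which is the essential safeguard in self-training: low-confidence pseudo-labels stay out of the training set so they cannot reinforce the model's own mistakes.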
Both platforms are developed to be operated by Prometheus-X on a standardized architecture for data spaces. Partners such as the Fraunhofer Institute for Software and Systems Engineering (ISST) guide this process to improve the interoperability and data sovereignty of the actors involved. This collaborative approach not only ensures the robustness and reliability of the platforms, but also facilitates their integration and adaptation to industry standards.