Building Blocks

Trustworthy AI: Algorithm Assessment

AI is powering more and more of the services we use every day—from recommendation systems to digital learning platforms. Within the Prometheus-X ecosystem, one key question guides this evolution: Can we trust the algorithms behind these services? That’s where the Trustworthy AI: Algorithm Assessment Building Block comes in.

Why Trust Matters

At its core, Prometheus-X makes it possible to share datasets securely and in compliance with European data protection rules. These datasets often feed into AI models that shape the services companies deliver. But secure data alone is not enough—users and regulators also want assurance that the algorithms processing this data are reliable, transparent, and fair. That’s exactly what Trustworthy AI assessment provides.

This Building Block offers a toolbox for measuring and auditing AI models, helping organizations prove that their solutions are safe, unbiased, and compliant. For businesses, this kind of certification can make the difference in winning contracts or building customer trust.

A Simple Example

Take education. Imagine a group of regional academies issuing a call for tenders to buy EdTech solutions. Beyond great content, they also want guarantees: How will the AI recommending or evaluating student progress be monitored? With Trustworthy AI tools, providers can demonstrate that their algorithms are transparent, secure, and aligned with ethical standards—helping schools make confident choices.

Built for Transparency and Compliance

Education is just one example, but the principle applies across regulated sectors. The European AI Act makes it mandatory to analyze risks and meet the obligations tied to each risk level. This Building Block gives auditors practical tools to assess those risks, in line with EU standards and frameworks such as OWASP guidance on GenAI and machine-learning security and the CARiSMA model-based security analysis tool.

From Design to Deployment

What makes this approach stand out is its focus on data-driven assessments that span the entire AI lifecycle. CARiSMA detects vulnerabilities at the system design stage, before any real data is used. Once models are deployed, AffectLog provides continuous AI risk assessments, fairness and bias auditing, and explainability dashboards, ensuring that organizations can demonstrate compliance across the full model lifecycle. Finally, LOLA benchmarks algorithms in real-world conditions, making sure they deliver accurate, trustworthy results not only at launch but throughout their operational life. Together, these tools create a continuous loop of validation, giving organizations a 360° view of AI performance, trust, and safety.
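To make the idea of fairness and bias auditing concrete: such audits typically reduce to computing metrics over a model's decisions for different groups of people. The sketch below is purely illustrative (it is not AffectLog's actual API; the function name and data are hypothetical) and computes the demographic parity difference, i.e. the gap in favourable-outcome rates between two groups:

```python
# Illustrative fairness-audit metric. Hypothetical names and data;
# not the API of AffectLog or any other Prometheus-X component.

def demographic_parity_difference(outcomes, groups):
    """Absolute gap in favourable-outcome rates between groups "A" and "B".

    outcomes: list of 0/1 model decisions (1 = favourable outcome)
    groups:   list of group labels ("A" or "B"), aligned with outcomes
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])

# Hypothetical decisions for eight applicants from two groups:
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

Here group A receives a favourable outcome 75% of the time versus 25% for group B, giving a gap of 0.50; a production audit would track such metrics continuously and flag models whose gap exceeds an agreed threshold.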

Privacy and Security by Design

Equally important is the way privacy safeguards are embedded into every step. CARiSMA works in controlled environments without accessing sensitive data, AffectLog enables ephemeral data spaces and compliance-by-design monitoring within the company’s own infrastructure, and LOLA relies on secure data centers and private clusters. This ensures that the evaluation of trustworthiness never comes at the cost of data protection.

Who Benefits and What’s Next

These tools address the needs of multiple stakeholders: architects and designers aiming to build secure AI systems, MLOps teams managing deployed models, and institutions that rely on AI in sensitive areas such as education. Real-world pilots have already demonstrated their value—LOLA has been used to evaluate a learner recommendation system and is now applied in a teacher recommendation platform by Maskott, while CARiSMA has been tested on generic AI architectures to flag risks at an early stage.

Looking ahead, the goal is to make these solutions widely available. Cloud-ready, open-source versions are planned for 2025, paving the way for industrial adoption. The long-term vision is clear: these assessment tools should become standard instruments for certifying AI applications in Europe—helping ensure compliance with the AI Act and, more importantly, strengthening user trust.

In short, Trustworthy AI: Algorithm Assessment is more than just another feature of the Prometheus-X ecosystem. It’s a cornerstone of how we make AI safe, transparent, and reliable—so that when data flows, trust follows.

If you are interested in how the building block works, take a look at the video below.

What are Building Blocks? 

Prometheus-X’s “Building Blocks” are open-source, modular components designed to facilitate the creation of secure, interoperable, and human-centric data spaces, particularly in sectors like education and skills. These building blocks support both personal and non-personal data management, aligning with European data strategies and regulations such as GDPR.

Have you read the interview with development lead László Gönczy? Follow the link and read it now.