The Transparency Service for AI Model Cards defines a standardized structure for creating and managing AI model cards, covering performance metrics, model limitations, and ethical considerations, and offers a user interface and SDK for card creation and management. It also includes management functionalities such as version control and updates, enabling developers to maintain a history of model improvements. Additionally, the service provides varying levels of detail within model cards to cater to both technical experts and non-technical stakeholders.
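To make this concrete, the sketch below outlines one possible shape for such a card and its versioned registry in Python. This is a minimal illustration only: the names (ModelCard, CardRegistry, publish, summary) and all field values are hypothetical assumptions for this sketch, not the service's actual schema or SDK.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

# Hypothetical card structure; the service's real schema may differ.
@dataclass
class ModelCard:
    model_name: str
    version: str
    description: str
    performance_metrics: Dict[str, float]   # e.g. {"f1": ...}; placeholders only
    limitations: List[str]
    ethical_considerations: List[str]
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def summary(self, audience: str = "technical") -> str:
        """Render the card at a level of detail suited to the audience."""
        lines = [f"{self.model_name} (v{self.version})", self.description]
        if audience == "technical":
            # Detailed metrics are shown to technical experts only.
            lines += [f"  {k}: {v}" for k, v in self.performance_metrics.items()]
        lines += ["Limitations:"] + [f"  - {item}" for item in self.limitations]
        return "\n".join(lines)


class CardRegistry:
    """Keeps every published card version, preserving the improvement history."""

    def __init__(self) -> None:
        self._versions: Dict[str, List[ModelCard]] = {}

    def publish(self, card: ModelCard) -> None:
        self._versions.setdefault(card.model_name, []).append(card)

    def latest(self, model_name: str) -> ModelCard:
        return self._versions[model_name][-1]

    def history(self, model_name: str) -> List[ModelCard]:
        return list(self._versions[model_name])


registry = CardRegistry()
registry.publish(ModelCard(
    model_name="hate-speech-detector",
    version="1.1.0",
    description="Classifies social media posts as hate speech or not.",
    performance_metrics={"f1": 0.0, "accuracy": 0.0},  # placeholders, not results
    limitations=["Trained on English-language data only."],
    ethical_considerations=["Risk of over-flagging dialectal language."],
))
print(registry.latest("hate-speech-detector").summary(audience="non-technical"))
```

Appending each published card rather than overwriting it is what gives developers the version history mentioned above, while the audience parameter illustrates how a single card could serve both expert and non-expert readers.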
The Transparency Service for AI Model Cards connects with the Media Asset Annotation and Management (MAAM) component, as the Transparency Service will be used to document the detection models integrated into MAAM, helping non-expert users understand the results, explore model limitations, and address ethical issues related to their usage.
The Transparency Service also connects with the Personal Companion for Understanding Disinformation, as it will document the AI models used in this tool, helping media professionals understand the results, explore model limitations, and address ethical concerns. It will clarify how the model identifies debunked claims, matches claims across various web sources, detects logical fallacies, and identifies hate speech. By offering these insights, the service ensures users can responsibly interpret detection outcomes and trust the tool’s recommendations.
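Continuing with the hypothetical structure sketched above, a card for one of the Personal Companion's models, for example its debunked-claim matcher, might look like the following. Every name and value is an illustrative placeholder, not a real model, metric, or result.

```python
# Illustrative card for a hypothetical debunked-claim matching model used by
# the Personal Companion; all values below are placeholders, not real results.
claim_matcher_card = ModelCard(
    model_name="debunked-claim-matcher",
    version="0.3.0",
    description=(
        "Matches incoming claims against previously debunked claims "
        "gathered from various web sources."
    ),
    performance_metrics={"recall@10": 0.0},  # placeholder, not a measured value
    limitations=[
        "Matching quality degrades for paraphrased or translated claims.",
        "The debunk database may lag behind newly circulating claims.",
    ],
    ethical_considerations=[
        "A missed match must not be read as evidence that a claim is true.",
    ],
)
registry.publish(claim_matcher_card)
print(registry.latest("debunked-claim-matcher").summary(audience="non-technical"))
```

Documenting limitations such as the lag of the debunk database is precisely what lets media professionals interpret a "no match found" outcome responsibly rather than over-trusting it.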
Finally, the Transparency Service for AI Model Cards also links to the Trustability and Credibility Assessment, as it will document the models behind the trustability dashboard, helping media professionals understand how trustability assessments are made for information sources on the internet and on next-generation platforms such as the Fediverse.