Media Asset Annotation and Management (MAAM)

Automating Media Annotation and Detecting Synthetic Content, Including Deepfakes

The Media Asset Annotation and Management (MAAM) service is an advanced system that uses AI models for content annotation (labeling), retrieval, and integrity analysis.

It includes enhanced forgery detection, explainability features, and user profiling for disinformation analysis.

The service is tailored to professionals such as journalists and researchers, providing robust tools for managing and verifying media assets. These enhancements aim to bolster the platform's effectiveness in combating media manipulation and disinformation.

MAAM connects to the Transparency Service for AI Model Cards: it will use that service to help end-users understand the results of annotation models, including detectors of AI-generated content (synthetic media and deepfakes).

MAAM also relates to Disinformation Detection in Next-generation Social Media: multimedia content collected from next-generation social media can be imported into MAAM to detect AI-generated or manipulated content, such as synthetic images, image forgeries, and deepfakes.
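The import-and-detect workflow above can be pictured as a simple client interaction. The sketch below is purely illustrative: the `MaamClient` class, its method names, and the detector labels are assumptions for demonstration, not MAAM's actual API; a real deployment would call the service over the network rather than an in-memory stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One detector result attached to a media asset (hypothetical schema)."""
    label: str        # e.g. "synthetic-image", "deepfake", "image-forgery"
    confidence: float # detector score in [0, 1]

@dataclass
class MediaAsset:
    asset_id: str
    source: str
    annotations: list = field(default_factory=list)

class MaamClient:
    """Toy in-memory stand-in for the annotation service (hypothetical)."""

    def __init__(self):
        self._assets = {}

    def import_asset(self, asset_id: str, source: str) -> MediaAsset:
        # Register content collected from an external platform.
        asset = MediaAsset(asset_id, source)
        self._assets[asset_id] = asset
        return asset

    def annotate(self, asset_id: str) -> list:
        # Stand-in for running integrity detectors over the asset;
        # the fixed result here is a placeholder, not a real model output.
        asset = self._assets[asset_id]
        asset.annotations.append(Annotation("synthetic-image", 0.93))
        return asset.annotations

client = MaamClient()
client.import_asset("img-001", source="next-gen-social-media")
results = client.annotate("img-001")
```

The point of the sketch is the two-step flow: content is first imported with its provenance recorded, then annotation models attach labeled, scored findings that downstream users (journalists, researchers) can inspect.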