Use cases

Use Case #1: AI Tooling for Trusted Content

Use case manager: Deutsche Welle

Current problem and needs: Since 2022, news media organisations have lacked support solutions that reflect recent changes in the digital news landscape, while the potential for disinformation has grown in both scope and frequency. News media organisations now require AI-powered support tools for: 1) detecting fully AI-generated textual/image content; 2) detecting entirely AI-generated audio-visual productions (e.g., podcasts); and 3) analysing new types of disinformation in emerging social media universes (e.g., metaverses, fediverses) while observing ethical/regulatory compliance requirements.

Use case objectives and expected benefits: 1) review new tools/services in the context of near real-life journalism workflows, 2) compare the new tools/services against relevant services on the market, and 3) disseminate use case results among news/disinformation-related stakeholders and professional communities. The benefits relate to improving new AI-powered tooling for the news media verification and journalism environment and sharing the related learnings.

AI-CODE Services that will be tested and validated:
•Disinformation Detection in the Next-generation Social Media
•Trustability and Credibility Assessment for Content and Sources
•Transparency Service for AI-model Cards
•Media Asset Annotation and Management (MAAM)
•Personal Companion for Understanding Disinformation for Media Professionals

Use Case #2: Use of AI Tools to Discover Potential Foreign Influence Operations Faster

Use case manager: Debunk

Current problem and needs: Over the last three years, most threat actors have been switching to audio-visual content, which is far more complicated to analyse, and video content has a much higher impact on society than text alone. In 2022, the Debunk team carried out an analysis for NATO StratCom covering 350 hours of the most prominent Kremlin TV shows. The number of influence campaigns on TikTok, YouTube Shorts and Instagram has increased dramatically.

Use case objectives and expected benefits: Debunk analysts monitor and analyse potential information influence cases from Kremlin actors on a daily basis. Since the invasion of Ukraine, the number of information attacks, cyber-attacks, and cases of coordinated inauthentic behaviour (CIB) has increased sharply. Covert influence operations have adopted a brute-force, “smash-and-grab” approach of high-volume but very low-quality campaigns across the internet. Consistent assessment of the information environment, mapping out hostile actors, and exposing attempts at algorithmic manipulation is therefore crucial – and not only in times of war or during significant events. To detect such CIB cases, analysts aim to apply AI-CODE content-driven data processing tools to spot anomalies. The expected benefit is that these automated source-analysis tools will help Debunk analysts detect new harmful sources much faster. In addition, automatically analysing thousands of videos from social media would greatly improve both the quality of their analysis and their productivity.

AI-CODE Services that will be tested and validated:
•Trustability and Credibility Assessment for Content and Sources
•Disinformation Detection in the Next-generation Social Media
•Media Asset Annotation and Management (MAAM)

Use Case #3: Interactive Coaching for Harnessing Generative AI to Create High-quality Trusted Content

Use case manager: Euractiv, Deutsche Welle and Debunk

Current problem and needs: Generative AI technology makes combating disinformation in the media more challenging. It enables malicious actors to spread false information at scale, especially in critical areas such as climate change and political polarisation. Current mechanisms for detecting and mitigating disinformation cannot keep up with these actors' constantly evolving strategies or with the speed of technological development, both in the malicious use of generative AI and in the involuntary spread of misinformation caused by a lack of know-how for using generative AI to create high-quality, trustworthy content. Media professionals need new approaches to combat disinformation and maintain trust in the media. The proposed interactive coaching service will help media professionals better understand how generative AI works, how to use it safely and responsibly to create high-quality content and counter misinformation, and how to anticipate and respond to new developments in this technology. In doing so, the service aims to help media professionals create high-quality content that avoids spreading disinformation, ultimately contributing to a more informed and trustworthy media landscape.

Use case objectives and expected benefits: The objective of this use case is to provide media professionals with an interactive simulator (including a counter-narrative recommendation module) and an AI-based Personal Companion, along with effective coaching service offerings, to combat the spread of disinformation and misinformation while harnessing generative AI technologies. The expected benefits include enhanced competencies of media professionals in using generative AI tools safely and responsibly, improved quality of media content, countering of disinformation, and reinforced trust in media offerings.

AI-CODE Services that will be tested and validated:
•Generative AI Interactive Coaching Service & Dynamic Simulator for Media Professionals
•Personal Companion for Understanding Disinformation for Media Professionals