AI-CODE

Empowering Media Professionals
against Disinformation

Making EU digital environments safer

To foster trust in today’s evolving digital landscapes, users need cutting-edge strategies that guarantee the delivery of reliable and accurate information while actively countering online disinformation.

AI-CODE is a pioneering, three-year interdisciplinary initiative harnessing the power of artificial intelligence to create future-proof technological solutions that foster trust and integrity online, tackling the challenges of this evolving digital landscape.

Current Challenges

The media sector is experiencing rapid and unprecedented innovation, driven largely by the intense development of new technologies—particularly those based on Generative Artificial Intelligence (Gen-AI). These technologies are profoundly shaping and influencing the online environment.

AI plays a dual role, both positive and negative, in creating and spreading information in the current digital era. Its potentially disruptive impact extends to citizens, democracy, and society as a whole.

Why AI-CODE?

Large-scale disinformation campaigns pose a significant challenge for Europe. There is an urgent need for innovative AI-based solutions to ensure media freedom and pluralism, deliver credible and accurate information, combat disinformation, and support the European Democracy Shield.

Our mission

The main goal of the interdisciplinary AI-CODE project is to advance state-of-the-art research into a novel ecosystem of services designed to support media professionals in producing trusted information using Gen-AI.

AI-CODE’s innovative ecosystem consists of six distinct services: three user-driven and three content-driven. The services will be validated through three use cases addressing AI-driven media disinformation.

Services

Disinformation Detection in Next-Generation Social Media

Trustability and Credibility Assessment

Media Asset Annotation and Management (MAAM)

Generative AI Interactive Coaching Service & Dynamic Simulator

Transparency Service for AI Model Cards

Personal Companion for Understanding Disinformation

Use cases

Use case #1

AI tooling for trusted content

Use case #2

Use of AI tools to support the faster discovery of potential foreign influence operations

Use case #3

Interactive coaching for harnessing generative AI to create high-quality trusted content