ABOUT

AI-CODE is a groundbreaking initiative aimed at tackling online disinformation by assessing future trends, co-designing solutions, offering content-driven and user-centered services, and creating both business and social impact.

Combining innovative methodologies and a multidisciplinary approach, our consortium of leading experts from academia, media, industry, and ethics is dedicated to addressing this critical challenge and delivering impactful results.

Objectives

1. Assess Future Developments

Assess future developments related to the impact of generative AI and next-generation social media on online disinformation and trust.

2. Provide Content-driven Services

Provide a set of content-driven services for disinformation detection and trust assessment in next-generation social media.

3. Provide User-driven Services

Provide a set of user-driven services dedicated to media professionals to support trusted information production using emerging technologies.

4. Utilise Co-design Methods

Co-design, demonstrate and evaluate innovative approaches and technologies with end users.

5. Create Business and Social Impact

Develop an effective and sustainable business strategy, and generate business and social impact across the information ecosystem.

AI-CODE at a Glance

MAJOR OUTCOMES

A total of 6 services will be delivered from content- and user-centred perspectives, comprising 20+ tools.

3 OVERARCHING USE CASES

  • AI tooling for trusted content
  • AI tools to support the discovery of potential foreign influence operations
  • Interactive coaching for harnessing generative AI to create trusted content

TARGET AUDIENCE

  • Media professionals
  • Media companies
  • Fact checkers
  • Media literacy NGOs
  • Academia & Scientific community
  • Technological providers
  • Civil society

TECHNOLOGY DOMAINS

  • Trustworthy generative AI for text
  • Trustworthy AI for countering visual synthetic media
  • Content and source credibility and trust assessment
  • Human-AI collaboration

BENEFITS

  • Sustainable usage of AI through 6 ready-to-use services
  • Increased inclusiveness by supporting human-centered AI
  • Increased citizens’ trust in new technologies
  • Support for EU leadership in AI

BACKGROUND

AI-CODE builds on the research outcomes of the following four large ongoing projects:

COLLABORATIONS

  • Active participation in the AI Against Disinformation Cluster
  • Collaboration with key EU initiatives and funded projects
  • Engagement with the European Digital Media Observatory (EDMO) and the EDMO Hubs, and integration with the AI-on-Demand Platform

CONSORTIUM

AI-CODE brings together 14 partners from 8 EU countries, comprising:

  • 3 Media companies
  • 3 SMEs
  • 7 Academic and research partners
  • 1 SSH, legal, and ethical expert partner

Methodology

The project adopts a service-oriented methodology combined with agile practices, enabling teams to rapidly design and develop prototypes that achieve high Technology Readiness Level (TRL).

This approach allows AI-CODE to respond proactively to emerging technologies and to threats against a robust information ecosystem.

Implementation follows a dynamic, iterative process, building on three major phases: Study, Work, and Impact.

1. STUDY!

The first phase, from December 2023 to November 2024, consists of studying and analysing, together with media professionals, the impact, future evolution, and ethical, legal, and social issues of disinformation.

2. WORK!

The second phase, from December 2024 to November 2025, consists of the design and implementation of new tools/services.

3. IMPACT!

The third phase, from December 2025 to November 2026, consists of experimentation and field validation of the developed tools and methodologies within specific use cases.

The Consortium

The AI-CODE consortium is a well-balanced team of 14 partners from 8 European countries (Belgium, Greece, Germany, Italy, Lithuania, Netherlands, Slovakia, Spain), combining expertise in research, media, technological development, and legal/ethical domains.

Coordinated by DST, it includes 7 research partners (CERTH, FBK, UPM, KInIT, EIPCM, RU, UNICAL), 3 media partners (DW, DEB, EURACTIV), 3 SMEs (DST, NAL, ATC), and 1 SSH, legal, and ethical expert partner (CEPS).

This diverse group brings together complementary skills and experience to develop tools and technologies that help media professionals combat disinformation and produce trustworthy content.

Our partners